Scalable Top-N Local Outlier Detection

Yizhou Yan*, Computer Science, Worcester Polytechnic Institute, Worcester, MA, USA ([email protected])
Lei Cao*, CSAIL, Massachusetts Institute of Technology, Cambridge, MA, USA ([email protected])
Elke A. Rundensteiner, Computer Science, Worcester Polytechnic Institute, Worcester, MA, USA ([email protected])

*Authors contributed equally to this work.

ABSTRACT

The Local Outlier Factor (LOF) method, which labels all points with their respective LOF scores to indicate their status, is known to be very effective for identifying outliers in datasets with a skewed distribution. Since outliers by definition are the absolute minority in a dataset, the concept of the Top-N local outlier was proposed to discover the n points with the largest LOF scores. The detection of the Top-N local outliers is prohibitively expensive, since it requires a huge number of high-complexity k-nearest neighbor (kNN) searches. In this work, we present the first scalable Top-N local outlier detection approach, called TOLF. The key innovation of TOLF is a multi-granularity pruning strategy that quickly prunes most points from the set of potential outlier candidates without computing their exact LOF scores or even conducting any kNN search for them. Our customized density-aware indexing structure not only effectively supports the pruning strategy, but also accelerates the kNN search. Our extensive experimental evaluation on OpenStreetMap, SDSS, and TIGER datasets demonstrates the effectiveness of TOLF: it is up to 35 times faster than the state-of-the-art methods.

KEYWORDS

Local Outlier Factor; Top-N; Pruning Strategy

ACM Reference format:
Yizhou Yan, Lei Cao, and Elke A. Rundensteiner. 2017. Scalable Top-n Local Outlier Detection. In Proceedings of KDD '17, Halifax, NS, Canada, August 13-17, 2017, 10 pages. https://doi.org/10.1145/3097983.3098191

1 INTRODUCTION

Motivation. Outlier detection is an important data mining technique [3] that discovers abnormal phenomena, namely values that deviate significantly from the common occurrence of values in the data [12]. Outlier detection is critical for applications ranging from credit fraud prevention, network intrusion detection, and stock investment planning to disastrous weather forecasting.

Local Outlier Factor (LOF) [6] is one of the most popular outlier detection methods that addresses the challenges caused by data skewness. Namely, in a skewed dataset, outliers in one portion of the data may have very different characteristics compared to those in other data regions. Therefore outlier detection methods such as distance-based [14] and neighbor-based techniques [4] tend to fail, because they classify points as outliers by applying one global criterion to all data uniformly, regardless of their surrounding neighborhood. LOF instead utilizes the relative density of each point in relation to its local neighbors to detect outliers. Since the relative density automatically reflects the local data distribution, LOF is very effective at handling skewed datasets. Since real-world datasets tend to be skewed [18], LOF has been shown to be superior to other algorithms in detecting outliers for a broad range of applications [3, 16].

State-of-the-Art. The popular LOF method [6] generates an outlierness score (LOF score) for each point in the dataset. This process is rather expensive because it requires a k-nearest neighbor (kNN) search for each point. A variation of LOF called Top-n LOF was proposed [13] that returns to the users only the n points with the largest LOF scores. This leverages the insight that the points with the highest LOF scores are the most extreme outliers and thus of the greatest importance to the application. Moreover, by its very definition, applications tend to be interested in only the top worst offenders, i.e., the few points with the highest outlier scores. No analyst will ever be able to examine the LOF scores of all, or even a large percentage, of any truly big dataset.

However, as confirmed in its experiments, the Top-n LOF algorithm introduced in [13] takes thousands of seconds to handle a synthetic dataset smaller than 1M. Clearly it cannot scale to large datasets. Therefore, the development of highly scalable solutions for Top-n LOF is urgent.

Proposed TOLF Approach. In this work, we propose the first scalable Top-n LOF approach, called TOLF, that efficiently detects local outliers in large datasets. TOLF features a detection method that successfully discovers the Top-n LOF outliers without having to first compute the LOF score for each input point. It is based on a multi-granularity pruning strategy that quickly locates and thus prunes the points having no chance to be in the Top-n outlier list. The key insight of our strategy is that by partitioning the data into regularly shaped cells of a carefully designed size, a cell at the coarse granularity that contains more than k points can be immediately pruned without any further computation. If a cell cannot be pruned in its entirety, then the pruning is conducted at the individual point level within the cell's point population, based on an efficient LOF score upper bound estimation mechanism. Moreover, to fully exploit the power of the multi-granularity pruning strategy on skewed datasets, we design a data-driven mechanism that automatically adapts the generation of the cells to the data distribution. As a bonus, a data density-aware index structure is constructed for free that significantly accelerates the kNN search and LOF computation process for the points that could not be pruned. A minimal sketch of the coarse cell-level pruning idea is given after the contribution list below.

Contributions. The key contributions of this work include:
• We propose the first Top-n LOF approach scalable to large datasets.
• Our multi-granularity pruning strategy, core to TOLF, quickly excludes most of the points from the outlier candidate set without computing their LOF scores or even running any kNN search for them.
• We design a data-driven cell generation strategy as well as a density-aware indexing mechanism that together ensure the effectiveness of the pruning strategy and of the kNN search on datasets with diverse distributions.
• Experiments on real OpenStreetMap, SDSS, and TIGER datasets demonstrate that TOLF outperforms the state-of-the-art by up to 35 times in processing time.
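To make the cell-level pruning concrete, the following Python fragment is a minimal sketch, assuming two-dimensional points and a fixed uniform cell width; the function name coarse_cell_pruning and its parameters are illustrative, and the paper's data-driven cell generation and the cell size condition that justifies the prune are not reproduced here.

    from collections import defaultdict

    def coarse_cell_pruning(points, cell_width, k):
        # Illustrative sketch, not the paper's exact algorithm: partition
        # 2-D points into a uniform grid; a cell holding more than k points
        # is pruned wholesale, mirroring the coarse granularity of TOLF's
        # multi-granularity pruning strategy.
        cells = defaultdict(list)
        for (x, y) in points:
            cells[(int(x // cell_width), int(y // cell_width))].append((x, y))
        survivors = []
        for cell, members in cells.items():
            if len(members) <= k:          # only sparse cells survive
                survivors.extend(members)  # these need point-level checks
        return survivors

Only the surviving points then require the finer point-level pruning via LOF score upper bounds.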
2 PRELIMINARIES: TOP-N LOF SEMANTICS

Local Outlier Factor (LOF) [6] introduces the notion of local outliers, important for many applications. More precisely, for each point p, LOF computes the ratio between its local density and the local density around its neighboring points. This ratio, assigned to p as its local outlier factor (LOF score), denotes its degree of outlierness. LOF depends on a parameter k. For each point p in a dataset D, k is used to determine the k-distance and the neighborhood of p. The k points closest to p are the k-nearest neighbors (kNN) of p, also called the k-neighborhood of p. The k-distance of p is the distance to its kth nearest neighbor. The LOF score defined below depends on the points in its k-neighborhood.

Definition 2.4. Given the input parameters k and n, the outliers O of a dataset D are a subset O ⊆ D with cardinality n, where for any p ∈ O and any q ∈ D − O, LOF(p) ≥ LOF(q).
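As a concrete baseline realizing Definition 2.4 directly, the sketch below computes the standard LOF score of [6] for every point by brute force (reachability distances and local reachability densities follow the standard definition) and then returns the n highest scored points. This is exactly the expensive two-step process that TOLF is designed to avoid; the function names are illustrative, not from the paper.

    import numpy as np

    def lof_scores(points, k):
        # Brute-force standard LOF [6]: pairwise distances, kNN,
        # reachability distances, local reachability density (lrd),
        # and the LOF ratio of neighbor density to own density.
        pts = np.asarray(points, dtype=float)
        n = len(pts)
        dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
        np.fill_diagonal(dist, np.inf)            # exclude self from kNN
        knn = np.argsort(dist, axis=1)[:, :k]     # k nearest neighbors of each point
        k_dist = dist[np.arange(n), knn[:, -1]]   # k-distance of each point
        # reach-dist(p, o) = max(k-distance(o), d(p, o))
        reach = np.maximum(k_dist[knn], dist[np.arange(n)[:, None], knn])
        lrd = 1.0 / (reach.mean(axis=1) + 1e-12)  # local reachability density
        return lrd[knn].mean(axis=1) / lrd        # LOF score per point

    def top_n_outliers(points, k, n):
        # Naive Top-n LOF per Definition 2.4: score everything, keep n largest.
        scores = lof_scores(points, k)
        return np.argsort(scores)[::-1][:n]

Every point incurs a full kNN search here, which is precisely the cost that motivates TOLF's pruning.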
3 TOLF: TOP-N LOF DETECTION APPROACH

The key ideas of TOLF are inspired by the cutoff threshold observation shown below.

Cutoff Threshold Observation. To detect the Top-n outliers, it is not necessary to conduct a two-step process, namely first to compute the LOF score for each point and then second to sort the points based on their LOF scores. Instead the Top-n outliers can be directly acquired in one step as described below.

Since there will be at most n top outliers, during the computation process TOLF maintains an outlier candidate set C with the n highest scored outliers seen so far. The elements in C are sorted based on their scores. The score of the smallest point pn in C is used as a cutoff threshold ct. Then given a new point q, if q's score is smaller than the threshold ct, q cannot be in the Top-n list and therefore is discarded immediately. On the other hand, if q's score is larger than ct, q is inserted into C. The nth point pn is then replaced with the current smallest scored point in C, and ct is updated accordingly. As more points are processed, points with larger scores will be found. The Top-n outlier set is finalized after all points have been processed.

Our observation here is that given a new point q, to prove that it is not a Top-n outlier, we do not have to know its exact LOF score. Instead, if we can efficiently estimate that the LOF score of q will be smaller than the given threshold ct, then q is guaranteed not to be a Top-n outlier.
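The candidate set maintenance just described maps naturally onto a bounded min-heap. The sketch below uses hypothetical names, and exact scores stand in for the LOF upper bounds that TOLF actually compares against ct before any exact computation.

    import heapq

    def top_n_with_cutoff(score_stream, n):
        # Maintain the n highest scores seen so far in a min-heap C;
        # the heap minimum C[0][0] is the cutoff threshold ct.
        C = []                                        # min-heap of (score, point_id)
        for point_id, score in score_stream:
            if len(C) < n:
                heapq.heappush(C, (score, point_id))
            elif score > C[0][0]:                     # score exceeds cutoff ct
                heapq.heapreplace(C, (score, point_id))  # evict pn, ct updates implicitly
            # else: score <= ct, the point can never enter the Top-n list
        return sorted(C, reverse=True)                # Top-n, highest score first

Here score_stream yields (point_id, score) pairs; in TOLF most points never reach an exact score because a cheap upper bound already falls below ct.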
