
Distributed Local Outlier Detection in Big Data

Yizhou Yan∗
Worcester Polytechnic Institute
[email protected]

Lei Cao∗
Massachusetts Institute of Technology
[email protected]

Caitlin Kuhlman
Worcester Polytechnic Institute
[email protected]

Elke Rundensteiner
Worcester Polytechnic Institute
[email protected]

∗Authors contributed equally to this work.

ABSTRACT

In this work, we present the first distributed solution for the Local Outlier Factor (LOF) method – a popular outlier detection technique shown to be very effective for datasets with skewed distributions. As datasets increase radically in size, highly scalable LOF algorithms leveraging modern distributed infrastructures are required. This poses significant challenges due to the complexity of the LOF definition, and a lack of access to the entire dataset at any individual compute machine. Our solution features a distributed LOF pipeline framework, called DLOF. Each stage of the LOF computation is conducted in a fully distributed fashion by leveraging our invariant observation for intermediate value management. Furthermore, we propose a data assignment strategy which ensures that each machine is self-sufficient in all stages of the LOF pipeline, while minimizing the number of data replicas. Based on the convergence property derived from analyzing this strategy in the context of real world datasets, we introduce a number of data-driven optimization strategies. These strategies not only minimize the computation costs within each stage, but also eliminate unnecessary communication costs by aggressively pushing the LOF computation into the early stages of the DLOF pipeline. Our comprehensive experimental study using both real and synthetic datasets confirms the efficiency and scalability of our approach to terabyte level data.
KEYWORDS

Local outlier; Distributed processing; Big data

ACM Reference format:
Yizhou Yan, Lei Cao, Caitlin Kuhlman, and Elke Rundensteiner. 2017. Distributed Local Outlier Detection in Big Data. In Proceedings of KDD '17, Halifax, NS, Canada, August 13–17, 2017, 10 pages. https://doi.org/10.1145/3097983.3098179

1 INTRODUCTION

Motivation. Outlier detection is recognized as an important data mining technique [3]. It plays a crucial role in many wide-ranging applications including credit fraud prevention, network intrusion detection, stock investment, tactical planning, and disastrous weather forecasting. Outlier detection facilitates the discovery of abnormal phenomena that may exist in the data, namely values that deviate significantly from a common trend in the data [12].

One popular outlier detection method, the Local Outlier Factor (LOF) [7], addresses challenges caused when data is skewed and outliers may have very different characteristics across data regions. Traditional outlier detection techniques such as distance-based [14] and neighbor-based methods [5] tend to fail in such cases, because they assume that the input dataset exhibits a uniform distribution. Thus they detect outliers based on the absolute density of each point (the distance to its neighbors). LOF is able to better detect outliers in real world datasets which tend to be skewed [18], outperforming other algorithms in a broad range of applications [3, 15].

LOF is a complex multi-phase technique. It detects outliers by identifying unusual phenomena in relation to other data observations around them. Specifically, a point p is considered to be an outlier if its local density significantly differs from the local density of its k nearest neighbors (kNN). To determine this, a number of intermediate values must be computed for p and its kNN. The k-distance and reachability distance values are used to compute the local reachability density (LRD) of each point, and in turn this is used to compute an outlierness score for p, denoted as the LOF score.

Unfortunately, the centralized LOF algorithm [7] can no longer satisfy the stringent response time requirements of modern applications, especially now that the data itself is inherently becoming more distributed. Therefore, the development of distributed solutions for LOF is no longer an option, but a necessity. Nevertheless, to the best of our knowledge, no distributed LOF work has been proposed. In this work we focus on designing LOF algorithms that are inherently parallel and work in virtually any distributed computing paradigm. This helps assure ease of adoption by others on popular open-source distributed infrastructures such as MapReduce [1] and Spark [10].

Challenges. Designing an efficient distributed LOF approach is challenging because of the complex definition of LOF. In particular, we observe that the LOF score of each single point p is determined by many points, namely its k nearest neighbors (kNN), its kNN's kNN, and its kNN's kNN's kNN, in total k + k² + k³ points. Even for a modest setting such as k = 10, this is up to 10 + 100 + 1,000 = 1,110 points per input point. In a distributed system with a shared-nothing architecture, the input dataset must be partitioned and sent to different machines. To identify for each point p all the points upon which it depends for LOF computation and send them all to the same machine appears to be a sheer impossibility. It effectively requires us to solve the LOF problem before we can even begin to identify an ideal partition.

One option to reduce the complexity of distributed LOF computation is to calculate LOF in a step by step manner. That is, first only the kNN of each point are calculated and materialized. Then, the kNN values are used in two successive steps to compute the LRD and LOF values respectively. For each of these steps, the intermediate values would need to be updated for the next step of computation.
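To make this staged structure concrete, the following is a minimal centralized sketch of the three steps in Python with NumPy. It follows the standard LOF definitions from [7]; the function name lof_scores and all variable names are our own illustration, and this brute-force version is not the paper's distributed DLOF pipeline.

```python
# Illustrative three-step LOF computation (centralized, O(n^2) distances).
# Step 1 materializes kNN, step 2 derives LRD, step 3 derives LOF,
# mirroring the staged pipeline described above.
import numpy as np

def lof_scores(data: np.ndarray, k: int) -> np.ndarray:
    n = len(data)
    # Step 1: kNN of each point (brute force; a real system would partition/index).
    dist = np.linalg.norm(data[:, None, :] - data[None, :, :], axis=2)
    np.fill_diagonal(dist, np.inf)               # a point is not its own neighbor
    knn = np.argsort(dist, axis=1)[:, :k]        # indices of the k nearest neighbors
    k_distance = dist[np.arange(n), knn[:, -1]]  # distance to the k-th neighbor

    # Step 2: local reachability density (LRD) from the materialized kNN values,
    # using reach-dist_k(p, o) = max{k-distance(o), d(p, o)}.
    reach = np.maximum(k_distance[knn], dist[np.arange(n)[:, None], knn])
    lrd = k / reach.sum(axis=1)

    # Step 3: LOF score = average LRD of p's neighbors divided by LRD of p.
    return lrd[knn].mean(axis=1) / lrd

# Example: one isolated point between two dense clusters gets a high score.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.1, (50, 2)),
                 rng.normal(5, 0.1, (50, 2)),
                 [[2.5, 2.5]]])
print(lof_scores(pts, k=6)[-1])  # outlier score well above 1
```

Note how each step consumes only values materialized by the previous step (kNN indices, k-distances, then LRDs), which is the kind of intermediate-value dependency that the distributed setting must manage explicitly.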
In a centralized environment such data can be indexed in a global database table and efficiently accessed and updated. In a shared-nothing architecture common for modern distributed systems, even if it were possible to efficiently compute the kNN of each data point, no global table exists that can accommodate this huge amount of data nor support continuous access and update by many machines. Therefore an effective distributed mechanism must be designed to manage these intermediate values stored across the compute cluster.

Proposed Approach. In this work, we propose the first distributed LOF computation solution, called DLOF. As foundation, we first design a distributed LOF framework that conducts each step of the LOF computation in a highly distributed fashion. The DLOF framework is built on the critical invariant observation. Namely, in the LOF computation process of point p, although each step requires different types of intermediate values, these values are always associated with a fixed set of data points. Leveraging this observation, our support-aware assignment strategy ensures the input data and required intermediate values are co-located on the same machine in the computation pipeline.

Second, we propose a data-driven approach named DDLOF to bound the support points, or potential kNN, of the core points in each data partition P_i. DDLOF effectively minimizes the number of support points that introduce data duplication, while still guaranteeing the correctness of kNN search. DDLOF defeats the commonly accepted understanding in the literature that efficient distributed algorithms should complete the analytics task in as few rounds as possible [2]. It instead adopts a multi-round strategy that decomposes kNN search into multiple rounds, providing an opportunity to dynamically bound each partition using data-driven insights detected during the search process itself. DDLOF reduces the data duplication rate, which otherwise can exceed 20x the size of the original dataset.

Our key contributions include:
• Our data-driven strategy DDLOF effectively minimizes the number of support points using insights derived from the multi-phase search process itself.
• Driven by the convergence observation, we optimize the DDLOF solution by aggressively applying an early termination mechanism to reduce the communication and computation costs.
• Experiments demonstrate the effectiveness of our proposed optimization strategies and the scalability to terabyte level datasets.

2 PRELIMINARIES

Local Outlier Factor (LOF) [7] introduces the notion of local outliers based on the observation that different portions of a dataset may exhibit very different characteristics. It is thus often more meaningful to decide on the outlier status of a point based on the points in its neighborhood, rather than some strict global criteria. LOF depends on a single parameter k which indicates the number of nearest neighbors to consider.

Definition 2.1. The k-distance of a point p ∈ D is the distance d(p, q) between p and a point q ∈ D such that for at least k points q′ ∈ D − {p}, d(p, q′) ≤ d(p, q), and for at most k−1 points q′ ∈ D − {p}, d(p, q′) < d(p, q).

The k points closest to p are the k-nearest neighbors (kNN) of p, and the k-distance of p is the distance to its kth nearest neighbor. The "at least k / at most k−1" phrasing ensures the k-distance remains well defined even when distance ties place more than k points within the same radius.

[Figure 1: LOF definition]

Definition 2.2. Given points p, q ∈ D where q ∈ kNN(p), the reachability distance of p w.r.t. q is defined as reach-dist_k(p, q) = max{k-distance(q), d(p, q)}.
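Definitions 2.1 and 2.2 feed into the two remaining quantities named in the introduction. As defined in the original LOF paper [7], the local reachability density of p and its LOF score are:

\[
\mathrm{lrd}_k(p) = \left( \frac{1}{|kNN(p)|} \sum_{o \in kNN(p)} \text{reach-dist}_k(p, o) \right)^{-1},
\qquad
\mathrm{LOF}_k(p) = \frac{1}{|kNN(p)|} \sum_{o \in kNN(p)} \frac{\mathrm{lrd}_k(o)}{\mathrm{lrd}_k(p)}.
\]

A point whose density matches that of its neighbors thus scores near 1, while a point substantially sparser than its neighborhood scores well above 1, which is what flags it as a local outlier.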