A Review of Machine Learning and Deep Learning Techniques for Anomaly Detection in IoT Data


A Review of Machine Learning and Deep Learning Techniques for Anomaly Detection in IoT Data

Applied Sciences (Review)

Redhwan Al-amri 1,*, Raja Kumar Murugesan 1,*, Mustafa Man 2,*, Alaa Fareed Abdulateef 3, Mohammed A. Al-Sharafi 4,* and Ammar Ahmed Alkahtani 5

1 School of Computer Science and Engineering, Taylor's University, Subang Jaya 47500, Selangor, Malaysia
2 Faculty of Ocean Engineering Technology & Informatics, Universiti Malaysia Terengganu (UMT), Kuala Nerus 21030, Terengganu, Malaysia
3 School of Computing, Universiti Utara Malaysia, Sintok 06010, Kedah, Malaysia; [email protected]
4 Department of Information Systems, Azman Hashim International Business School, Universiti Teknologi Malaysia, Skudai 81310, Johor, Malaysia
5 Institute of Sustainable Energy (ISE), Universiti Tenaga Nasional (UNITEN), Kajang 43000, Malaysia; [email protected]
* Correspondence: [email protected] (R.A.-a.); [email protected] (R.K.M.); [email protected] (M.M.); alsharafi@ieee.org (M.A.A.-S.)

Citation: Al-amri, R.; Murugesan, R.K.; Man, M.; Abdulateef, A.F.; Al-Sharafi, M.A.; Alkahtani, A.A. A Review of Machine Learning and Deep Learning Techniques for Anomaly Detection in IoT Data. Appl. Sci. 2021, 11, 5320. https://doi.org/10.3390/app11125320

Academic Editor: Gabriella Tognola
Received: 4 May 2021; Accepted: 4 June 2021; Published: 8 June 2021

Abstract: Anomaly detection has gained considerable attention in the past couple of years. Emerging technologies, such as the Internet of Things (IoT), are among the most critical sources of data streams, producing massive amounts of data continuously from numerous applications. Examining these collected data to detect suspicious events can reduce functional threats and avoid unseen issues that cause downtime in applications. Due to the dynamic nature of data stream characteristics, many unresolved problems persist. In the existing literature, methods have been designed and developed to evaluate certain anomalous behaviors in IoT data stream sources. However, there is a lack of comprehensive studies that discuss all the aspects of IoT data processing. Thus, this paper attempts to fill that gap by providing a complete picture of state-of-the-art techniques for the major problems and core challenges in IoT data. The nature of the data, anomaly types, learning mode, window model, datasets, and evaluation criteria are also presented. Research challenges related to evolving data, evolving features, windowing, ensemble approaches, the nature of input data, data complexity and noise, parameter selection, data visualization, heterogeneity of data, accuracy, and large-scale and high-dimensional data are investigated. Finally, the challenges that require substantial research effort, along with future directions, are summarized.

Keywords: anomaly detection; data stream; deep learning; Internet of Things; machine learning

1. Introduction

The advent of the Internet has revolutionized communication between humans. Similarly, Internet of Things (IoT) devices are reshaping how humans perceive and interact with the physical world. By 2025, IoT systems are expected to comprise nearly 75 billion connected devices, almost triple the global population [1].
The IoT is a network of heterogeneous objects, such as smartphones, laptops, intelligent devices, and sensors, connected to the Internet through various technologies [2]. The IoT enables various sensors and devices to communicate with each other directly, without user interaction [3]. The IoT has become one of the biggest data sources in the past few years [4]. Methods such as machine learning algorithms can be used to extract meaningful information from these data. However, analyzing the enormous volume of data produced by the IoT remains a significant challenge. One technique that effectively analyses the collected data stream is anomaly detection [5,6]. These unexplained phenomena could be outliers, anomalies, cyber-attacks, novelties, exceptions, deviations, surprises, or noise [7,8], where outliers are data points that are considered out of the ordinary; such points can be detected using outlier detection methods [9]. Anomalies are a special kind of outlier that carries actionable, potentially meaningful information, and anomaly detection methods are used to detect these points [10]. Similarly, fault detection is used to detect noise, i.e., unwanted and erroneous data that has to be removed [11]. Cyber-attacks, on the other hand, are more sophisticated; they can be hidden among the data points and are hard to detect [12]. Figure 1 illustrates the differences between the above-mentioned terms.

Figure 1. Outliers in the data stream.

In many cases, real-time IoT applications generate infinitely massive data streams that impose unique limitations and obstacles on machine learning algorithms [13]. These challenges require careful algorithm design to process such data [14]. Most existing data stream algorithms are less efficient and have limited capabilities [15]. Many studies have investigated the techniques used for anomaly detection, such as [16–19], which address static data and data streams using both statistical and machine learning methods. However, these studies have not focused on evolving data streams. Some studies have focused on the detection of anomalies in data streams, such as [16]. However, previous studies have not addressed all the requirements that an algorithm must satisfy to process IoT data streams, nor the main challenges in choosing an algorithm that suits IoT data characteristics.

For anomaly detection, many algorithms can be used to detect anomalies in a data stream. A good anomaly detection algorithm should respect the following restrictions related to data streams (a minimal code sketch illustrating these constraints follows below):

• Data points are pushed out continuously, and the speed of arrival of data depends on the data source; it could be fast or slow.
• The data stream is potentially infinite, meaning there may be no end to the incoming data.
• Features and/or characteristics of the arriving data points may evolve.
• Data points are potentially one-pass, i.e., each data point can be used only once and is discarded afterwards. Thus, capturing the important data characteristics on the fly is essential.

To achieve good-quality anomaly detection, algorithms must be able to handle the following challenges before they can be used effectively:

• Ability to handle fast data—the anomaly detection algorithm must be able to handle
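The streaming constraints listed above can be made concrete with a minimal sketch. This example is not taken from the reviewed paper; it is an illustrative one-pass detector whose class name (SlidingZScoreDetector), window size, and threshold are assumptions chosen for the demo. It keeps only a bounded window of recent values, scores each incoming point exactly once, and then releases it.

```python
# Minimal sketch (illustrative only, not from the reviewed paper): a one-pass,
# bounded-memory detector for a univariate data stream. Each point is scored
# once against a fixed-size window of recent history and then discarded.
from collections import deque
import math
import random


class SlidingZScoreDetector:
    def __init__(self, window_size=200, threshold=3.5):
        self.window = deque(maxlen=window_size)  # bounded memory: old points fall out
        self.threshold = threshold

    def update(self, x):
        """Score one incoming value; return True if it looks anomalous."""
        is_anomaly = False
        if len(self.window) >= 30:  # wait for a minimal history before scoring
            mean = sum(self.window) / len(self.window)
            var = sum((v - mean) ** 2 for v in self.window) / len(self.window)
            std = math.sqrt(var) or 1e-9  # guard against a constant window
            is_anomaly = abs(x - mean) / std > self.threshold
        self.window.append(x)  # the point is seen exactly once (one-pass)
        return is_anomaly


if __name__ == "__main__":
    random.seed(0)
    detector = SlidingZScoreDetector()
    for t in range(1000):
        value = random.gauss(0.0, 1.0) + (8.0 if t in (400, 750) else 0.0)  # two injected spikes
        if detector.update(value):
            print(f"t={t}: anomaly, value={value:.2f}")
```

The machine learning and deep learning detectors surveyed in the paper replace the simple z-score with far richer models, but they must respect the same one-pass, bounded-memory shape.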
Recommended publications
  • A Review on Outlier/Anomaly Detection in Time Series Data
A review on outlier/anomaly detection in time series data
ANE BLÁZQUEZ-GARCÍA and ANGEL CONDE, Ikerlan Technology Research Centre, Basque Research and Technology Alliance (BRTA), Spain
USUE MORI, Intelligent Systems Group (ISG), Department of Computer Science and Artificial Intelligence, University of the Basque Country (UPV/EHU), Spain
JOSE A. LOZANO, Intelligent Systems Group (ISG), Department of Computer Science and Artificial Intelligence, University of the Basque Country (UPV/EHU), Spain, and Basque Center for Applied Mathematics (BCAM), Spain

Recent advances in technology have brought major breakthroughs in data collection, enabling a large amount of data to be gathered over time and thus generating time series. Mining this data has become an important task for researchers and practitioners in the past few years, including the detection of outliers or anomalies that may represent errors or events of interest. This review aims to provide a structured and comprehensive state-of-the-art on outlier detection techniques in the context of time series. To this end, a taxonomy is presented based on the main aspects that characterize an outlier detection technique.

Additional Key Words and Phrases: Outlier detection, anomaly detection, time series, data mining, taxonomy, software

1 INTRODUCTION
Recent advances in technology allow us to collect a large amount of data over time in diverse research areas. Observations that have been recorded in an orderly fashion and which are correlated in time constitute a time series. Time series data mining aims to extract all meaningful knowledge from this data, and several mining tasks (e.g., classification, clustering, forecasting, and outlier detection) have been considered in the literature [Esling and Agon 2012; Fu 2011; Ratanamahatana et al.
  • FastLOF: An Expectation-Maximization Based Local Outlier Detection Algorithm
FastLOF: An Expectation-Maximization based Local Outlier Detection Algorithm
Markus Goldstein, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, www.dfki.de, [email protected]

Introduction: Anomaly detection finds outliers in data sets which occur very rarely in the data and whose features significantly deviate from the normal data. Three different anomaly detection setups exist [4]:
1. Supervised anomaly detection (labeled training and test set)
2. Semi-supervised anomaly detection (training with normal data only and a labeled test set)
3. Unsupervised anomaly detection (one data set without any labels)
In this work, we present an unsupervised algorithm which scores instances in a given data set according to their outlierliness.

Performance improvement attempts:
• Space partitioning algorithms (e.g. search trees): require time to build the tree structure and can be slow when there are many dimensions.
• Locality Sensitive Hashing (LSH): approximates neighbors well in dense areas but performs poorly for outliers.

FastLOF idea: estimate the nearest neighbors only approximately for dense areas and compute exact neighbors for sparse areas.
• Expectation step: find some (approximately correct) neighbors and estimate LRD/LOF based on them.
• Maximization step: for promising candidates (LOF > θ), find better neighbors.

Algorithm 1 (FastLOF). Input: D = d1,...,dn, a data set with N instances; c, the chunk size (e.g. √N); θ, a threshold for LOF; k, the number of nearest neighbors. Output: LOF = lof1,...,lofn:
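As context for the excerpt above, the following hedged sketch shows the exact-LOF baseline that FastLOF is designed to approximate, using scikit-learn's LocalOutlierFactor rather than the FastLOF algorithm itself; the synthetic data and the choice of n_neighbors=20 are assumptions made for the demo.

```python
# Illustrative baseline only: exact LOF via scikit-learn, i.e. the computation
# whose neighbor search FastLOF tries to speed up. This is NOT FastLOF itself.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(42)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))     # dense "normal" cluster
outliers = rng.uniform(low=-8.0, high=8.0, size=(10, 2))   # sparse scattered points
X = np.vstack([normal, outliers])

lof = LocalOutlierFactor(n_neighbors=20)  # exact k-NN search is the expensive step
labels = lof.fit_predict(X)               # -1 = outlier, 1 = inlier
scores = -lof.negative_outlier_factor_    # larger score = more outlying

print("points flagged as outliers:", int((labels == -1).sum()))
print("five largest LOF scores   :", np.round(np.sort(scores)[-5:], 2))
```

FastLOF's contribution is to compute these neighbor lists only approximately in dense regions, refining them (the maximization step) only for candidates whose provisional LOF exceeds the threshold θ.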
  • Incremental Local Outlier Detection for Data Streams
IEEE Symposium on Computational Intelligence and Data Mining (CIDM), April 2007
Incremental Local Outlier Detection for Data Streams
Dragoljub Pokrajac (CIS Dept. and AMRC, Delaware State University, Dover, DE 19901), Aleksandar Lazarevic (United Technologies Research Center, 411 Silver Lane, MS 129-15, East Hartford, CT 06108, USA), Longin Jan Latecki (CIS Department, Temple University, Philadelphia, PA 19122)

Abstract. Outlier detection has recently become an important problem in many industrial and financial applications. This problem is further complicated by the fact that in many cases, outliers have to be detected from data streams that arrive at an enormous pace. In this paper, an incremental LOF (Local Outlier Factor) algorithm, appropriate for detecting outliers in data streams, is proposed. The proposed incremental LOF algorithm provides equivalent detection performance as the iterated static LOF algorithm (applied after insertion of each data record), while requiring significantly less computational time. In addition, the incremental LOF algorithm also dynamically updates the profiles of data points. This is a very important property, since data profiles may change over time.

... have labeled data, which can be extremely time consuming for real life applications, and (2) inability to detect new types of rare events. In contrast, unsupervised learning methods typically do not require labeled data and detect outliers as data points that are very different from the normal (majority) data based on some measure [3]. These methods are typically called outlier/anomaly detection techniques, and their success depends on the choice of similarity measures, feature selection and weighting, etc. They have the advantage of detecting new types of rare events as deviations from normal behavior, but on the other hand they suffer from a possible high rate of false
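To connect this to the data stream setting of the main review, here is a hedged sketch of stream-style LOF scoring. It is not the incremental update scheme proposed in the paper; it simply re-fits LOF on a bounded window for each arriving point, which gives similar behaviour at a much higher computational cost. The window size, k, and the synthetic stream are assumptions for the demo.

```python
# Hedged sketch: NOT the incremental LOF update rules from the paper, just a
# sliding-window re-fit that mimics stream-style LOF scoring for illustration.
import numpy as np
from collections import deque
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(7)
window = deque(maxlen=300)  # recent history kept in memory


def lof_label_for_new_point(point, history, k=20):
    """Fit LOF on the history plus the new point and return the new point's label."""
    if len(history) < k + 1:
        return 1  # not enough context yet, treat as normal
    X = np.vstack([np.asarray(history), point])
    labels = LocalOutlierFactor(n_neighbors=k).fit_predict(X)
    return labels[-1]  # -1 means the newest point is outlying


for t in range(1000):
    x = rng.normal(size=2) + (np.array([10.0, 10.0]) if t == 600 else 0.0)
    if lof_label_for_new_point(x, window) == -1:
        print(f"t={t}: outlier detected at {np.round(x, 2)}")
    window.append(x)
```

As the abstract notes, the incremental LOF algorithm avoids this repeated re-fit by dynamically updating the affected data-point profiles after each insertion instead.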
  • Image Anomaly Detection Using Normal Data Only by Latent Space Resampling
Image Anomaly Detection Using Normal Data Only by Latent Space Resampling
Lu Wang 1, Dongkai Zhang 1, Jiahao Guo 1 and Yuexing Han 1,2,*
1 School of Computer Engineering and Science, Shanghai University, Shanghai 200444, China; [email protected] (L.W.); [email protected] (D.Z.); [email protected] (J.G.)
2 Shanghai Institute for Advanced Communication and Data Science, Shanghai University, 99 Shangda Road, Shanghai 200444, China
* Correspondence: [email protected]
Received: 29 September 2020; Accepted: 30 November 2020; Published: 3 December 2020

Abstract: Detecting image anomalies automatically in industrial scenarios can improve economic efficiency, but the scarcity of anomalous samples increases the challenge of the task. Recently, the autoencoder has been widely used in image anomaly detection without using anomalous images during training. However, it is hard to determine the proper dimensionality of the latent space, and it often leads to unwanted reconstructions of the anomalous parts. To solve this problem, we propose a novel method based on the autoencoder. In this method, the latent space of the autoencoder is estimated using a discrete probability model. With the estimated probability model, the anomalous components in the latent space can be well excluded and undesirable reconstruction of the anomalous parts can be avoided. Specifically, we first adopt VQ-VAE as the reconstruction model to get a discrete latent space of normal samples. Then, PixelSail, a deep autoregressive model, is used to estimate the probability model of the discrete latent space. In the detection stage, the autoregressive model will determine the parts that deviate from the normal distribution in the input latent space.
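The key building block here, scoring by reconstruction error after training on normal data only, can be sketched without the full VQ-VAE pipeline. The sketch below is a simplification using a linear PCA "autoencoder" on synthetic feature vectors; the dimensions, component count, and data are assumptions for illustration and are not the method of the paper.

```python
# Hedged, simplified sketch of reconstruction-error anomaly scoring: a model is
# fitted on normal data only and test samples are scored by how badly they are
# reconstructed. A PCA projection stands in for the paper's autoencoder.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
normal_train = rng.normal(size=(1000, 20))        # stand-in for features of normal images
normal_test = rng.normal(size=(50, 20))
anomalous_test = rng.normal(size=(50, 20)) + 4.0  # shifted distribution = "anomalies"

model = PCA(n_components=5).fit(normal_train)     # "encoder/decoder" trained on normal data only


def reconstruction_error(model, X):
    reconstructed = model.inverse_transform(model.transform(X))
    return np.mean((X - reconstructed) ** 2, axis=1)


print("mean error on normal test data   :", reconstruction_error(model, normal_test).mean())
print("mean error on anomalous test data:", reconstruction_error(model, anomalous_test).mean())
```

The paper's contribution is to go beyond this baseline: by modelling the discrete latent space probabilistically and resampling unlikely codes, it prevents the decoder from faithfully reconstructing the anomalous parts in the first place.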
  • Unsupervised Network Anomaly Detection (Johan Mazel)
Unsupervised network anomaly detection. Johan Mazel. Networking and Internet Architecture [cs.NI]. INSA de Toulouse, 2011. English. tel-00667654.
HAL Id: tel-00667654, https://tel.archives-ouvertes.fr/tel-00667654, submitted on 8 Feb 2012.
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Title page (translated from French): Thesis submitted in view of obtaining the Doctorat de l'Université de Toulouse, awarded by the Institut National des Sciences Appliquées de Toulouse (INSA de Toulouse), presented and defended by Johan Mazel on Monday 19 December 2011. Title: Unsupervised network anomaly detection. Doctoral school: ED MITT, domain STIC: Réseaux, Télécoms, Systèmes et Architecture. Research unit: LAAS-CNRS. Thesis supervisors: Philippe Owezarski and Yann Labit. Reviewers (rapporteurs): Olivier Festor, Guy Leduc, Kensuke Fukuda. Other jury members: Christophe Chassot, Sandrine Vaton.

Acknowledgements: I would like to first thank my PhD advisors for their help, support and for letting me lead my research toward my own direction. Their inputs and comments along the stages of this thesis have been highly valuable. I want to especially thank them for having been able to dive into my work without drowning, and then provide me useful remarks.
  • Machine Learning and Extremes for Anomaly Detection — Apprentissage Automatique Et Extrêmes Pour La Détection D’Anomalies
Doctoral School ED130 "Informatique, télécommunications et électronique de Paris"
Machine Learning and Extremes for Anomaly Detection (Apprentissage Automatique et Extrêmes pour la Détection d'Anomalies)
Thesis submitted for the degree of Doctor, awarded by Télécom ParisTech, speciality "Signal and Images", presented and publicly defended by Nicolas Goix on 28 November 2016.
LTCI, CNRS, Télécom ParisTech, Université Paris-Saclay, 75013 Paris, France.
Jury: Gérard Biau, Professor, Université Pierre et Marie Curie (Examiner); Stéphane Boucheron, Professor, Université Paris Diderot (Reviewer); Stéphan Clémençon, Professor, Télécom ParisTech (Supervisor); Stéphane Girard, Research Director, Inria Grenoble Rhône-Alpes (Reviewer); Alexandre Gramfort, Associate Professor, Télécom ParisTech (Examiner); Anne Sabourin, Associate Professor, Télécom ParisTech (Co-supervisor); Jean-Philippe Vert, Research Director, Mines ParisTech (Examiner).

List of Contributions

Journal:
• Sparse Representation of Multivariate Extremes with Applications to Anomaly Detection (under review for Journal of Multivariate Analysis). Authors: Goix, Sabourin, and Clémençon.

Conferences:
• On Anomaly Ranking and Excess-Mass Curves (AISTATS 2015). Authors: Goix, Sabourin, and Clémençon.
• Learning the dependence structure of rare events: a non-asymptotic study (COLT 2015). Authors: Goix, Sabourin, and Clémençon.
• Sparse Representation of Multivariate Extremes with Applications to Anomaly Ranking (AISTATS 2016). Authors: Goix, Sabourin, and Clémençon.
• How to Evaluate the Quality of Unsupervised Anomaly Detection Algorithms? (to be submitted). Authors: Goix and Thomas.
• One-Class Splitting Criteria for Random Forests with Application to Anomaly Detection (to be submitted). Authors: Goix, Brault, Drougard and Chiapino.

Workshops:
• Sparse Representation of Multivariate Extremes with Applications to Anomaly Ranking (NIPS 2015 Workshop on Nonparametric Methods for Large Scale Representation Learning). Authors: Goix, Sabourin, and Clémençon.
  • Machine Learning for Anomaly Detection and Categorization in Multi-Cloud Environments
Machine Learning for Anomaly Detection and Categorization in Multi-Cloud Environments
Tara Salman (Washington University in St. Louis, St. Louis, USA), Deval Bhamare (Qatar University, Doha, Qatar), Aiman Erbad (Qatar University, Doha, Qatar), Raj Jain (Washington University in St. Louis, St. Louis, USA), Mohammed Samaka (Qatar University, Doha, Qatar)
[email protected]; [email protected]; [email protected]; [email protected]; [email protected]

Abstract: Cloud computing has been widely adopted by application service providers (ASPs) and enterprises to reduce both capital expenditures (CAPEX) and operational expenditures (OPEX). Applications and services previously running on private data centers are now being migrated to private or public clouds. Since most ASPs and enterprises have globally distributed user bases, their services need to be distributed across multiple clouds, spread across the globe, which can achieve better performance in terms of latency, scalability and load balancing. The shift has eventually led the research community to study multi-cloud environments. However, the widespread acceptance of such environments has been hampered by major security concerns.

... is getting popular within organizations, clients, and service providers [2]. Despite this fact, data security is a major concern for the end-users in multi-cloud environments. Protecting such environments against attacks and intrusions is a major concern in both research and industry [3]. Firewalls and other rule-based security approaches have been used extensively to provide protection against attacks in data centers and contemporary networks. However, large distributed multi-cloud environments would require a significantly large number of complicated rules to be configured, which could be costly, time-consuming and error-prone.
  • CSE601 Anomaly Detection
Anomaly Detection (Jing Gao, SUNY Buffalo), lecture slides

Anomaly Detection: anomalies are objects that are considerably dissimilar from the remainder of the data, occur relatively infrequently, and, when they do occur, can have quite dramatic consequences, quite often in a negative sense. "Mining needle in a haystack. So much hay and so little time."

Definition of Anomalies: an anomaly is a pattern in the data that does not conform to the expected behavior. Also referred to as outliers, exceptions, peculiarities, surprises, etc. Anomalies translate to significant (often critical) real-life entities, such as cyber intrusions and credit card fraud.

Real-World Anomalies: credit card fraud (an abnormally high purchase made on a credit card) and cyber intrusions (a computer virus spreading over the Internet).

Simple Example (scatter plot): N1 and N2 are regions of normal behavior; points o1 and o2 are anomalies; points in region O3 are anomalies.

Related Problems: rare class mining, chance discovery, novelty detection, exception mining, noise removal.

Key Challenges: defining a representative normal region is challenging; the boundary between normal and outlying behavior is often not precise; the exact notion of an outlier differs across application domains; limited availability of labeled data for training/validation; malicious adversaries; data might contain noise; normal behaviour keeps evolving.

Aspects of the Anomaly Detection Problem: nature of input data; availability of supervision; type of anomaly (point, contextual, structural).
  • A Two-Level Approach Based on Integration of Bagging and Voting for Outlier Detection
Research Paper: A Two-Level Approach based on Integration of Bagging and Voting for Outlier Detection
Alican Dogan (The Graduate School of Natural and Applied Sciences, Dokuz Eylul University, Izmir, Turkey) and Derya Birant (Department of Computer Engineering, Dokuz Eylul University, Izmir, Turkey)
Citation: Dogan, Alican and Derya Birant. "A two-level approach based on integration of bagging and voting for outlier detection." Journal of Data and Information Science, vol. 5, no. 2, 2020, pp. 111–135. https://doi.org/10.2478/jdis-2020-0014
Received: Dec. 13, 2019; Revised: Apr. 27, 2020; Accepted: Apr. 29, 2020

Abstract
Purpose: The main aim of this study is to build a robust novel approach that is able to detect outliers in datasets accurately. To serve this purpose, a novel approach is introduced to determine the likelihood of an object being extremely different from the general behavior of the entire dataset.
Design/methodology/approach: This paper proposes a novel two-level approach based on the integration of bagging and voting techniques for anomaly detection problems. The proposed approach, named Bagged and Voted Local Outlier Detection (BV-LOF), benefits from the Local Outlier Factor (LOF) as the base algorithm and improves its detection rate by using ensemble methods.
Findings: Several experiments have been performed on ten benchmark outlier detection datasets to demonstrate the effectiveness of the BV-LOF method. According to the results, the BV-LOF approach significantly outperformed LOF on 9 of the 10 datasets on average.
Research limitations: In the BV-LOF approach, the base algorithm is applied to each data subset multiple times with different neighborhood sizes (k) in each case and with different ensemble sizes (T).
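The abstract above describes the general recipe (LOF as a base detector, bagging over data subsets, several neighborhood sizes, and voting) but not the exact procedure, so the following is only a hedged sketch of that recipe, not the authors' BV-LOF implementation; the bag count, k values, and synthetic data are assumptions.

```python
# Hedged sketch of a bagging-and-voting LOF ensemble (not the authors' BV-LOF code):
# LOF is run with several neighborhood sizes on bootstrap samples of the data and
# each point is scored by the fraction of runs that flag it.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(size=(300, 5)), rng.uniform(-10, 10, size=(8, 5))])


def bagged_voted_lof(X, k_values=(10, 20, 35), n_bags=10):
    votes = np.zeros(len(X))
    counts = np.zeros(len(X))
    for _ in range(n_bags):
        idx = np.unique(rng.choice(len(X), size=len(X), replace=True))  # bootstrap, duplicates removed
        for k in k_values:
            labels = LocalOutlierFactor(n_neighbors=min(k, len(idx) - 1)).fit_predict(X[idx])
            votes[idx] += labels == -1
            counts[idx] += 1
    return votes / np.maximum(counts, 1)  # fraction of runs flagging each point


outlier_rate = bagged_voted_lof(X)
print("indices voted outlier by most runs:", np.where(outlier_rate > 0.5)[0])
```

Combining votes across data subsets and neighborhood sizes is what makes the ensemble less sensitive to the choice of k than a single LOF run.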
  • Anomaly Detection in Data Mining: A Review (Jagruti D. Parmar)
International Journal of Advanced Research in Computer Science and Software Engineering, Volume 7, Issue 4, April 2017, ISSN: 2277 128X. Research paper, available online at: www.ijarcsse.com

Anomaly Detection in Data Mining: A Review
Jagruti D. Parmar (M.E. IT, SVMIT, Bharuch, Gujarat, India) and Prof. Jalpa T. Patel (CSE & IT, SVMIT, Bharuch, Gujarat, India)

Abstract: Anomaly detection is a current research topic for the present generation of researchers and a key domain for upcoming data mining. The term 'data mining' refers to methods and algorithms that allow data to be extracted and analyzed in order to find rules and patterns describing the characteristic properties of the information. Data mining techniques can be applied to any type of data to learn more about hidden structures and connections. In the present world, vast amounts of data are stored and transported from one location to another, and these data are exposed to attack while stored or in transit. Though many techniques and applications are available to secure data, ambiguities exist. As a result, data mining techniques have emerged to analyze data and determine different types of attack, making data less open to attack. Anomaly detection uses data mining techniques to detect surprising or unexpected behaviour hidden within data, which indicates a growing chance of being intruded upon or attacked. This paper focuses on anomaly detection in data mining. The main goal is to detect anomalies in time series data using machine learning techniques.

Keywords: anomaly detection; data mining; time series data; machine learning techniques.
  • Anomaly Detection in Raw Audio Using Deep Autoregressive Networks
ANOMALY DETECTION IN RAW AUDIO USING DEEP AUTOREGRESSIVE NETWORKS
Ellen Rushe, Brian Mac Namee
Insight Centre for Data Analytics, University College Dublin

Abstract: Anomaly detection involves the recognition of patterns outside of what is considered normal, given a certain set of input data. This presents a unique set of challenges for machine learning, particularly if we assume a semi-supervised scenario in which anomalous patterns are unavailable at training time, meaning algorithms must rely on non-anomalous data alone. Anomaly detection in time series adds an additional level of complexity given the contextual nature of anomalies. For time series modelling, autoregressive deep learning architectures such as WaveNet have proven to be powerful generative models, specifically in the field of speech synthesis. In this paper, we propose to extend the use of this type of architecture to anomaly detection in raw audio. In experiments using multiple audio datasets we compare the performance of this approach to a baseline autoencoder model and show superior performance in almost all cases.

... difficulty in parallelizing backpropagation through time, which can slow training, especially over very long sequences. This drawback has given rise to convolutional autoregressive architectures [24]. These models are highly parallelizable in the training phase, meaning that larger receptive fields can be utilised and computation made more tractable due to effective resource utilization. In this paper we adapt WaveNet [24], a robust convolutional autoregressive model originally created for raw audio generation, for anomaly detection in audio. In experiments using multiple datasets we find that we obtain significant performance gains over deep convolutional autoencoders. The remainder of this paper proceeds as follows: Section 2 surveys recent related work on the use of deep neural networks for anomaly detection; Section 3 describes the WaveNet architecture and how it has been re-purposed for
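WaveNet itself is too heavy to reproduce here, but the underlying use of an autoregressive model for anomaly detection, predicting each sample from its recent past and flagging samples with unusually large prediction error, can be sketched with a plain linear AR model. Everything below (the AR order, the synthetic "audio", the threshold rule) is an assumption for illustration, not the paper's architecture.

```python
# Hedged sketch: a linear AR(p) model stands in for the convolutional autoregressive
# network. The model is fitted on normal signal only; test samples whose prediction
# error exceeds a threshold derived from the training errors are flagged.
import numpy as np

rng = np.random.default_rng(3)
p = 16  # autoregressive order, i.e. a tiny "receptive field"


def lag_matrix(x, p):
    """Build rows of p past samples and the corresponding next-sample targets."""
    X = np.stack([x[i:len(x) - p + i] for i in range(p)], axis=1)
    return X, x[p:]


# "Normal" training signal: a noisy sine wave.
t = np.arange(20000)
train = np.sin(0.05 * t) + 0.05 * rng.normal(size=t.size)
X_train, y_train = lag_matrix(train, p)
coef, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

train_errors = np.abs(X_train @ coef - y_train)
threshold = train_errors.mean() + 6 * train_errors.std()  # threshold from normal data only

# Test signal with an anomalous noise burst in the middle.
test = np.sin(0.05 * np.arange(4000)) + 0.05 * rng.normal(size=4000)
test[2000:2050] += rng.normal(scale=2.0, size=50)
X_test, y_test = lag_matrix(test, p)
errors = np.abs(X_test @ coef - y_test)

flagged = np.where(errors > threshold)[0] + p  # shift back to sample indices
print("first flagged sample indices:", flagged[:10])
```

In the paper this predictive role is played by WaveNet's dilated causal convolutions, which provide a far larger receptive field and a learned, nonlinear predictive distribution.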
  • Detecting Anomalies in System Log Files Using Machine Learning Techniques
Institute of Software Technology, University of Stuttgart, Universitätsstraße 38, D-70569 Stuttgart
Bachelor's Thesis 148: Detecting Anomalies in System Log Files using Machine Learning Techniques
Tim Zwietasch
Course of study: Computer Science; Examiner: Prof. Dr. Lars Grunske; Supervisor: M.Sc. Teerat Pitakrat; Commenced: 2014/05/28; Completed: 2014/10/02; CR classification: B.8.1

Abstract: Log files, which are produced in almost all larger computer systems today, contain highly valuable information about the health and behavior of the system, and thus they are consulted very often in order to analyze behavioral aspects of the system. Because of the very high number of log entries produced in some systems, it is however extremely difficult to find relevant information in these files. Computer-based log analysis techniques are therefore indispensable for the process of finding relevant data in log files. However, a big problem in finding important events in log files is that one single event without any context does not always provide enough information to detect the cause of the error, nor enough information to be detected by simple algorithms like searching with regular expressions. In this work, three different data representations for textual information are developed and evaluated, which focus on the contextual relationship between the data in the input. A new position-based anomaly detection algorithm is implemented and compared to various existing algorithms based on the three new representations. The algorithms are executed on a semantically filtered set of a labeled BlueGene/L log file and evaluated by analyzing the correlation between the labels contained in the log file and the anomalous events created by the algorithms.