
AEGR: A simple approach to gradient reversal in autoencoders for network anomaly detection

Kasra Babaei, ZhiYuan Chen, Tomas Maul

Abstract—Anomaly detection refers to the process of detecting data points that follow a different pattern from the majority of data points. Anomaly detection methods suffer from several well-known challenges that hinder their performance, such as high dimensionality. Autoencoders are unsupervised neural networks that have been used for reducing dimensionality and also for detecting network anomalies in large datasets. The performance of autoencoders deteriorates when the training set contains noise and anomalies. In this paper, a new gradient-reversal method is proposed to overcome the influence of anomalies on the training phase for the purpose of detecting network anomalies. The method differs from other approaches in that it does not require an anomaly-free training set and is based on reconstruction error. Once latent variables are extracted from the network, Local Outlier Factor is used to separate normal data points from anomalies. A simple pruning approach and data augmentation are also added to further improve performance. The experimental results show that the proposed model can outperform other well-known approaches.

Keywords—network anomaly detection, high dimensionality, autoencoders (AEs), Local Outlier Factor (LOF), gradient reversal

I. INTRODUCTION

In many real-world problems, such as detecting fraudulent activities or detecting failures in aircraft engines, there is a pressing need to identify observations that are strikingly dissimilar to the majority. In medicine, for instance, this discovery can lead to early detection of lung cancer or breast cancer. One rapidly growing area is computer networks, which play a pivotal role in our daily lives. Protecting networks from various threats such as network intruders is crucial. By using machine learning algorithms, it is possible to monitor and analyse the network and detect these threats almost instantly. However, when the number of observations that a method aims to detect is very small relative to the whole dataset, such methods start to struggle, i.e., their performance deteriorates. These observations are called anomalies (also known as outliers). The process whereby the aim is to detect data instances that deviate from the pattern of the majority of data instances is referred to as anomaly detection [?]. Depending on the application domain, anomalies can occur due to various causes such as human error, system failure, or fraudulent activities.

Traditional anomaly detection methods, e.g., density-based or distance-based methods, are found to be less effective and efficient when dealing with large-scale, high-dimensional and time-series datasets [1][2][3]. Moreover, most of these approaches often require large storage space for the intermediate data they generate [4].

In addition, parameter tuning for classical approaches such as clustering models is a difficult task on large-scale datasets [5]. Consequently, the dimensionality of such datasets should be reduced before applying anomaly detection methods. This is achieved by a pre-processing step known as dimensionality reduction, which tries to remove irrelevant data while keeping important variables and transforming the dataset into a lower dimension. Besides high dimensionality, the lack of labelled datasets is another problem in this context. Many supervised learning algorithms have been employed to detect anomalies; however, an essential requirement of a supervised learning approach is labelled data. While the availability of labelled datasets is often a problem, labelling can also be costly, time-consuming, and requires a field expert [6]. Lastly, in many application domains, such as telecommunication fraud and computer network intrusion, companies are often very conservative and protective of their data due to privacy issues and tend to resist providing their data [4].

In this paper, the aim is to tackle the aforementioned problems by employing an unsupervised approach in which high-dimensional data is compressed into a lower dimensionality, and a density-based method is then applied to the transformed data in order to detect network anomalies. In particular, a deep autoencoder (AE) network is used to create low-dimensional representations, and next, Local Outlier Factor (LOF) [7] is utilised to separate normal data instances from anomalies. Although autoencoders have shown promising results, their performance weakens when the dataset contains noise and anomalies [8]. To overcome this issue, various approaches have been proposed, such as denoising autoencoders or using a loss function that is insensitive to anomalies and noise [9].
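To make the AE-plus-LOF pipeline described above concrete, the sketch below trains a small fully connected autoencoder with a mean-squared-error loss and then applies scikit-learn's LocalOutlierFactor to the bottleneck codes. It is an illustrative sketch only, not the authors' implementation: the layer sizes, number of epochs, placeholder data and n_neighbors value are arbitrary choices, and the gradient-reversal, pruning and data-augmentation steps proposed later in the paper are omitted.

    import torch
    import torch.nn as nn
    from sklearn.neighbors import LocalOutlierFactor

    class AutoEncoder(nn.Module):
        def __init__(self, n_features, code_dim=8):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(n_features, 64), nn.ReLU(),
                nn.Linear(64, code_dim),           # bottleneck / discriminative layer
            )
            self.decoder = nn.Sequential(
                nn.Linear(code_dim, 64), nn.ReLU(),
                nn.Linear(64, n_features),         # reconstruction of the input
            )

        def forward(self, x):
            code = self.encoder(x)
            return self.decoder(code), code

    # Placeholder data: stands in for a scaled, unlabelled network dataset.
    X = torch.randn(1000, 30)

    model = AutoEncoder(n_features=X.shape[1])
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for _ in range(50):                            # plain full-batch reconstruction training
        optimiser.zero_grad()
        recon, _ = model(X)
        loss_fn(recon, X).backward()
        optimiser.step()

    with torch.no_grad():
        _, codes = model(X)                        # low-dimensional latent representation

    # LOF labels each point: -1 = anomaly, 1 = normal (density-based, local search).
    labels = LocalOutlierFactor(n_neighbors=20).fit_predict(codes.numpy())

With real data, X would be replaced by the (scaled) feature matrix of the network dataset; labels then marks the points that are isolated in the learned latent space.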
The main contributions of this paper, as will be elaborated in subsequent sections, are the following:

• First, unlike other similar approaches that require a noise- and anomaly-free training set, the proposed model is insensitive to anomalies; therefore, our approach does not need an anomaly-free training set.

• Second, the proposed method is robust on large datasets and is particularly effective on high-dimensional network datasets.

• Third, the method is capable of working with unlabelled datasets.

The proposed model is tested on 8 different datasets, including 5 well-known network datasets. The experimental results show that the proposed model improves performance significantly and is superior to stand-alone LOF and two other state-of-the-art approaches. The rest of this paper is organised as follows. Section II reviews previous work, Section III explains the proposed model, experimental results are presented and discussed in Section IV, and the paper concludes with Section V.
II. RELATED STUDIES

There are various methods for anomaly detection, and several studies have categorised them into different groups. For instance, in [10], the authors categorised them into the following four groups: distribution-based, distance-based, clustering-based, and density-based methods. Arguably, the most widely accepted categorisation is based on the type of supervision used, i.e., unsupervised, semi-supervised and supervised [1]. Except for unsupervised methods, labelled data is required for training models that are semi-supervised or supervised, and as explained earlier, labelling data brings various challenges; therefore, unsupervised models are more favourable [6].

Traditional anomaly detection methods try to search the entire dataset and detect anomalies, which results in discovering global anomalies. However, in many real-world problems the data is incomplete, and often the application requires a local neighbourhood search for identifying anomalies [11]. One of the approaches that performs such a local search is the Local Outlier Factor (LOF) [7]; however, it fails to produce an acceptable result when applied to large and high-dimensional datasets, and moreover such methods tend to require large storage capacity in this context [3][2].

To overcome the aforementioned problems, several approaches have been proposed in which high-dimensional data is transformed into a low-dimensional space while trying to avoid loss of crucial information. Next, anomaly detection is carried out on the low-dimensional data.

Recently, autoencoders have been widely employed for the purpose of reducing the dimensionality of large datasets. An autoencoder is an unsupervised neural network that tries to reconstruct its input at the output layer [12]. It is made of two main parts, namely an encoder and a decoder. The encoder tries to convert the input features into a different space with lower dimension, while the decoder attempts to reconstruct the original feature space from the output of the encoder, which is known as the bottleneck or discriminative layer. In terms of network structure, autoencoders come in various types such as under-complete, over-complete, shallow or deep. A comprehensive review can be found in [13]. Because of its ability to encode data without losing information, the autoencoder has been widely employed in the literature. Unlike other dimensionality reduction methods such as Principal Component Analysis (PCA) that use linear combinations, AEs generally perform nonlinear dimensionality reduction and, according to the literature, perform better than PCA [14]. The authors in [15] used an AE for anomaly detection and compared its performance to linear PCA and kernel PCA on both synthetic and real data, and based on the results, they concluded that the AE can extract more subtle anomalies than PCA. Another disadvantage of statistical algorithms such as PCA or Zero-phase Component Analysis (ZCA) is that as the dimensionality increases, more memory is required to calculate the covariance matrix [16].

A well-trained autoencoder should generate small reconstruction errors for each data point; however, autoencoders fail at replicating anomalies because their patterns deviate from the pattern that the majority of data instances follow. In other words, the Reconstruction Error (RE) of anomalies is well above the RE of normal data. In some studies, such as [17], the authors used threshold-based classification with the reconstruction error as a score to separate normal data from anomalies, i.e., data instances whose reconstruction error is above the threshold are identified as anomalies, while anything below the threshold is considered normal. In another similar approach, the authors in [18] generated
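As a rough illustration of the threshold-based classification described above (not the exact procedure of [17]), the fragment below reuses model and X from the sketch at the end of Section I: it computes a per-sample reconstruction error and flags everything above a chosen percentile, where the 95th percentile is an arbitrary placeholder for the threshold.

    import numpy as np

    with torch.no_grad():
        recon, _ = model(X)
        # Per-sample reconstruction error (RE): mean squared error over the features.
        re = ((recon - X) ** 2).mean(dim=1).numpy()

    threshold = np.percentile(re, 95)   # placeholder cut-off: flag the top 5% as anomalies
    is_anomaly = re > threshold         # True where the RE exceeds the threshold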