Anomaly Detection by Combining Decision Trees and Parametric Densities

Matthias Reif, Markus Goldstein, Armin Stahl
German Research Center for Artificial Intelligence (DFKI)
[email protected]

Thomas M. Breuel
Technical University of Kaiserslautern, Department of Computer Science
[email protected]

Abstract

In this paper a modified decision tree algorithm for anomaly detection is presented. During the tree building process, densities for the outlier class are used directly in the split point determination algorithm. No artificial counter-examples have to be sampled from the unknown class, which yields more precise decision boundaries and a deterministic classification result. Furthermore, the prior of the outlier class can be used to adjust the sensitivity of the anomaly detector. The proposed method combines the advantages of classification trees with the benefit of a more accurate representation of the outliers. For evaluation, we compare our approach with other state-of-the-art anomaly detection algorithms on four standard data sets including the KDD-Cup 99. The results show that the proposed method performs as well as more complex approaches and is even superior on three out of four data sets.

1 Introduction

Anomaly and outlier detection have become an important task in pattern recognition for a variety of applications like network intrusion detection, fraud detection, medical monitoring or manufacturing. Outliers are data records which are very different from the normal data but occur only rarely. If they are completely missing from the training data, the problem is also called one-class classification or novelty detection.

Many different approaches have been proposed for solving the outlier detection problem: statistical methods, distance and model based approaches as well as profiling methods. Surveys of the diverse categories and methodologies are given in [5], [8] and [9].

Generating artificial counter-examples from the unknown class is also used for outlier detection [1][2][4]. The advantage of this method is that a standard generic classifier can be trained to separate the regular class (or classes) from the anomalous class. These methods perform very well, but sampling the anomalous class can be a tricky task, especially when dealing with high dimensionality. Abe et al. [1] apply Active Learning and Lazarevic et al. [7] apply Feature Bagging to address this problem using a decision tree as classifier.

In this paper we propose an extension of standard decision tree algorithms like C4.5 [10] or CART [3] for anomaly detection that can deal with continuous as well as with symbolic features. With the presented method, there is no need to generate artificial samples of the missing class for the training set. Instead, it uses a parametric distribution of the outlier class during the determination of the split points. This avoids problems like the trade-off between the precision of sampling and the prior of the classes. Furthermore, split points are more accurate and training is faster due to fewer samples.

1.1 Decision Trees

Decision trees have several advantages compared to other classification methods, which make them more suitable for outlier detection. In particular, they have an easily interpretable structure and they are also less susceptible to the curse of dimensionality [5].

In a decision tree, every node divides the feature space of its parent node into two or more disjoint subspaces, and the root node splits the complete feature space. The tree building process selects at each node the split point which divides the given subspace and the training data best according to some impurity measure.

One example impurity measure of a node t and its range in feature space, over a set of classes C, uses the number of instances N_c(t) of a class c and the total number of instances N(t) at node t in the training data [3]:

i(t) = -\sum_{c \in C} \frac{N_c(t)}{N(t)} \log_2 \frac{N_c(t)}{N(t)}    (1)

The best split at a node is defined as the split with the highest decrease of impurity of its child nodes [3]. The decrease of impurity of a split s at node t with child nodes t_L and t_R is calculated for binary trees as follows:

\Delta i(s, t) = i(t) - \frac{N(t_L)}{N(t)} i(t_L) - \frac{N(t_R)}{N(t)} i(t_R)    (2)

Since we avoid sampling the outlier class c_A, we have to use a density distribution for estimating N_{c_A} instead. In general, it is possible to use any distribution, e.g. if there is some background knowledge available such as dependencies between features. In the following, we use a uniform distribution for the sake of generality.
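To make Equations (1) and (2) concrete, the following Python sketch computes the entropy impurity of a node and the impurity decrease of a binary split from plain class counts. The function names and the dictionary-based interface are our own illustrative choices; note that the anomaly count may be a non-integer expectation, as produced later by Equation (7).

```python
import math

def impurity(counts):
    """Entropy impurity i(t) of a node, Eq. (1); counts maps class -> N_c(t)."""
    total = sum(counts.values())
    if total == 0:
        return 0.0
    ent = 0.0
    for n_c in counts.values():
        if n_c > 0:
            p = n_c / total
            ent -= p * math.log2(p)
    return ent

def impurity_decrease(parent, left, right):
    """Decrease of impurity Delta i(s, t) of a binary split s at node t, Eq. (2)."""
    n = sum(parent.values())
    n_left = sum(left.values())
    n_right = sum(right.values())
    return (impurity(parent)
            - n_left / n * impurity(left)
            - n_right / n * impurity(right))

# Example: the anomaly counts are expectations (floats), not sampled data points.
print(impurity_decrease({"normal": 90, "anomaly": 10.0},
                        {"normal": 80, "anomaly": 1.5},
                        {"normal": 10, "anomaly": 8.5}))
```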
2 Uniform Outlier Distributions

In this section we introduce the two uniform distributions used for the outlier class density estimation, one for each of the two types of attributes: discrete and continuous.

2.1 Discrete Uniform Distribution

A discrete uniform distribution is defined over a finite set S of possible values, and all of these values are equally probable. If there are |S| possible values, the probability of a single one being picked is 1/|S|. Hence, the probability that a feature takes a value out of a set M \subset S is:

P(X \in M) = \frac{|M|}{|S|}    (3)

This requires that the set of possible symbols is known at training time. Typically it can be derived directly from the training set, since each symbol occurs at least once in it. However, if |S| is too small (a new value of a discrete feature occurs for the first time at classification time) and consequently P(X \in S) < 1, the corresponding data point can be classified as novel with high confidence. In this case the probabilities were not estimated completely correctly, but with sufficient accuracy for our task.

2.2 Continuous Uniform Distribution

A continuous uniform distribution defines a constant probability density over a finite interval [r^{min}, r^{max}] with r^{min} < r^{max}:

f(x) = \begin{cases} \frac{1}{r^{max} - r^{min}} & x \in [r^{min}, r^{max}] \\ 0 & \text{otherwise} \end{cases}    (4)

Since decision trees use intervals within the continuous feature spaces, we are interested in the probability that a data point is located inside a specific interval [a, b] \subset [r^{min}, r^{max}]:

P(X \in [a, b]) = \int_a^b f(x)\,dx = \frac{b - a}{r^{max} - r^{min}}    (5)

Like the number of symbols |S| for symbolic features, r^{min} and r^{max} have to be defined separately for every continuous attribute before training, such that P(X \in [r^{min}, r^{max}]) = 1. In general it is a good idea that r^{min} and r^{max} do not coincide with the smallest and largest values of the training set, since we want the uniform distribution to have a "border" around the data in order to find optimal split points. There are many conceivable approaches for defining the border values, such as adding to (respectively subtracting from) the largest/smallest value occurring in the training set a fixed rate [11], e.g. 10% of the difference between both. A more convincing method is to use a value derived from the actual distribution of the given data, such as a multiple of the variance.
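The sketch below illustrates Equations (3) and (5) together with the two ways of choosing the interval borders discussed above. It is only an illustration under our own naming; the variance-based variant is realized here as a multiple of the standard deviation, which is one possible reading of that suggestion.

```python
import statistics

def discrete_uniform_prob(m_subset, s_all):
    """P(X in M) = |M| / |S| for a symbolic feature, Eq. (3)."""
    return len(set(m_subset)) / len(set(s_all))

def continuous_uniform_prob(a, b, r_min, r_max):
    """P(X in [a, b]) = (b - a) / (r_max - r_min) for a continuous feature, Eq. (5)."""
    return (b - a) / (r_max - r_min)

def borders_fixed_rate(values, rate=0.1):
    """Extend the observed range by a fixed rate, e.g. 10% of its width [11]."""
    lo, hi = min(values), max(values)
    margin = rate * (hi - lo)
    return lo - margin, hi + margin

def borders_spread(values, k=3.0):
    """Derive the border from the data distribution, e.g. k standard deviations."""
    lo, hi = min(values), max(values)
    spread = statistics.stdev(values)
    return lo - k * spread, hi + k * spread
```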
2.3 Joint Distribution

For testing a split, a decision tree algorithm needs to know the number of instances which fall into the resulting multidimensional subspaces. In our approach, this number for the class representing the anomaly is determined based on a uniform probability distribution and not on previously generated data points.

We can compute the probability that a data point of a uniform distribution over the limited space lies inside such a subspace Q, which is defined by its intervals [a_i, b_i] \subset [r_i^{min}, r_i^{max}] for all k_c continuous features and its subsets M_j \subset S_j of all k_s symbolic features. Under the assumption that the features are independent variables, this is the joint probability of the single probabilities from Equations (3) and (5):

P(X \in Q) = \prod_{i=1}^{k_c} P(X_i \in [a_i, b_i]) \prod_{j=k_c+1}^{k_c+k_s} P(X_j \in M_j)    (6)

Having N_n regular training samples in total, we can calculate the expected number of instances N_{c_A}(t) of the anomaly class c_A within the subspace Q_t of node t, to be used in Equation (1), by using its prior probability P(c_A):

N_{c_A}(t) = N_{c_A} P(X \in Q_t) = \frac{P(c_A)}{1 - P(c_A)} N_n P(X \in Q_t)    (7)

The prior of the anomalous class P(c_A) mainly controls the trade-off between the detection rate (correctly classifying outliers as novel) and the false alarm rate (wrongly classifying normal instances as novel). Varying this parameter results in the ROC curve as illustrated in Figure 2.

3 Methodology

Wherever the number of instances of the unknown class is needed in the decision tree algorithm, we can now use Equation (7) to estimate it, without the need for artificially generated samples. There is also no need for major changes in the procedure of finding the next best split, which makes the proposed method applicable to arbitrary decision tree algorithms. Efficient techniques to calculate the quality of a split, as proposed in [12], are also usable. However, there are several issues when dealing with the proposed method, which we consider in the following subsections.

Figure 1. Different decision boundaries on a synthetic 2D example obtained by varying the outlier prior: (a) lower P(c_A), (b) higher P(c_A).

3.1 Suitable Split Points

A decision tree algorithm usually checks a lot of potential split points in order to find the one which divides the training data best.
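As a minimal sketch of how Equation (7) could supply the anomaly count wherever the tree-building procedure of Section 3 needs it, the following code computes the joint subspace probability of Equation (6) under the independence assumption and scales it by the outlier prior. The subspace representation and the function names are assumptions of ours, not the authors' implementation.

```python
def subspace_probability(intervals, borders, symbol_subsets, symbol_sets):
    """P(X in Q), Eq. (6): product of the per-feature uniform probabilities.

    intervals:      (a_i, b_i) per continuous feature of the subspace Q
    borders:        (r_min, r_max) per continuous feature
    symbol_subsets: set M_j per symbolic feature of the subspace Q
    symbol_sets:    full symbol set S_j per symbolic feature
    """
    p = 1.0
    for (a, b), (r_min, r_max) in zip(intervals, borders):
        p *= (b - a) / (r_max - r_min)   # Eq. (5)
    for m_j, s_j in zip(symbol_subsets, symbol_sets):
        p *= len(m_j) / len(s_j)         # Eq. (3)
    return p

def expected_anomaly_count(p_subspace, n_normal, prior_anomaly):
    """N_cA(t), Eq. (7): expected anomaly instances in the subspace of node t."""
    n_anomaly_total = prior_anomaly / (1.0 - prior_anomaly) * n_normal
    return n_anomaly_total * p_subspace

# Example: one continuous feature restricted to [0, 0.3] and one symbolic feature
# restricted to two of its three possible symbols.
p_q = subspace_probability([(0.0, 0.3)], [(0.0, 1.0)], [{"a", "b"}], [{"a", "b", "c"}])
print(expected_anomaly_count(p_q, n_normal=1000, prior_anomaly=0.05))
```

The resulting count can then be plugged into the impurity of Equations (1) and (2) alongside the observed counts of the regular class at the node.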
