
Comparing Bayesian Network Classifiers

Jie Cheng    Russell Greiner
Department of Computing Science
University of Alberta
Edmonton, Alberta T6G 2H1, Canada
Email: {jcheng, greiner}@cs.ualberta.ca

Abstract

In this paper, we empirically evaluate algorithms for learning four types of Bayesian network (BN) classifiers - Naive-Bayes, tree augmented Naive-Bayes, BN augmented Naive-Bayes and general BNs, where the latter two are learned using two variants of a conditional-independence (CI) based BN-learning algorithm. Experimental results show that the obtained classifiers, learned using the CI-based algorithms, are competitive with (or superior to) the best known classifiers, based on both Bayesian networks and other formalisms; and that the computational time for learning and using these classifiers is relatively small. Moreover, these results also suggest a way to learn yet more effective classifiers; we demonstrate empirically that this new algorithm does work as expected. Collectively, these results argue that BN classifiers deserve more attention in the machine learning and data mining communities.

1 INTRODUCTION

Many tasks - including fault diagnosis, pattern recognition and forecasting - can be viewed as classification, as each requires identifying the class labels for instances, each typically described by a set of features (attributes).

Learning accurate classifiers from pre-classified data is a very active research topic in machine learning and data mining. In the past two decades, many algorithms have been developed for learning decision-tree and neural-network classifiers. While Bayesian networks (BNs) (Pearl 1988) are powerful tools for knowledge representation and inference under conditions of uncertainty, they were not considered as classifiers until the discovery that Naive-Bayes, a very simple kind of BN that assumes the attributes are independent given the classification node, is surprisingly effective (Langley et al. 1992).

This paper further explores this role of BNs. Section 2 provides the framework of our research, describing standard approaches to learning simple Bayesian networks, then motivating our exploration of more effective extensions. Section 3 defines four classes of BNs - Naive-Bayes, tree augmented Naive-Bayes (TANs), BN augmented Naive-Bayes (BANs) and general BNs (GBNs) - and describes methods for learning each. We implemented these learners to test their effectiveness. Section 4 presents and analyzes the experimental results, over a set of standard learning problems. We also propose a new algorithm for learning yet better BN classifiers, and present empirical results that support this claim.

2 FRAMEWORK

2.1 BAYESIAN NETWORKS

A Bayesian network B = <N, A, Θ> is a directed acyclic graph (DAG) <N, A> with a conditional probability distribution (CP table) for each node, collectively represented by Θ. Each node n ∈ N represents a domain variable, and each arc a ∈ A between nodes represents a probabilistic dependency (see Pearl 1988). In general, a BN can be used to compute the conditional probability of one node, given values assigned to the other nodes; hence, a BN can be used as a classifier that gives the posterior probability distribution of the classification node given the values of the other attributes. When learning Bayesian networks from datasets, we use nodes to represent dataset attributes.

We will later use the idea of a Markov boundary of a node n in a BN, where n's Markov boundary is a subset of nodes that "shields" n from being affected by any node outside the boundary. One of n's Markov boundaries is its Markov blanket, which is the union of n's parents, n's children, and the parents of n's children. When using a BN classifier on complete data, the Markov blanket of the classification node forms a natural feature selection, as all features outside the Markov blanket can be safely deleted from the BN. This can often produce a much smaller BN without compromising the classification accuracy.
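This Markov-blanket feature selection is easy to carry out once a structure is available. The following minimal Python sketch (an illustration, not code from the paper; the edge-list representation and node names are assumptions) collects the parents, children, and the children's other parents of a given node.

    # Minimal sketch (not from the paper): compute the Markov blanket of a node
    # in a DAG given as a list of directed (parent, child) edges.
    def markov_blanket(edges, node):
        parents = {p for (p, c) in edges if c == node}
        children = {c for (p, c) in edges if p == node}
        spouses = {p for (p, c) in edges if c in children and p != node}
        return parents | children | spouses

    # Hypothetical example: class node "C" with children "A1" and "A2";
    # "A3" is another parent of "A2"; "A4" lies outside the blanket and
    # could be dropped before classification.
    edges = [("C", "A1"), ("C", "A2"), ("A3", "A2"), ("A4", "A3")]
    print(markov_blanket(edges, "C"))   # {'A1', 'A2', 'A3'}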
2.2 LEARNING BNs

The two major tasks in learning a BN are: learning the graphical structure, and then learning the parameters (CP table entries) for that structure. As it is trivial to learn the parameters for a given structure that are optimal for a given corpus of complete data - simply use the empirical conditional frequencies from the data (Cooper and Herskovits 1992) - we will focus on learning the BN structure.

There are two ways to view a BN, each suggesting a particular approach to learning. First, a BN is a structure that encodes the joint distribution of the attributes. This suggests that the best BN is the one that best fits the data, and leads to the scoring-based learning algorithms, which seek a structure that maximizes a Bayesian, MDL or Kullback-Leibler (KL) entropy scoring function (Heckerman 1995; Cooper and Herskovits 1992). Second, the BN structure encodes a group of conditional independence relationships among the nodes, according to the concept of d-separation (Pearl 1988). This suggests learning the BN structure by identifying the conditional independence relationships among the nodes. Using statistical tests (such as the chi-squared test and the mutual information test), we can find the conditional independence relationships among the attributes and use these relationships as constraints to construct a BN. These algorithms are referred to as CI-based algorithms or constraint-based algorithms (Spirtes and Glymour 1996; Cheng et al. 1997a).

Heckerman et al. (1997) compare these two general approaches to learning, and show that the scoring-based methods often have certain advantages over the CI-based methods, in terms of modeling a distribution. However, Friedman et al. (1997) show theoretically that the general scoring-based methods may result in poor classifiers, since a good classifier maximizes a different function - viz., classification accuracy. Greiner et al. (1997) reach the same conclusion, albeit via a different analysis. Moreover, the scoring-based methods are often less efficient in practice.

This paper further demonstrates that the CI-based learning algorithms do not suffer from these drawbacks when learning BN classifiers.

We will later use the famous Chow and Liu (1968) algorithm for learning tree-like BNs from complete data, which provably finds the optimal (in terms of log-likelihood score) tree-structured network, using only O(N^2) pair-wise dependency calculations - n.b., without any heuristic search. Interestingly, this algorithm has the features of both the scoring-based methods and the CI-based methods: although it does find a structure with the best score, it does this by analyzing the pair-wise dependencies. Unfortunately, for unrestricted BN learning, no such connection can be found between the scoring-based and CI-based methods.

2.3 SIMPLE BN CLASSIFIERS

2.3.1 Naive-Bayes

A Naive-Bayes BN, as discussed in (Duda and Hart, 1973), is a simple structure that has the classification node as the parent node of all other nodes (see Figure 1). No other connections are allowed in a Naive-Bayes structure.

[Figure 1: A simple Naive-Bayes structure]

Naive-Bayes has been used as an effective classifier for many years. It has two advantages over many other classifiers. First, it is easy to construct, as the structure is given a priori (and hence no structure learning procedure is required). Second, the classification process is very efficient. Both advantages are due to its assumption that all the features are independent of each other given the classification node. Although this independence assumption is obviously problematic, Naive-Bayes has surprisingly outperformed many sophisticated classifiers over a large number of datasets, especially where the features are not strongly correlated (Langley et al. 1992).
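To make the classification process concrete, the sketch below (an illustration of the standard Naive-Bayes computation, not the paper's implementation; the dataset format and smoothing choice are assumptions) estimates the CP tables as empirical conditional frequencies from complete data, as in Section 2.2, and labels an instance with the class c maximizing P(c) * prod_i P(a_i | c), which is proportional to the posterior of the classification node under the independence assumption.

    # Minimal Naive-Bayes sketch (illustrative only):
    # each training row is (class_label, {attribute_name: value}).
    import math
    from collections import Counter, defaultdict

    def train_naive_bayes(rows):
        class_counts = Counter(c for c, _ in rows)
        cond_counts = defaultdict(Counter)        # (attribute, class) -> Counter of values
        for c, attrs in rows:
            for a, v in attrs.items():
                cond_counts[(a, c)][v] += 1
        return class_counts, cond_counts

    def classify(class_counts, cond_counts, attrs):
        total = sum(class_counts.values())
        best_class, best_score = None, float("-inf")
        for c, n_c in class_counts.items():
            score = math.log(n_c / total)         # log P(c)
            for a, v in attrs.items():            # + sum_i log P(a_i = v_i | c), add-one smoothed
                counts = cond_counts[(a, c)]
                score += math.log((counts[v] + 1.0) / (sum(counts.values()) + len(counts) + 1.0))
            if score > best_score:
                best_class, best_score = c, score
        return best_class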
2.3.2 Improving on Naive-Bayes

In recent years, much effort has focused on improving Naive-Bayesian classifiers, following two general approaches: selecting feature subsets and relaxing independence assumptions.

Selecting Feature Subsets

Langley and Sage (1994) use forward selection to find a good subset of attributes, then use this subset to construct a selective Bayesian classifier (i.e., a Naive-Bayesian classifier over only these variables). Kohavi and John (1997) use best-first search, based on accuracy estimates, to find a subset of attributes. Their algorithm can wrap around any classifier, including decision-tree classifiers, the Naive-Bayesian classifier, and so on. Pazzani's algorithm (Pazzani 1995) performs feature joining as well as feature selection to improve the Naive-Bayesian classifier.

Relaxing Independence Assumption

Kononenko's algorithm (Kononenko 1991) partitions the attributes into disjoint groups and assumes independence only between attributes of different groups. Friedman et al. (1997) studied TAN, which allows tree-like structures to be used to represent dependencies among attributes (see Section 3.2). Their experiments show that TAN outperforms Naive-Bayes, while maintaining computational simplicity in learning. Their paper ...

In this paper, we investigate these questions using an empirical study. We use two variants of a general BN-learning algorithm (based on conditional-independence tests) to learn GBNs and BANs. We empirically compared these classifiers with TAN and Naive-Bayes using eight datasets from the UCI Machine Learning Repository (Murphy and Aha, 1995). Our results motivate a new type of classifier (a wrapper of the GBN and the BAN), which is also empirically evaluated.
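The tree-structured dependencies that TAN exploits rest on the Chow and Liu (1968) algorithm mentioned in Section 2.2: estimate the pairwise mutual information between attributes and keep the edges of a maximum-weight spanning tree. The sketch below is only an illustration under assumed data structures, not the implementation evaluated in this paper; TAN additionally conditions the mutual information on the classification node and directs the tree edges. The same mutual information statistic, thresholded, is one of the dependency measures used by the CI-based tests of Section 2.2.

    # Illustrative Chow-Liu-style sketch (not the paper's implementation):
    # build a maximum-weight spanning tree over attributes, weighted by
    # empirical pairwise mutual information estimated from complete data.
    import math
    from collections import Counter
    from itertools import combinations

    def mutual_information(xs, ys):
        n = len(xs)
        px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
        return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
                   for (x, y), c in pxy.items())

    def chow_liu_edges(columns):                  # columns: {attribute: list of values}
        names = list(columns)
        weights = {(a, b): mutual_information(columns[a], columns[b])
                   for a, b in combinations(names, 2)}
        # Greedy maximum-weight spanning tree (Kruskal-style, with union-find).
        parent = {a: a for a in names}
        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]
                a = parent[a]
            return a
        tree = []
        for (a, b), w in sorted(weights.items(), key=lambda kv: -kv[1]):
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[ra] = rb
                tree.append((a, b))
        return tree                               # N-1 undirected edges over the attributes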