
ESTYLF08, Cuencas Mineras (Mieres - Langreo), 17 - 19 de Septiembre de 2008

OR gate Bayesian networks for text classification: A discriminative alternative approach to multinomial naive Bayes

Luis M. de Campos, Juan M. Fernández-Luna, Juan F. Huete, Alfonso E. Romero
Departamento de Ciencias de la Computación e Inteligencia Artificial
E.T.S.I. Informática y de Telecomunicación, Universidad de Granada, 18071 Granada, Spain
{lci,jmfluna,jhg,aeromero}@decsai.ugr.es

Abstract

We propose a simple Bayesian network-based text classifier, which may be considered as a discriminative counterpart of the generative multinomial naive Bayes classifier. The method relies on the use of a fixed network topology, with the arcs going from term nodes to class nodes, and on a network parametrization based on noisy OR gates. Comparative experiments of the proposed method with the naive Bayes and Rocchio algorithms are carried out using three standard document collections.

Keywords: Bayesian network, noisy OR gate, multinomial naive Bayes, text classification.

1 Introduction: Probabilistic Methods for Text Classification

The classical approach to probabilistic text classification may be stated as follows: we have a class variable $C$ taking values in the set $\{c_1, c_2, \ldots, c_n\}$ and, given a document $d_j$ to be classified (described by a set of attribute variables, which usually are the terms appearing in the document), the posterior probability of each class, $p(c_i|d_j)$, is computed in some way, and the document is assigned to the class having the greatest posterior probability. Learning methods for probabilistic classifiers are often characterized as being either generative or discriminative. Generative methods estimate the joint probabilities of all the variables, $p(c_i, d_j)$, and $p(c_i|d_j)$ is computed according to the Bayes formula:

$$p(c_i|d_j) = \frac{p(c_i, d_j)}{p(d_j)} = \frac{p(c_i)\,p(d_j|c_i)}{p(d_j)} \propto p(c_i)\,p(d_j|c_i). \quad (1)$$

The problem in this case is how to estimate the probabilities $p(c_i)$ and $p(d_j|c_i)$. In contrast, discriminative probabilistic classifiers model the posterior probabilities $p(c_i|d_j)$ directly.

The naive Bayes classifier is the simplest generative probabilistic classification model which, despite its strong and often unrealistic assumptions, frequently performs surprisingly well. It assumes that all the attribute variables are conditionally independent of each other given the class variable. In fact, the naive Bayes classifier can be considered as a Bayesian network-based classifier [1], where the network structure is fixed and contains only arcs from the class variable to the attribute variables, as shown in Figure 1.

[Figure 1: Network structure of the naive Bayes classifier, with arcs from the class node $C_i$ to the term nodes $T_1, \ldots, T_{12}$.]

In this paper we propose another simple Bayesian network-based classifier, which may be considered as a discriminative counterpart of naive Bayes, in the following senses: (1) it is based on a type of Bayesian network similar to that of naive Bayes, but with the arcs in the network going in the opposite direction; (2) it requires the same set of simple sufficient statistics as naive Bayes, so that the complexity of the training step in both methods is the same, namely linear in the number of attribute variables; the complexity of the classification step is also identical.

The rest of the paper is organized in the following way: in Sections 2 and 3 we describe the two probabilistic text classifiers we are considering, naive Bayes and the proposed new model, respectively. Section 4 is focused on the experimental results.
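As a minimal illustration of the generative decision rule of Eq. (1) (not from the paper), the following Python sketch assigns a document to the class maximizing $p(c_i)\,p(d_j|c_i)$; the callables `log_prior` and `log_likelihood` are hypothetical placeholders to be supplied by a concrete model, such as the multinomial naive Bayes of Section 2:

```python
def classify(doc, classes, log_prior, log_likelihood):
    """Generative decision rule of Eq. (1): pick the class c_i
    maximizing p(c_i) * p(d_j | c_i).  The computation is done in
    log space to avoid numerical underflow on long documents."""
    return max(classes,
               key=lambda c: log_prior(c) + log_likelihood(doc, c))
```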
Finally, Section 5 contains the concluding remarks and some proposals for future work.

2 The Multinomial Naive Bayes Classifier

In the context of text classification there exist two different models called naive Bayes: the multivariate Bernoulli naive Bayes model [2, 4, 8] and the multinomial naive Bayes model [5, 6]. In this paper we shall only consider the multinomial model. In this model a document is an ordered sequence of words or terms drawn from the same vocabulary, and the naive Bayes assumption here means that the occurrences of the terms in a document are conditionally independent given the class, and the positions of these terms in the document are also independent given the class¹. Thus, each document $d_j$ is drawn from a multinomial distribution of words, with as many independent trials as the length of $d_j$. Then,

$$p(d_j|c_i) = p(|d_j|)\,\frac{|d_j|!}{\prod_{t_k\in d_j} n_{jk}!}\,\prod_{t_k\in d_j} p(t_k|c_i)^{n_{jk}}, \quad (2)$$

where $t_k$ are the distinct words in $d_j$, $n_{jk}$ is the number of times the word $t_k$ appears in the document $d_j$, and $|d_j| = \sum_{t_k\in d_j} n_{jk}$ is the number of words in $d_j$. As $p(|d_j|)\,\frac{|d_j|!}{\prod_{t_k\in d_j} n_{jk}!}$ does not depend on the class, we can omit it from the computations, so that we only need to calculate

$$p(d_j|c_i) \propto \prod_{t_k\in d_j} p(t_k|c_i)^{n_{jk}}. \quad (3)$$

The estimation of the term probabilities given the class, $\hat{p}(t_k|c_i)$, is usually carried out by means of the Laplace estimation:

$$\hat{p}(t_k|c_i) = \frac{N_{ik} + 1}{N_{i\bullet} + M}, \quad (4)$$

where $N_{ik}$ is the number of times the term $t_k$ appears in documents of class $c_i$, $N_{i\bullet}$ is the total number of words in documents of class $c_i$, i.e. $N_{i\bullet} = \sum_k N_{ik}$, and $M$ is the size of the vocabulary (the number of distinct words in the documents of the training set). The estimation of the prior probabilities of the classes, $\hat{p}(c_i)$, is usually done by maximum likelihood, i.e.:

$$\hat{p}(c_i) = \frac{N_{i,doc}}{N_{doc}}, \quad (5)$$

where $N_{doc}$ is the number of documents in the training set and $N_{i,doc}$ is the number of documents in the training set which are assigned to class $c_i$.

The multinomial naive Bayes model can also be used in another way: instead of considering only one class variable $C$ having $n$ values, we can decompose the problem using $n$ binary class variables $C_i$, each taking its values in the set $\{c_i, \bar{c}_i\}$. This is a quite common transformation in text classification [9], especially for multilabel problems, where a document may be associated to several classes. In this case $n$ naive Bayes classifiers are built, each one giving a posterior probability $p_i(c_i|d_j)$ for each document. In the case that each document may be assigned to only one class (single-label problems), the class $c^*(d_j)$ such that $c^*(d_j) = \arg\max_{c_i}\{p_i(c_i|d_j)\}$ is selected. Notice that in this case, as the term $p_i(d_j)$ in the expression $p_i(c_i|d_j) = p_i(d_j|c_i)\,p_i(c_i)/p_i(d_j)$ is not necessarily the same for all the class values, we need to compute it explicitly through

$$p_i(d_j) = p_i(d_j|c_i)\,p_i(c_i) + p_i(d_j|\bar{c}_i)\,(1 - p_i(c_i)).$$

This means that we also have to compute $p_i(d_j|\bar{c}_i)$. This value is estimated using the corresponding counterparts of Eqs. (3) and (4), where

$$\hat{p}(t_k|\bar{c}_i) = \frac{N_{\bullet k} - N_{ik} + 1}{N - N_{i\bullet} + M}. \quad (6)$$

$N_{\bullet k}$ is the number of times that the term $t_k$ appears in the training documents, i.e. $N_{\bullet k} = \sum_i N_{ik}$, and $N$ is the total number of words in the training documents.

¹The length of the documents is also assumed to be independent of the class.
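To make Eqs. (3)-(5) concrete, the following Python sketch trains a multinomial naive Bayes model from labelled token lists and scores a new document in log space. It is an illustrative sketch, not the authors' implementation; the variable names mirror the counts $N_{ik}$, $N_{i\bullet}$, $N_{i,doc}$ and $M$ defined above.

```python
from collections import Counter, defaultdict
import math

def train_multinomial_nb(docs):
    """docs: list of (tokens, class_label) pairs.
    Returns priors p(c_i) (Eq. 5) and a function giving the
    Laplace-smoothed term probabilities p(t_k | c_i) (Eq. 4)."""
    N_ik = defaultdict(Counter)   # N_ik: occurrences of term t_k in class c_i
    N_i = Counter()               # N_i.: total words in documents of class c_i
    N_doc_i = Counter()           # N_i,doc: documents assigned to class c_i
    vocab = set()
    for tokens, c in docs:
        N_doc_i[c] += 1
        for t in tokens:
            N_ik[c][t] += 1
            N_i[c] += 1
            vocab.add(t)
    M = len(vocab)                # vocabulary size
    prior = {c: N_doc_i[c] / len(docs) for c in N_doc_i}
    def p_term(t, c):             # Eq. (4): (N_ik + 1) / (N_i. + M)
        return (N_ik[c][t] + 1) / (N_i[c] + M)
    return prior, p_term

def log_score(tokens, c, prior, p_term):
    """Log of Eq. (1) combined with Eq. (3):
    log p(c_i) + sum_k n_jk * log p(t_k | c_i)."""
    s = math.log(prior[c])
    for t, n_jk in Counter(tokens).items():
        s += n_jk * math.log(p_term(t, c))
    return s

def classify(tokens, prior, p_term):
    return max(prior, key=lambda c: log_score(tokens, c, prior, p_term))
```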
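For the binary (one-vs-rest) variant just described, the normalizer $p_i(d_j)$ must be computed explicitly. The sketch below is again illustrative, under assumed inputs: `prior_i` is $p_i(c_i)$, while `p_term_pos` and `p_term_neg` are hypothetical callables implementing Eq. (4) and its counterpart Eq. (6), respectively.

```python
import math

def posterior_binary(doc_counts, prior_i, p_term_pos, p_term_neg):
    """p_i(c_i | d_j) for one binary classifier C_i.
    doc_counts: dict mapping each term t_k of d_j to its count n_jk.
    Computes p_i(d_j) = p_i(d_j|c_i) p_i(c_i) + p_i(d_j|~c_i)(1 - p_i(c_i))
    explicitly, using log-sum-exp for numerical stability."""
    log_pos = math.log(prior_i)         # will hold log p_i(c_i) p_i(d_j|c_i)
    log_neg = math.log(1.0 - prior_i)   # will hold log p_i(~c_i) p_i(d_j|~c_i)
    for t, n_jk in doc_counts.items():
        log_pos += n_jk * math.log(p_term_pos(t))   # Eq. (4) factors
        log_neg += n_jk * math.log(p_term_neg(t))   # Eq. (6) factors
    m = max(log_pos, log_neg)
    log_p_dj = m + math.log(math.exp(log_pos - m) + math.exp(log_neg - m))
    return math.exp(log_pos - log_p_dj)
```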
3 The OR Gate Bayesian Network Classifier

The document classification method that we are going to propose is based on another restricted type of Bayesian network with the following topology: each term $t_k$ appearing in the training documents (or a subset of these terms, in the case of using some method for feature selection) is associated to a binary variable $T_k$ taking its values in the set $\{t_k, \bar{t}_k\}$, which in turn is represented in the network by the corresponding node. There are also $n$ binary variables $C_i$ taking their values in the sets $\{c_i, \bar{c}_i\}$ (as in the previous binary version of the naive Bayes model) and the corresponding class nodes. The network structure is fixed, having an arc going from each term node $T_k$ to the class node $C_i$ if the term $t_k$ appears in training documents which are of class $c_i$. Let $nt_i$ be the number of different terms appearing in documents of class $c_i$. In this way we have a network topology with two layers, where the term nodes are the "causes" and the class nodes are the "effects", having a total of $\sum_{i=1}^{n} nt_i$ arcs. An example of this network topology is displayed in Figure 2. It should be noticed that the proposed topology, with arcs going from attribute nodes to class nodes, is the opposite of the one associated to the naive Bayes model.

[Figure 2: The OR gate classifier, a two-layer network with arcs from the term nodes $T_1, \ldots, T_{12}$ to the class nodes $C_1, \ldots, C_5$.]

It should also be noticed that this network topology explicitly requires modeling the "discriminative" conditional probabilities $p(c_i|pa(C_i))$, where $Pa(C_i)$ is the set of parents of node $C_i$ in the network (the set of terms appearing in documents of class $c_i$) and $pa(C_i)$ denotes any configuration of that parent set. These conditional probabilities are modeled by means of noisy OR gates, with a weight $w(T_k, C_i)$ attached to each arc, so that the posterior probabilities can be computed as

$$p(c_i|d_j) = 1 - \prod_{T_k\in Pa(C_i)} \left(1 - w(T_k,C_i)\, p(t_k|d_j)\right) = 1 - \prod_{T_k\in Pa(C_i)\cap d_j} \left(1 - w(T_k,C_i)\right), \quad (9)$$

where the second equality holds because $p(t_k|d_j) = 1$ if the term $t_k$ appears in $d_j$ and $0$ otherwise.

In order to take into account the number of times a word $t_k$ occurs in a document $d_j$, $n_{jk}$, we can replicate each node $T_k$ $n_{jk}$ times, so that the posterior probabilities then become

$$p(c_i|d_j) = 1 - \prod_{T_k\in Pa(C_i)\cap d_j} \left(1 - w(T_k,C_i)\right)^{n_{jk}}.$$
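The following sketch makes Eq. (9) and its frequency-weighted variant concrete. The estimation of the arc weights $w(T_k, C_i)$ is not covered in this excerpt, so here they are simply taken as given inputs in $[0, 1]$; this is an illustrative reading of the formulas, not the authors' code.

```python
def or_gate_posterior(doc_counts, weights_i, use_counts=False):
    """Posterior p(c_i | d_j) of the OR gate classifier (Eq. 9).
    doc_counts: dict mapping each term t_k of d_j to its count n_jk.
    weights_i: dict mapping each parent term of C_i to w(T_k, C_i)
    in [0, 1] (their estimation lies outside this excerpt).
    With use_counts=True each node T_k is replicated n_jk times,
    i.e. each factor is raised to the power n_jk."""
    prod = 1.0
    for t, n_jk in doc_counts.items():
        if t in weights_i:                  # t_k in Pa(C_i) and t_k in d_j
            factor = 1.0 - weights_i[t]
            prod *= factor ** n_jk if use_counts else factor
    return 1.0 - prod
```

Note that a document containing no parent term of $C_i$ gets posterior 0, and any single term with weight 1 forces the posterior to 1, which is the characteristic behavior of a noisy OR gate.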