CS 391L: Machine Learning: Decision Tree Learning
Raymond J. Mooney
University of Texas at Austin


Decision Trees

• Tree-based classifiers for instances represented as feature vectors. Nodes test features, there is one branch for each value of the feature, and leaves specify the category.

  [Figure: two example decision trees that test color and then shape; the first labels instances pos/neg, the second assigns the categories A, B, and C, matching the rules below.]

• Can represent arbitrary conjunction and disjunction. Can represent any classification function over discrete feature vectors.
• Can be rewritten as a set of rules, i.e. disjunctive normal form (DNF).
  – red ∧ circle → pos
  – red ∧ circle → A; blue → B; red ∧ square → B; green → C; red ∧ triangle → C

Properties of Decision Tree Learning

• Continuous (real-valued) features can be handled by allowing nodes to split a real-valued feature into two ranges based on a threshold (e.g. length < 3 and length ≥ 3).
• Classification trees have discrete class labels at the leaves; regression trees allow real-valued outputs at the leaves.
• Algorithms for finding consistent trees are efficient for processing large amounts of training data for data mining tasks.
• Methods developed for handling noisy training data (both class and feature noise).
• Methods developed for handling missing feature values.

Top-Down Decision Tree Induction

• Recursively build a tree top-down by divide and conquer.

  <big, red, circle>: +      <small, red, circle>: +
  <small, red, square>: −    <big, blue, circle>: −

  [Figure: splitting on color sends the three red examples down the red branch and <big, blue, circle>: − down the blue branch; the blue and green branches become neg leaves, and splitting the red branch on shape gives leaves pos (circle), neg (square), and pos (triangle).]

Decision Tree Induction Pseudocode

  DTree(examples, features) returns a tree:
    If all examples are in one category, return a leaf node with that category label.
    Else if the set of features is empty, return a leaf node with the category label
        that is the most common in examples.
    Else pick a feature F and create a node R for it.
      For each possible value vi of F:
        Let examples_i be the subset of examples that have value vi for F.
        Add an out-going edge E to node R labeled with the value vi.
        If examples_i is empty
          then attach a leaf node to edge E labeled with the category that is
              the most common in examples.
          else call DTree(examples_i, features − {F}) and attach the resulting
              tree as the subtree under edge E.
      Return the subtree rooted at R.
Picking a Good Split Feature

• Goal is to have the resulting tree be as small as possible, per Occam's razor.
• Finding a minimal decision tree (nodes, leaves, or depth) is an NP-hard optimization problem.
• Top-down divide-and-conquer method does a greedy search for a simple tree but does not guarantee finding the smallest.
  – General lesson in ML: "Greed is good."
• Want to pick a feature that creates subsets of examples that are relatively "pure" in a single class so they are "closer" to being leaf nodes.
• There are a variety of heuristics for picking a good test; a popular one is based on information gain, which originated with the ID3 system of Quinlan (1979).

Entropy

• Entropy (disorder, impurity) of a set of examples, S, relative to a binary classification is:
    Entropy(S) = − p1·log2(p1) − p0·log2(p0)
  where p1 is the fraction of positive examples in S and p0 is the fraction of negatives.
• If all examples are in one category, entropy is zero (we define 0·log(0) = 0).
• If examples are equally mixed (p1 = p0 = 0.5), entropy is a maximum of 1.
• Entropy can be viewed as the number of bits required on average to encode the class of an example in S where data compression (e.g. Huffman coding) is used to give shorter codes to more likely cases.
• For multi-class problems with c categories, entropy generalizes to:
    Entropy(S) = − Σ_{i=1}^{c} pi·log2(pi)

Entropy Plot for Binary Classification

  [Figure: entropy as a function of the fraction of positive examples, rising from 0 to a maximum of 1 at p1 = 0.5 and falling back to 0.]

Information Gain

• The information gain of a feature F is the expected reduction in entropy resulting from splitting on this feature:
    Gain(S, F) = Entropy(S) − Σ_{v ∈ Values(F)} (|Sv| / |S|)·Entropy(Sv)
  where Sv is the subset of S having value v for feature F.
• Entropy of each resulting subset is weighted by its relative size.
• Example:
  – <big, red, circle>: +      <small, red, circle>: +
  – <small, red, square>: −    <big, blue, circle>: −
  – Split on size (2+, 2−, E = 1): big → 1+, 1− (E = 1); small → 1+, 1− (E = 1);
      Gain = 1 − (0.5·1 + 0.5·1) = 0
  – Split on color (2+, 2−, E = 1): red → 2+, 1− (E = 0.918); blue → 0+, 1− (E = 0);
      Gain = 1 − (0.75·0.918 + 0.25·0) = 0.311
  – Split on shape (2+, 2−, E = 1): circle → 2+, 1− (E = 0.918); square → 0+, 1− (E = 0);
      Gain = 1 − (0.75·0.918 + 0.25·0) = 0.311
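The entropy and gain numbers in the example above can be checked with a few lines of Python; the functions below are direct transcriptions of the two formulas on these slides, and the function and variable names are only illustrative.

  from math import log2

  def entropy(labels):
      """Entropy of a list of class labels, using the convention 0*log(0) = 0."""
      n = len(labels)
      proportions = [labels.count(c) / n for c in set(labels)]
      return -sum(p * log2(p) for p in proportions if p > 0)

  def gain(examples, feature):
      """Information gain of splitting (feature_dict, label) examples on `feature`."""
      labels = [label for _, label in examples]
      total = entropy(labels)
      for v in {ex[feature] for ex, _ in examples}:
          subset = [label for ex, label in examples if ex[feature] == v]
          total -= len(subset) / len(examples) * entropy(subset)
      return total

  data = [({"size": "big",   "color": "red",  "shape": "circle"}, "+"),
          ({"size": "small", "color": "red",  "shape": "circle"}, "+"),
          ({"size": "small", "color": "red",  "shape": "square"}, "-"),
          ({"size": "big",   "color": "blue", "shape": "circle"}, "-")]

  for f in ("size", "color", "shape"):
      print(f, round(gain(data, f), 3))   # size 0.0, color 0.311, shape 0.311

Size is a useless first split on this data, while color and shape tie, which is why the greedy learner never chooses size at the root.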
Hypothesis Space Search

• Performs batch learning that processes all training instances at once rather than incremental learning that updates a hypothesis after each example.
• Performs hill-climbing (greedy search) that may only find a locally-optimal solution. Guaranteed to find a tree consistent with any conflict-free training set (i.e. identical feature vectors are always assigned the same class), but not necessarily the simplest tree.
• Finds a single discrete hypothesis, so there is no way to provide confidences or create useful queries.

Bias in Decision-Tree Induction

• Information gain gives a bias for trees with minimal depth.
• Implements a search (preference) bias instead of a language (restriction) bias.

History of Decision-Tree Research

• Hunt and colleagues used exhaustive-search decision-tree methods (CLS) to model human concept learning in the 1960's.
• In the late 70's, Quinlan developed ID3 with the information-gain heuristic to learn expert systems from examples.
• Simultaneously, Breiman, Friedman, and colleagues developed CART (Classification and Regression Trees), similar to ID3.
• In the 1980's a variety of improvements were introduced to handle noise, continuous features, missing features, and improved splitting criteria. Various expert-system development tools resulted.
• Quinlan's updated decision-tree package (C4.5) was released in 1993.
• Weka includes a Java version of C4.5 called J48.

Weka J48 Trace 1

  data> java weka.classifiers.trees.J48 -t figure.arff -T figure.arff -U -M 1

  Options: -U -M 1

  J48 unpruned tree
  ------------------
  color = blue: negative (1.0)
  color = red
  |   shape = circle: positive (2.0)
  |   shape = square: negative (1.0)
  |   shape = triangle: positive (0.0)
  color = green: positive (0.0)

  Number of Leaves  : 5
  Size of the tree  : 7

  Time taken to build model: 0.03 seconds
  Time taken to test model on training data: 0 seconds

Weka J48 Trace 2

  data> java weka.classifiers.trees.J48 -t figure3.arff -T figure3.arff -U -M 1

  Options: -U -M 1

  J48 unpruned tree
  ------------------
  shape = circle
  |   color = blue: negative (1.0)
  |   color = red: positive (2.0)
  |   color = green: positive (1.0)
  shape = square: positive (0.0)
  shape = triangle: negative (1.0)

  Number of Leaves  : 5
  Size of the tree  : 7

  Time taken to build model: 0.02 seconds
  Time taken to test model on training data: 0 seconds

Weka J48 Trace 3

  data> java weka.classifiers.trees.J48 -t contact-lenses.arff

  J48 pruned tree
  ------------------
  tear-prod-rate = reduced: none (12.0)
  tear-prod-rate = normal
  |   astigmatism = no: soft (6.0/1.0)
  |   astigmatism = yes
  |   |   spectacle-prescrip = myope: hard (3.0)
  |   |   spectacle-prescrip = hypermetrope: none (3.0/1.0)

  Number of Leaves  : 4
  Size of the tree  : 7

  Time taken to build model: 0.03 seconds
  Time taken to test model on training data: 0 seconds

  === Error on training data ===

  Correctly Classified Instances      22      91.6667 %
  Incorrectly Classified Instances     2       8.3333 %
  Kappa statistic                      0.8447
  Mean absolute error                  0.0833
  Root mean squared error              0.2041
  Relative absolute error             22.6257 %
  Root relative squared error         48.1223 %
  Total Number of Instances           24

  === Confusion Matrix ===

   a  b  c   <-- classified as
   5  0  0 |  a = soft
   0  3  1 |  b = hard
   1  0 14 |  c = none

  === Stratified cross-validation ===

  Correctly Classified Instances      20      83.3333 %
  Incorrectly Classified Instances     4      16.6667 %
  Kappa statistic                      0.71
  Mean absolute error                  0.15
  Root mean squared error              0.3249
  Relative absolute error             39.7059 %
  Root relative squared error         74.3898 %
  Total Number of Instances           24

  === Confusion Matrix ===

   a  b  c   <-- classified as
   5  0  0 |  a = soft
   0  3  1 |  b = hard
   1  2 12 |  c = none

Computational Complexity

• Worst case builds a complete tree where every path tests every feature. Assume n examples and m features.

  [Figure: a tree testing features F1 ... Fm along a path; at most n examples are spread across all the nodes at each of the m levels.]

• At each level, i, in the tree, must examine the remaining m − i features for each instance at the level to calculate info gains.

Overfitting

• Learning a tree that classifies the training data perfectly may not lead to the tree with the best generalization to unseen data.
  – There may be noise in the training data that the tree is erroneously fitting.
  – The algorithm may be making poor decisions towards the leaves of the tree that are based on very little data and may not reflect reliable trends.
• A hypothesis, h, is said to overfit the training data if there exists another hypothesis, h′, such that h has less error than h′ on the training data but greater error on independent test data.
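Trace 3 already shows the symptom described here: 91.7% accuracy on the training data drops to 83.3% under cross-validation. The sketch below reproduces that kind of gap with an unpruned tree; it assumes scikit-learn is installed and uses one of its bundled datasets, neither of which appears in the original slides.

  # Compare training accuracy with cross-validated accuracy for an unpruned tree.
  # With no depth or size limits, the tree fits the training data (almost) perfectly
  # while doing noticeably worse on held-out folds: the overfitting gap described above.
  from sklearn.datasets import load_breast_cancer
  from sklearn.model_selection import cross_val_score
  from sklearn.tree import DecisionTreeClassifier

  X, y = load_breast_cancer(return_X_y=True)           # stand-in dataset for illustration
  tree = DecisionTreeClassifier(criterion="entropy")    # information-gain-style splits, unpruned

  train_acc = tree.fit(X, y).score(X, y)                # accuracy on the training data itself
  cv_acc = cross_val_score(tree, X, y, cv=10).mean()    # 10-fold cross-validated accuracy

  print(f"training accuracy:        {train_acc:.3f}")   # typically 1.000
  print(f"cross-validated accuracy: {cv_acc:.3f}")      # noticeably lower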
