Building More Accurate Decision Trees with the Additive Tree

José Marcio Luna (a,1,2), Efstathios D. Gennatas (b,1), Lyle H. Ungar (c), Eric Eaton (c), Eric S. Diffenderfer (a), Shane T. Jensen (d), Charles B. Simone II (e), Jerome H. Friedman (f,2), Timothy D. Solberg (b), and Gilmer Valdes (b,2)

(a) Department of Radiation Oncology, University of Pennsylvania, Philadelphia, PA 19104; (b) Department of Radiation Oncology, University of California, San Francisco, CA 94115; (c) Department of Computing and Information Science, University of Pennsylvania, Philadelphia, PA 19104; (d) Department of Statistics, University of Pennsylvania, Philadelphia, PA 19104; (e) Department of Radiation Oncology, New York Proton Center, New York, NY 10035; (f) Department of Statistics, Stanford University, Stanford, CA 94305

Contributed by Jerome H. Friedman, August 8, 2019 (sent for review October 10, 2018; reviewed by Adele Cutler and Giles Hooker). PNAS, www.pnas.org/cgi/doi/10.1073/pnas.1816748116

The expansion of machine learning to high-stakes application domains such as medicine, finance, and criminal justice, where making informed decisions requires clear understanding of the model, has increased the interest in interpretable machine learning. The widely used Classification and Regression Trees (CART) have played a major role in health sciences, due to their simple and intuitive explanation of predictions. Ensemble methods like gradient boosting can improve the accuracy of decision trees, but at the expense of the interpretability of the generated model. Additive models, such as those produced by gradient boosting, and full interaction models, such as CART, have been investigated largely in isolation. We show that these models exist along a spectrum, revealing previously unseen connections between these approaches. This paper introduces a rigorous formalization for the additive tree, an empirically validated learning technique for creating a single decision tree, and shows that this method can produce models equivalent to CART or gradient boosted stumps at the extremes by varying a single parameter. Although the additive tree is designed primarily to provide both the model interpretability and predictive performance needed for high-stakes applications like medicine, it can also produce decision trees, represented by hybrid models between CART and boosted stumps, that outperform either of these approaches.

Keywords: additive tree | decision tree | interpretable machine learning | CART | gradient boosting

Significance

As machine learning applications expand to high-stakes areas such as criminal justice, finance, and medicine, legitimate concerns emerge about the high-impact effects of individual mispredictions on people's lives. As a result, there has been increasing interest in understanding general machine learning models to overcome possible serious risks. Current decision trees, such as Classification and Regression Trees (CART), have played a predominant role in fields such as medicine, due to their simplicity and intuitive interpretation. However, such trees suffer from intrinsic limitations in predictive power. We developed the additive tree, a theoretical approach to generate a more accurate and interpretable decision tree, which reveals connections between CART and gradient boosting. The additive tree exhibits superior predictive performance to CART, as validated on 83 classification tasks.

Author contributions: J.M.L., E.D.G., L.H.U., E.E., E.S.D., C.B.S., J.H.F., T.D.S., and G.V. designed research; J.M.L., E.D.G., and G.V. performed research; J.M.L., E.D.G., L.H.U., E.E., S.T.J., J.H.F., T.D.S., and G.V. contributed new reagents/analytic tools; J.M.L., E.D.G., L.H.U., E.E., E.S.D., S.T.J., C.B.S., T.D.S., and G.V. analyzed data; and J.M.L., E.D.G., L.H.U., E.E., E.S.D., S.T.J., C.B.S., J.H.F., T.D.S., and G.V. wrote the paper.

Reviewers: A.C., Utah State University; and G.H., Cornell University.

Conflict of interest statement: J.M.L., E.E., L.H.U., C.B.S., T.D.S., and G.V. have a patent titled "Systems and methods for generating improved decision trees," pending status.

This open access article is distributed under Creative Commons Attribution-NonCommercial-NoDerivatives License 4.0 (CC BY-NC-ND).

1. J.M.L. and E.D.G. contributed equally to this work.
2. To whom correspondence may be addressed. Email: [email protected], [email protected], or [email protected].

This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1816748116/-/DCSupplemental.

The increasing application of machine learning to high-stakes domains such as criminal justice (1, 2) and medicine (3–5) has led to a surge of interest in understanding the generated models. Mispredictions in these domains can incur serious risks in scenarios such as technical debt (6, 7), nondiscrimination (8), medical outcomes (9, 10), and, recently, the right to explanation (11), thus motivating the need for users to be able to examine why the model made a particular prediction. Despite recent efforts to formalize the concept of interpretability in machine learning, there is considerable disagreement on what such a concept means and how to measure it (12, 13). Lately, 2 broad categories of approaches to interpretability have been proposed (14): post hoc simple explanations for potentially complex models (e.g., visual and textual explanations) (15, 16), and intuitively simple models (e.g., additive models, decision trees). This paper focuses on intuitively simple models, specifically decision trees, which are used widely in fields such as healthcare, yet have lower predictive power than more sophisticated models.

Classification and Regression Tree (CART) analysis (17) is a well-established statistical learning technique that has been adopted by numerous fields for its model interpretability, scalability to large datasets, and connection to rule-based decision-making (18). Specifically, in fields like medicine (19–21), these traits are considered a requirement for clinical decision support systems. CART builds a model by recursively partitioning the instance space and labeling each partition with either a predicted category (in the case of classification) or a real value (in the case of regression). Despite widespread use, CART models are often less accurate than other statistical learning models, such as kernel methods and ensemble techniques (22). Among the latter, boosting methods were developed as a means to train an ensemble that iteratively combines multiple weak learners (often CART models) into a high-performance predictive model, albeit with an increase in the number of nodes, thereby harming model interpretability. In particular, gradient boosting methods (23) iteratively optimize an ensemble's prediction to increasingly match the labeled training data. In addition, some ad hoc approaches (24, 25) have been successful at improving the accuracy of decision trees, but at the expense of altering their topology, thus affecting their capacity for explanation.

Decision tree learning and gradient boosting have been connected primarily through CART models used as the weak learners in boosting. However, a rigorous analysis in ref. 26 proves that decision tree algorithms, specifically CART and C4.5 (27), are, in fact, boosting algorithms.
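As a concrete illustration of the recursive partitioning described above, the following sketch grows a small classification tree by greedily choosing the axis-aligned split that minimizes weighted Gini impurity. This is a minimal toy version for exposition only, not the CART implementation evaluated in the paper (it omits pruning, surrogate splits, and cost-complexity considerations).

```python
# Toy CART-style recursive partitioning (illustration only):
# greedily pick the split minimizing weighted Gini impurity, then recurse.

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(X, y):
    """Return (feature, threshold) minimizing weighted child impurity, or None."""
    best, best_score = None, float("inf")
    for j in range(len(X[0])):
        for t in sorted({x[j] for x in X}):
            left = [yi for x, yi in zip(X, y) if x[j] <= t]
            right = [yi for x, yi in zip(X, y) if x[j] > t]
            if not left or not right:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if score < best_score:
                best, best_score = (j, t), score
    return best

def grow_tree(X, y, depth=0, max_depth=3):
    """Recursively partition; a leaf predicts the majority class of its partition."""
    if depth == max_depth or gini(y) == 0.0 or best_split(X, y) is None:
        return max(set(y), key=y.count)  # leaf: majority label
    j, t = best_split(X, y)
    left = [(x, yi) for x, yi in zip(X, y) if x[j] <= t]
    right = [(x, yi) for x, yi in zip(X, y) if x[j] > t]
    return (j, t,
            grow_tree([x for x, _ in left], [yi for _, yi in left], depth + 1, max_depth),
            grow_tree([x for x, _ in right], [yi for _, yi in right], depth + 1, max_depth))

def predict(node, x):
    """Route x down the tree to a leaf label."""
    while isinstance(node, tuple):
        j, t, lo, hi = node
        node = lo if x[j] <= t else hi
    return node
```

For example, `grow_tree([[0], [1], [2], [3]], [0, 0, 1, 1])` finds the split at threshold 1 on feature 0 and yields pure leaves on both sides.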
Based on this approach, AdaTree, a tree-growing method based on AdaBoost (28), was proposed in ref. 29. A sequence of weak classifiers on each branch of the decision tree is trained recursively using AdaBoost, yielding a decision tree in which each branch conforms to a strong classifier. The weak classifiers at each internal node were implemented either as linear classifiers composed with a sigmoid or as Haar-type filters with threshold functions at their output. Another approach is the Probabilistic Boosting-Tree (PBT) algorithm introduced in ref. 30, which also uses AdaBoost to build decision trees. PBT trains ensembles of strong classifiers on each branch for image classification. The strong classifiers are based on up-to-third-order moment histogram features extracted from Gabor and intensity filter responses. PBT also carries out a divide-and-conquer strategy to perform data augmentation for estimating the posterior probability of each class. Recently, MediBoost, a tree-growing method based on the more general gradient boosting approach, was proposed in ref. 31. In contrast to AdaTree and PBT, MediBoost emphasizes interpretability, since it was conceived to support medical applications while exploiting the accuracy of boosting. It takes advantage of the shrinkage factor inherent to gradient boosting and introduces an acceleration parameter that controls the observation weights during training to improve predictive performance. Although the presented experimental results showed that MediBoost exhibits better predictive performance than CART, no theoretical analysis was provided.

In this paper, we propose a rigorous mathematical approach for building single decision trees that are more accurate than tra- […]

[Fig. 1. A depiction of the continuum relating CART, GBS, and our AddTree. Each algorithm has been given the same 4 training instances (blue and red symbols); the symbol's size depicts its weight when used to train the adjacent node.]

[…] is trained with a different weighting of the entire dataset, unlike CART, repeatedly emphasizing mispredicted instances at every round (Fig. 1). GBS, or simple regression, creates a pure additive model in which each new ensemble member reduces the residual of previous members (32). Interaction terms can be included in the ensemble by using more complex learners, such as deeper trees.

Classifier ensembles with decision stumps as the weak learners, h_t(x), can be trivially rewritten (31) as a complete binary tree of depth T, where the decision made at each internal node at depth t−1 is given by h_t(x), and the prediction at each leaf is given by F(x).

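The earlier remark that GBS builds a pure additive model, with each new member fitted to the residual of the previous members, can be sketched for squared-error regression as follows. This is a minimal illustration under our own assumptions, not the paper's implementation; the names `fit_stump` and `boost_stumps` and the learning-rate parameter `lr` are ours.

```python
# Toy gradient boosted stumps for squared-error regression (illustration):
# each round fits a regression stump to the residuals of the ensemble so far,
# so the final model is purely additive in the stumps.

def fit_stump(X, r):
    """Fit a stump to residuals r on 1-D inputs X: pick the threshold whose
    two-sided mean prediction minimizes squared error."""
    best, best_err = None, float("inf")
    for t in sorted(set(X)):
        left = [ri for xi, ri in zip(X, r) if xi <= t]
        right = [ri for xi, ri in zip(X, r) if xi > t]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        err = (sum((ri - ml) ** 2 for ri in left)
               + sum((ri - mr) ** 2 for ri in right))
        if err < best_err:
            best, best_err = (t, ml, mr), err
    return best

def boost_stumps(X, y, rounds=10, lr=0.5):
    """Return a list of shrunken stumps; the prediction is their sum."""
    pred = [0.0] * len(X)
    model = []
    for _ in range(rounds):
        r = [yi - pi for yi, pi in zip(y, pred)]  # residuals of ensemble so far
        t, ml, mr = fit_stump(X, r)
        model.append((t, lr * ml, lr * mr))
        pred = [pi + (lr * ml if xi <= t else lr * mr)
                for xi, pi in zip(X, pred)]
    return model

def boost_predict(model, x):
    """Additive prediction: sum of the stump contributions."""
    return sum(ml if x <= t else mr for t, ml, mr in model)
```

On a toy dataset such as `X = [0, 1, 2, 3]`, `y = [0, 0, 1, 1]`, each round halves the remaining residual on the right side of the split, so the boosted prediction converges toward the targets.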