Bootstrap Aggregating

Bootstrap AGGregatING
Jonas Haslbeck
April 19th, 2016

Statistical Learning Recap

- Finite samples X^n = \{X_1^n, \ldots, X_p^n\} and Y^n.
- Assumption: there exists a function f^* such that y = f^*(x) + \varepsilon.
- The goal is to find the best guess \hat{f}(x) of f^*(x) based on the finite samples X^n and Y^n.
- The best guess minimizes the risk:

  E[f^*(x) - \hat{f}(x)]^2 = \mathrm{Bias}(\hat{f}(x))^2 + \mathrm{Var}(\hat{f}(x))

- But we only observe the empirical risk:

  \sum_{i=1}^{n} (y_i - \hat{f}(x_i))^2

How does bagging work?

[Figure: an original sample (observations 1, ..., 5) and B bootstrap samples drawn from it with replacement, e.g. (2, 5, 2, 1, 5); a separate estimate \hat{f}^1(x), \hat{f}^2(x), \ldots, \hat{f}^B(x) is fit to each bootstrap sample.]

The bagged estimate averages the bootstrap estimates:

  \hat{f}_{bag}(x) = \frac{1}{B} \sum_{b=1}^{B} \hat{f}^b(x)

Bootstrap AGGregatING

- \hat{f}(x): estimate on the full dataset
- \hat{f}_{bag}(x): bagged estimate
- Claim:

  E[f^*(x) - \hat{f}(x)]^2 \geq E[f^*(x) - \hat{f}_{bag}(x)]^2

  \mathrm{Bias}(\hat{f}(x))^2 + \mathrm{Var}(\hat{f}(x)) \geq \mathrm{Bias}(\hat{f}_{bag}(x))^2 + \mathrm{Var}(\hat{f}_{bag}(x))

Bagging classification trees

[Figure: bagging applied to classification trees (Elements of Statistical Learning, 2009).]

Bagging classification trees (2)

[Figure: bagging applied to classification trees (Dietterich, 2000).]

Bagging neural networks

[Figure: bagging applied to neural networks (Maclin & Opitz, 1997).]

Why does bagging work?

Recall:
- \hat{f}(x): estimate on the full dataset
- \hat{f}_{bag}(x): bagged estimate
- Claim: E[f^*(x) - \hat{f}(x)]^2 \geq E[f^*(x) - \hat{f}_{bag}(x)]^2, i.e. \mathrm{Bias}(\hat{f}(x))^2 + \mathrm{Var}(\hat{f}(x)) \geq \mathrm{Bias}(\hat{f}_{bag}(x))^2 + \mathrm{Var}(\hat{f}_{bag}(x)).

Approximating the average over independent training samples

- X_1, \ldots, X_N independent
- \mathrm{Var}(X_1) = \ldots = \mathrm{Var}(X_N) = \sigma^2
- \hat{f}(x) = N^{-1} \sum_{i=1}^{N} X_i
- Then \mathrm{Var}(\hat{f}(x)) = \sigma^2 / N

Averaging over observations reduces variance! If we had many independent training samples, we could calculate

  \hat{f}_{average}(x) = B^{-1} \sum_{b=1}^{B} \hat{f}^b(x),

and the argument is that \hat{f}_{bag}(x) \approx \hat{f}_{average}(x). (Introduction to Statistical Learning, 2015)

Variance reduction argument

Let the ideal aggregate estimator be f_{bag}(x) = E_P[\hat{f}^b(x)], where P denotes the population from which the training samples are drawn. Then

  E_P[Y - \hat{f}^b(x)]^2 = E_P[Y - f_{bag}(x) + f_{bag}(x) - \hat{f}^b(x)]^2
                          = E_P[Y - f_{bag}(x)]^2 + E_P[f_{bag}(x) - \hat{f}^b(x)]^2
                          \geq E_P[Y - f_{bag}(x)]^2,

where the cross term vanishes because E_P[\hat{f}^b(x)] = f_{bag}(x). (Elements of Statistical Learning, 2009)

Original paper: reducing variance in unstable predictors

The original paper makes the same theoretical argument, based on the assumption that resampling from the data approximates drawing independent training samples. (Breiman, 1996, 'Bagging Predictors')

A Bayesian explanation?

- Claim: bagging reduces a classification learner's error rate because it changes the learner's model space and/or prior distribution to one that better fits the domain.
- Empirical test: find the simplest single decision tree that makes the same predictions as the bagged ensemble. If this single tree is more complex than the bagged base decision trees, this is evidence that bagging enlarges the model space.
- Result: the complexity of this single tree is indeed larger. But: the argument is not very sound.
- And: Rao & Tibshirani (1997) show that the bootstrap distribution is an approximation of a posterior distribution based on a symmetric, non-informative Dirichlet prior.

(Domingos, 1997)

Reducing variance in non-linear components

Taylor expansion of the estimator around the mean \mu:

  \hat{f}(x) \approx \hat{f}(\mu) + \frac{\hat{f}'(\mu)}{1!}(x - \mu) + \frac{\hat{f}''(\mu)}{2!}(x - \mu)^2 + \ldots

Claim: the bagged estimator \hat{f}_{bag}(x) is an approximation of

  \bar{f}(x) = \hat{f}(\mu) + E_P\left[\frac{\hat{f}'(\mu)}{1!}(x - \mu) + \frac{\hat{f}''(\mu)}{2!}(x - \mu)^2 + \ldots\right],

i.e. bagging replaces the non-linear (higher-order) terms by their expectations and thereby reduces their contribution to the variance. There is empirical evidence for this, but the argument is limited to smooth multivariate functions (e.g. it does not cover trees). (Friedman & Hall, 1999)

Smoothing of regression/classification surfaces

Simplest example: \hat{f}(x) = I_{\{\hat{d} \leq x\}}, x \in \mathbb{R}.

This argument covers non-differentiable and discontinuous base functions (e.g. trees): bagging acts as an asymptotic smoothing operation on the regression or classification surface. (Buehlmann & Yu, 2000)
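To make this smoothing effect concrete, here is a minimal numerical sketch (not part of the original slides). It applies the bagging recipe \hat{f}_{bag}(x) = B^{-1} \sum_b \hat{f}^b(x) from the 'How does bagging work?' slide to the indicator base estimator above, under the illustrative assumptions that the threshold estimate \hat{d} is the sample mean, that the training data are 50 draws from N(0, 1), and that B = 200; all variable names and constants are choices of this sketch, not taken from any of the cited papers.

```python
# Minimal sketch (illustrative assumptions, see the text above): bagging the
# hard-threshold base estimator f_hat(x) = 1{d_hat <= x}, where d_hat is
# taken to be the sample mean of the training data.
import numpy as np

rng = np.random.default_rng(1)
n, B = 50, 200
x_train = rng.normal(loc=0.0, scale=1.0, size=n)    # assumed training sample

d_hat = x_train.mean()                               # threshold estimate on the full sample

# Threshold estimates on B bootstrap resamples (drawn with replacement).
d_boot = np.array([rng.choice(x_train, size=n, replace=True).mean()
                   for _ in range(B)])

grid = np.linspace(-0.5, 0.5, 11)                    # query points x
f_plain = (d_hat <= grid).astype(float)              # hard 0/1 predictor
f_bag = (d_boot[:, None] <= grid[None, :]).mean(axis=0)  # bagged predictor

for x, fp, fb in zip(grid, f_plain, f_bag):
    print(f"x = {x:+.2f}   plain = {fp:.0f}   bagged = {fb:.2f}")
```

Printed on a grid around the threshold, the plain predictor jumps from 0 to 1 at a single point, while the bagged predictor passes through intermediate values, which is the smoothing behaviour the slide refers to.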
Negative effects of bagging on U-statistics

U-statistics:

  U = N^{-1} \sum_{i} A(X_i) + N^{-1} \sum_{i,j} B(X_i, X_j)

Examples: variance, skewness, ...

Main results:
- The influence of bagging on the variance depends on the specific U-statistic.
- Bagging always increases the squared bias.

(Buja & Stuetzle, 2000)

Bagging equalizes influence

A different level of analysis: the influence of individual data points on the estimator. Two steps:
1. Bagging equalizes influence.
2. Equalized influence explains the MSE reduction due to bagging.

(Grandvalet, 2004)

Bagging equalizes influence? Example: point estimation

- n = 20 draws from the mixture p(x) = (1 - P) \, N(0, 1) + P \, N(0, 10)
- compute the median or the bagged (B = 100) median
- P \in \{0, 0.04, 0.2, 0.7, 1\}

(A minimal replication sketch of this simulation is given at the end of this document, after the references.)

From equalized influence to prediction error

Explaining bagging performance as a function of P:

[Figure: variance of the median and of the bagged median as a function of P; data from Grandvalet (2004).]

[Figure: the same comparison from a replicated simulation.]

From equalized influence to prediction error?

Example 2: subset selection.

[Figures: bagging performance for subset selection (Grandvalet, 2004).]

Summary so far

Paper                  | Base estimator    | Result
Breiman, 1996          | -                 | Heuristic: bagging reduces the variance of unstable predictors
Domingos, 1997         | -                 | Bagging defines an informative prior
Rao & Tibshirani, 1997 | -                 | Bagging does not define an informative prior
Friedman & Hall, 2000  | Smooth estimators | Bagging reduces variance in the non-linear components
Buehlmann & Yu, 2000   | CART              | Bagging performs asymptotic smoothing
Buja & Stuetzle, 2000  | U-statistics      | Bagging always increases bias (sometimes also variance)
Grandvalet, 2004       | Subset selection  | Bagging equalizes influence

Further developments?

[Figure: 'Bagging: Theory and Practice' (Yves Grandvalet); timeline of the theoretical papers (Breiman 1996; Domingos 1997; Rao & Tibshirani 1997; Friedman & Hall 2000; Buehlmann & Yu 2000; Buja & Stuetzle 2000; Grandvalet 2004) plotted against the yearly number of Google Scholar hits for "bagging", 1996-2015.]

Observations & Discussion

Observation: 'Why does it work?' is a difficult question because ...
- the performance of bagging depends on the base predictor and the data
- non-smooth estimators are hard to study

Discussion: 'Mysteries in Machine Learning' - does it matter that we know so little?

References

- Breiman, L. (1996). Bagging predictors. Machine Learning, 24(2), 123-140.
- Dietterich, T. G. (2000). An experimental comparison of three methods for constructing ensembles of decision trees: Bagging, boosting, and randomization. Machine Learning, 40(2), 139-157.
- Friedman, J. H., & Hall, P. (2007). On bagging and nonlinear estimation. Journal of Statistical Planning and Inference, 137(3), 669-683.
- Grandvalet, Y. (2004). Bagging equalizes influence. Machine Learning, 55(3), 251-270.
- Hastie, T., Tibshirani, R., Friedman, J., & Franklin, J. (2005). The elements of statistical learning: Data mining, inference and prediction. The Mathematical Intelligencer, 27(2), 83-85.
- James, G., Witten, D., Hastie, T., & Tibshirani, R. (2013). An introduction to statistical learning (Vol. 112). New York: Springer.
- Maclin, R., & Opitz, D. (1997). An empirical evaluation of bagging and boosting. AAAI/IAAI, 1997, 546-551.
- Rao, J. S., & Tibshirani, R. (1997). The out-of-bootstrap method for model averaging and selection. University of Toronto.

Contact

[email protected]
http://jmbh.github.io
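As a small appendix, here is a minimal sketch of the point-estimation example from the 'Bagging equalizes influence?' slide, referenced there. It is not the original simulation code: the number of simulation repetitions (n_sim = 2000), the random seed, and the reading of N(0, 10) as a normal distribution with standard deviation 10 are assumptions of this sketch, while n = 20, B = 100, and the grid of P values follow the slide.

```python
# Minimal replication sketch (assumptions noted above and in the comments):
# variance of the median vs. the bagged median for n = 20 draws from the
# mixture (1 - P) * N(0, 1) + P * N(0, 10).
import numpy as np

rng = np.random.default_rng(2016)
n, B, n_sim = 20, 100, 2000            # n_sim is an assumption of this sketch

def draw_mixture(P, size):
    # Assumption: N(0, 10) is read as mean 0, standard deviation 10.
    contaminated = rng.random(size) < P
    return np.where(contaminated,
                    rng.normal(0.0, 10.0, size),
                    rng.normal(0.0, 1.0, size))

def bagged_median(sample, B):
    # Median of each of B bootstrap resamples, then averaged (f_bag).
    idx = rng.integers(0, len(sample), size=(B, len(sample)))
    return np.median(sample[idx], axis=1).mean()

for P in [0.0, 0.04, 0.2, 0.7, 1.0]:
    med = np.empty(n_sim)
    bag = np.empty(n_sim)
    for s in range(n_sim):
        sample = draw_mixture(P, n)
        med[s] = np.median(sample)
        bag[s] = bagged_median(sample, B)
    print(f"P = {P:4.2f}   Var(median) = {med.var():7.3f}   "
          f"Var(bagged median) = {bag.var():7.3f}")
```

The printed variances for each P can be compared against the two figures referenced in the 'From equalized influence to prediction error' slides (the data from Grandvalet, 2004, and the replicated simulation).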
