6 - Classification & Regression Trees (CART)


6.1 - Introduction and Motivation

Tree-based modelling is an exploratory technique for uncovering structure in data. Specifically, the technique is useful for classification and regression problems where one has a set of predictor (or classification) variables x1, ..., xp and a single response y. When y is a factor, classification rules are determined from the data, e.g.

If x1 <= c1 and x2 is in {A, B}, then the predicted class is k.

When the response y is numeric, regression tree rules for prediction are of the form: if x1 <= c1 and x2 <= c2 (and so on), then the predicted value of y is a constant, the average response in that region.

Statistical inference for tree-based models is in its infancy and there is a definite lack of formal procedures for inference. However, the method is rapidly gaining widespread popularity as a means of devising prediction rules for rapid and repeated evaluation, as a screening method for variables, as a diagnostic technique to assess the adequacy of other types of models, and simply for summarizing large multivariate data sets. Some reasons for its recent popularity are that:

1. In certain applications, especially where the set of predictors contains a mix of numeric variables and factors, tree-based models are sometimes easier to interpret and discuss than linear models.

2. Tree-based models are invariant to monotone transformations of predictor variables so that the precise form in which these appear in the model is irrelevant. As we have seen in several earlier examples this is a particularly appealing property.

3. Tree-based models are more adept at capturing nonadditive behavior; the standard linear model does not allow interactions between variables unless they are pre-specified and of a particular multiplicative form. MARS can capture this automatically by specifying a degree = 2 fit.

6.2 - Regression Trees

The form of the fitted surface or smooth obtained from a regression tree is f(x) = sum_{m=1,...,M} cm I(x in Rm), where the cm are constants and the Rm are regions defined by a series of binary splits. If all the predictors are numeric these regions form a set of disjoint hyper-rectangles, with sides parallel to the axes, that together cover the predictor space.

Regardless of how the neighborhoods are defined, if we use the least squares criterion then for each region Rm the best estimator of the response, cm-hat, is just the average of the yi in the region Rm, i.e. cm-hat = ave(yi | xi in Rm).

Thus to obtain a regression tree we need to somehow obtain the neighborhoods R1, ..., RM. This is accomplished by an algorithm called recursive partitioning; see Breiman et al. (1984). We present the basic idea below through an example for the case where the number of neighborhoods is M = 2 and the number of predictor variables is p = 2. The task of determining the neighborhoods is solved by determining a split coordinate j, i.e. which variable to split on, and a split point s. A split coordinate and split point define the rectangles as R1(j, s) = {x : xj <= s} and R2(j, s) = {x : xj > s}.

The residual sum of squares (RSS) for a split determined by (j, s) is RSS(j, s) = sum over xi in R1(j, s) of (yi - ybar_R1)^2 + sum over xi in R2(j, s) of (yi - ybar_R2)^2, where ybar_Rk denotes the mean response in region Rk.

The goal at any given stage is to find the pair (j, s) such that RSS(j, s) is minimal, or equivalently such that the overall RSS is maximally reduced. This may seem overwhelming, however it only requires examining at most n - 1 splits for each variable, because the points in a neighborhood only change when the split point crosses an observed value. If we wish to split into three neighborhoods, i.e. split R1 or R2 after the first split, we have p(n - 1) possibilities for the first split and p(n - 2) possibilities for the second split, given the first split. In total we have p^2(n - 1)(n - 2) operations to find the best splits for three neighborhoods. In general for M neighborhoods we have on the order of

p^(M-1) (n - 1)(n - 2) ... (n - M + 1)

possibilities if all predictors are numeric! This gets too big for an exhaustive search, therefore we apply the splitting technique recursively (greedily). This is the basic idea of recursive partitioning. One starts with the first split and obtains R1 and R2 as explained above. This split stays

fixed, and the same splitting procedure is applied recursively to the two regions R1 and R2. This procedure is then repeated until we reach some stopping criterion, such as the nodes becoming homogeneous or containing very few observations. The rpart function uses two such stopping criteria. A node will not be split if it contains fewer than minsplit observations (default = 20). Additionally we can specify the minimum number of observations in a terminal node by specifying a value for minbucket (default = round(minsplit/3), i.e. 7 when minsplit = 20).
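The split search itself is easy to sketch in R. The function below is only a minimal illustration of the exhaustive search for the best first split (j, s) under squared error, not the rpart implementation; the data frame layout and names are assumptions for the example.

# Sketch of the best-first-split search; assumes `df` has the numeric response
# in its first column and numeric predictors in the remaining columns.
best.split <- function(df) {
  y <- df[[1]]
  best <- list(rss = Inf, var = NA, point = NA)
  for (j in names(df)[-1]) {
    x <- df[[j]]
    s.cand <- sort(unique(x))
    if (length(s.cand) < 2) next
    # candidate split points: midpoints between consecutive observed values
    s.cand <- (head(s.cand, -1) + tail(s.cand, -1)) / 2
    for (s in s.cand) {
      left <- y[x <= s]; right <- y[x > s]
      rss <- sum((left - mean(left))^2) + sum((right - mean(right))^2)
      if (rss < best$rss) best <- list(rss = rss, var = j, point = s)
    }
  }
  best
}

# e.g., with the ozone data used below:
# best.split(data.frame(upoz = Ozdata$upoz, inbh = Ozdata$inbh, safb = Ozdata$safb))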

The figures below, from pg. 306 of Elements of Statistical Learning, show a hypothetical tree fit based on two numeric predictors X1 and X2.

Let's examine these ideas using the ozone pollution data for the Los Angeles Basin discussed earlier in the course. For simplicity we consider the case where p = 2. Here we will develop a regression tree using rpart for predicting upper ozone concentration (upoz) from the inversion base height (inbh) and the temperature at Sandburg Air Force Base (safb).

> library(rpart)
> attach(Ozdata)
> oz.rpart <- rpart(upoz ~ inbh + safb)
> summary(oz.rpart)
> plot(oz.rpart)
> text(oz.rpart)

> post(oz.rpart,"Regression Tree for Upper Ozone Concentration")

Plot the fitted surface
> x1 = seq(min(inbh),max(inbh),length=100)
> x2 = seq(min(safb),max(safb),length=100)
> x = expand.grid(inbh=x1,safb=x2)
> ypred = predict(oz.rpart,newdata=x)
> persp(x1,x2,z=matrix(ypred,100,100),theta=45,xlab="INBH",
+ ylab="SAFB",zlab="UPOZ")

> plot(oz.rpart,uniform=T,branch=1,compress=T,margin=0.05,cex=.5)
> text(oz.rpart,all=T,use.n=T,fancy=T,cex=.7)
> title(main="Regression Tree for Upper Ozone Concentration")

Example 6.1: Infant Mortality Rates for 77 Largest U.S. Cities in 2000

In this example we will examine how to build regression trees using functions in the packages rpart and tree. We will also examine use of the maptree package to plot the results.

> infmort.rpart = rpart(infmort~.,data=City,control=rpart.control(minsplit=10))
> summary(infmort.rpart)
Call:
rpart(formula = infmort ~ ., data = City, minsplit = 10)
  n= 77

           CP nsplit  rel error    xerror       xstd
1  0.53569704      0 1.00000000 1.0108944 0.18722505
2  0.10310955      1 0.46430296 0.5912858 0.08956209
3  0.08865804      2 0.36119341 0.6386809 0.09834998
4  0.03838630      3 0.27253537 0.5959633 0.09376897
5  0.03645758      4 0.23414907 0.6205958 0.11162033
6  0.02532618      5 0.19769149 0.6432091 0.11543351
7  0.02242248      6 0.17236531 0.6792245 0.11551694
8  0.01968056      7 0.14994283 0.7060502 0.11773100
9  0.01322338      8 0.13026228 0.6949660 0.11671223
10 0.01040108      9 0.11703890 0.6661967 0.11526389
11 0.01019740     10 0.10663782 0.6749224 0.11583334
12 0.01000000     11 0.09644043 0.6749224 0.11583334

Node number 1: 77 observations, complexity param=0.535697 mean=12.03896, MSE=12.31978 left son=2 (52 obs) right son=3 (25 obs) Primary splits: pct.black < 29.55 to the left, improve=0.5356970, (0 missing) growth < -5.55 to the right, improve=0.4818361, (0 missing) pct1par < 31.25 to the left, improve=0.4493385, (0 missing) precip < 36.45 to the left, improve=0.3765841, (0 missing) laborchg < 2.85 to the right, improve=0.3481261, (0 missing)

Surrogate splits: growth < -2.6 to the right, agree=0.896, adj=0.68, (0 split) pct1par < 31.25 to the left, agree=0.896, adj=0.68, (0 split) laborchg < 2.85 to the right, agree=0.857, adj=0.56, (0 split) poverty < 21.5 to the left, agree=0.844, adj=0.52, (0 split) income < 24711 to the right, agree=0.818, adj=0.44, (0 split)

Node number 2: 52 observations, complexity param=0.08865804 mean=10.25769, MSE=4.433595 left son=4 (34 obs) right son=5 (18 obs) Primary splits: precip < 36.2 to the left, improve=0.3647980, (0 missing) pct.black < 12.6 to the left, improve=0.3395304, (0 missing) pct.hisp < 4.15 to the right, improve=0.3325635, (0 missing) pct1hous < 29.05 to the left, improve=0.3058060, (0 missing) hisp.pop < 14321 to the right, improve=0.2745090, (0 missing) Surrogate splits: pct.black < 22.3 to the left, agree=0.865, adj=0.611, (0 split) pct.hisp < 3.8 to the right, agree=0.865, adj=0.611, (0 split) hisp.pop < 7790.5 to the right, agree=0.808, adj=0.444, (0 split) growth < 6.95 to the right, agree=0.769, adj=0.333, (0 split) taxes < 427 to the left, agree=0.769, adj=0.333, (0 split)

5 Node number 3: 25 observations, complexity param=0.1031095 mean=15.744, MSE=8.396064 left son=6 (20 obs) right son=7 (5 obs) Primary splits: pct1par < 45.05 to the left, improve=0.4659903, (0 missing) growth < -5.55 to the right, improve=0.4215004, (0 missing) pct.black < 62.6 to the left, improve=0.4061168, (0 missing) pop2 < 637364.5 to the left, improve=0.3398599, (0 missing) black.pop < 321232.5 to the left, improve=0.3398599, (0 missing) Surrogate splits: pct.black < 56.85 to the left, agree=0.92, adj=0.6, (0 split) growth < -15.6 to the right, agree=0.88, adj=0.4, (0 split) welfare < 22.05 to the left, agree=0.88, adj=0.4, (0 split) unemprt < 11.3 to the left, agree=0.88, adj=0.4, (0 split) black.pop < 367170.5 to the left, agree=0.84, adj=0.2, (0 split)

Node number 4: 34 observations, complexity param=0.0383863 mean=9.332353, MSE=2.830424 left son=8 (5 obs) right son=9 (29 obs) Primary splits: medrent < 614 to the right, improve=0.3783900, (0 missing) black.pop < 9584.5 to the left, improve=0.3326486, (0 missing) pct.black < 2.8 to the left, improve=0.3326486, (0 missing) income < 29782 to the right, improve=0.2974224, (0 missing) medv < 160100 to the right, improve=0.2594415, (0 missing) Surrogate splits: area < 48.35 to the left, agree=0.941, adj=0.6, (0 split) income < 35105 to the right, agree=0.941, adj=0.6, (0 split) medv < 251800 to the right, agree=0.941, adj=0.6, (0 split) popdens < 9701.5 to the right, agree=0.912, adj=0.4, (0 split) black.pop < 9584.5 to the left, agree=0.912, adj=0.4, (0 split)

Node number 5: 18 observations, complexity param=0.02242248 mean=12.00556, MSE=2.789414 left son=10 (7 obs) right son=11 (11 obs) Primary splits: july < 78.95 to the right, improve=0.4236351, (0 missing) pctmanu < 11.65 to the right, improve=0.3333122, (0 missing) pct.AIP < 1.5 to the left, improve=0.3293758, (0 missing) pctdeg < 18.25 to the left, improve=0.3078859, (0 missing) precip < 49.7 to the right, improve=0.3078859, (0 missing) Surrogate splits: oldhous < 15.05 to the left, agree=0.833, adj=0.571, (0 split) precip < 46.15 to the right, agree=0.833, adj=0.571, (0 split) area < 417.5 to the right, agree=0.778, adj=0.429, (0 split) pctdeg < 19.4 to the left, agree=0.778, adj=0.429, (0 split) unemprt < 5.7 to the right, agree=0.778, adj=0.429, (0 split)

Node number 6: 20 observations, complexity param=0.03645758 mean=14.755, MSE=4.250475

6 left son=12 (10 obs) right son=13 (10 obs) Primary splits: growth < -5.55 to the right, improve=0.4068310, (0 missing) medv < 50050 to the right, improve=0.4023256, (0 missing) pct.AIP < 0.85 to the right, improve=0.4019953, (0 missing) pctrent < 54.3 to the right, improve=0.3764815, (0 missing) pctold < 13.95 to the left, improve=0.3670365, (0 missing) Surrogate splits: pctold < 13.95 to the left, agree=0.85, adj=0.7, (0 split) laborchg < -1.8 to the right, agree=0.85, adj=0.7, (0 split) black.pop < 165806 to the left, agree=0.80, adj=0.6, (0 split) pct.black < 45.25 to the left, agree=0.80, adj=0.6, (0 split) pct.AIP < 1.05 to the right, agree=0.80, adj=0.6, (0 split)

Node number 7: 5 observations mean=19.7, MSE=5.416

Node number 8: 5 observations mean=6.84, MSE=1.5784

Node number 9: 29 observations, complexity param=0.01968056 mean=9.762069, MSE=1.79063 left son=18 (3 obs) right son=19 (26 obs) Primary splits: laborchg < 55.9 to the right, improve=0.3595234, (0 missing) growth < 61.7 to the right, improve=0.3439875, (0 missing) taxes < 281.5 to the left, improve=0.3185654, (0 missing) july < 82.15 to the right, improve=0.2644400, (0 missing) pct.hisp < 5.8 to the right, improve=0.2537809, (0 missing) Surrogate splits: growth < 61.7 to the right, agree=0.966, adj=0.667, (0 split) welfare < 3.85 to the left, agree=0.966, adj=0.667, (0 split) pct1par < 17.95 to the left, agree=0.966, adj=0.667, (0 split) ptrans < 1 to the left, agree=0.966, adj=0.667, (0 split) pop2 < 167632.5 to the left, agree=0.931, adj=0.333, (0 split)

Node number 10: 7 observations mean=10.64286, MSE=2.21102

Node number 11: 11 observations, complexity param=0.0101974 mean=12.87273, MSE=1.223802 left son=22 (6 obs) right son=23 (5 obs) Primary splits: pctmanu < 11.95 to the right, improve=0.7185868, (0 missing) july < 72 to the left, improve=0.5226933, (0 missing) black.pop < 54663.5 to the left, improve=0.5125632, (0 missing) pop2 < 396053.5 to the left, improve=0.3858185, (0 missing) pctenr < 88.65 to the right, improve=0.3858185, (0 missing) Surrogate splits: popdens < 6395.5 to the left, agree=0.818, adj=0.6, (0 split) black.pop < 54663.5 to the left, agree=0.818, adj=0.6, (0 split) taxes < 591 to the left, agree=0.818, adj=0.6, (0 split) welfare < 8.15 to the left, agree=0.818, adj=0.6, (0 split)

poverty < 15.85 to the left, agree=0.818, adj=0.6, (0 split)

Node number 12: 10 observations, complexity param=0.01322338 mean=13.44, MSE=1.8084 left son=24 (5 obs) right son=25 (5 obs) Primary splits: pct1hous < 30.1 to the right, improve=0.6936518, (0 missing) precip < 41.9 to the left, improve=0.6936518, (0 missing) laborchg < 1.05 to the left, improve=0.6902813, (0 missing) welfare < 13.2 to the right, improve=0.6619479, (0 missing) ptrans < 5.25 to the right, improve=0.6087241, (0 missing) Surrogate splits: precip < 41.9 to the left, agree=1.0, adj=1.0, (0 split) pop2 < 327405.5 to the right, agree=0.9, adj=0.8, (0 split) black.pop < 127297.5 to the right, agree=0.9, adj=0.8, (0 split) pctold < 11.75 to the right, agree=0.9, adj=0.8, (0 split) welfare < 13.2 to the right, agree=0.9, adj=0.8, (0 split)

Node number 13: 10 observations, complexity param=0.02532618 mean=16.07, MSE=3.2341 left son=26 (5 obs) right son=27 (5 obs) Primary splits: pctrent < 52.9 to the right, improve=0.7428651, (0 missing) pct1hous < 32 to the right, improve=0.5460870, (0 missing) pct1par < 39.55 to the right, improve=0.4378652, (0 missing) pct.hisp < 0.8 to the right, improve=0.4277646, (0 missing) pct.AIP < 0.85 to the right, improve=0.4277646, (0 missing) Surrogate splits: area < 62 to the left, agree=0.8, adj=0.6, (0 split) pct.hisp < 0.8 to the right, agree=0.8, adj=0.6, (0 split) pct.AIP < 0.85 to the right, agree=0.8, adj=0.6, (0 split) pctdeg < 18.5 to the right, agree=0.8, adj=0.6, (0 split) taxes < 560 to the right, agree=0.8, adj=0.6, (0 split)

Node number 18: 3 observations mean=7.4, MSE=1.886667

Node number 19: 26 observations, complexity param=0.01040108 mean=10.03462, MSE=1.061494 left son=38 (14 obs) right son=39 (12 obs) Primary splits: pct.hisp < 20.35 to the right, improve=0.3575042, (0 missing) hisp.pop < 55739.5 to the right, improve=0.3013295, (0 missing) pctold < 11.55 to the left, improve=0.3007143, (0 missing) pctrent < 41 to the right, improve=0.2742615, (0 missing) taxes < 375.5 to the left, improve=0.2577731, (0 missing)

Surrogate splits: hisp.pop < 39157.5 to the right, agree=0.885, adj=0.75, (0 split) pctold < 11.15 to the left, agree=0.769, adj=0.50, (0 split) pct1par < 23 to the right, agree=0.769, adj=0.50, (0 split) pctrent < 41.85 to the right, agree=0.769, adj=0.50, (0 split) precip < 15.1 to the left, agree=0.769, adj=0.50, (0 split)

Node number 22: 6 observations mean=12.01667, MSE=0.4013889

Node number 23: 5 observations mean=13.9, MSE=0.276

Node number 24: 5 observations mean=12.32, MSE=0.6376

Node number 25: 5 observations mean=14.56, MSE=0.4704

Node number 26: 5 observations mean=14.52, MSE=0.9496

Node number 27: 5 observations mean=17.62, MSE=0.7136

Node number 38: 14 observations mean=9.464286, MSE=0.7994388

Node number 39: 12 observations mean=10.7, MSE=0.545

> plot(infmort.rpart)
> text(infmort.rpart)

> path.rpart(infmort.rpart)  ← clicking on 3 leftmost terminal nodes, right-click to stop

node number: 8 root pct.black< 29.55 precip< 36.2 medrent>=614

node number: 18 root pct.black< 29.55 precip< 36.2 medrent< 614 laborchg>=55.9

node number: 38 root pct.black< 29.55 precip< 36.2 medrent< 614 laborchg< 55.9 pct.hisp>=20.35

The package maptree has a function draw.tree() that plots trees slightly differently.

> draw.tree(infmort.rpart)

Examining cross-validation results

> printcp(infmort.rpart)

Regression tree: rpart(formula = infmort ~ ., data = City, minsplit = 10)

Variables actually used in tree construction: [1] growth july laborchg medrent pct.black pct.hisp [7] pct1hous pct1par pctmanu pctrent precip

Root node error: 948.62/77 = 12.32 n= 77

         CP nsplit rel error  xerror     xstd
1  0.535697      0   1.00000 1.01089 0.187225
2  0.103110      1   0.46430 0.59129 0.089562
3  0.088658      2   0.36119 0.63868 0.098350
4  0.038386      3   0.27254 0.59596 0.093769
5  0.036458      4   0.23415 0.62060 0.111620
6  0.025326      5   0.19769 0.64321 0.115434
7  0.022422      6   0.17237 0.67922 0.115517
8  0.019681      7   0.14994 0.70605 0.117731
9  0.013223      8   0.13026 0.69497 0.116712
10 0.010401      9   0.11704 0.66620 0.115264
11 0.010197     10   0.10664 0.67492 0.115833
12 0.010000     11   0.09644 0.67492 0.115833

> plotcp(infmort.rpart)

The 1-SE rule for choosing a tree size:
1. Find the smallest xerror and add the corresponding xstd to it.
2. Choose the first tree size that has an xerror smaller than the result from step 1.
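The rule is easy to apply directly to the cptable stored in the fitted rpart object; a minimal sketch, assuming the infmort.rpart fit from above:

# Sketch of the 1-SE rule applied to the rpart cptable
cpt <- infmort.rpart$cptable
cutoff <- min(cpt[, "xerror"]) + cpt[which.min(cpt[, "xerror"]), "xstd"]
best.row <- which(cpt[, "xerror"] < cutoff)[1]   # first tree size below the cutoff
cpt[best.row, ]                                  # its CP value can be passed to prune()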

A very small tree (2 splits or 3 terminal nodes) is suggested by cross-validation, but the larger trees cross-validate reasonably well so we might choose a larger tree just because it is more interesting from a practical standpoint.

11 > plot(City$infmort,predict(infmort.rpart)) > row.names(City) [1] "New.York.NY" "Los.Angeles.CA" "Chicago.IL" "Houston.TX" "Philadelphia.PA" [6] "San.Diego.CA" "Dallas.TX" "Phoenix.AZ" "Detroit.MI" "San.Antonio.TX" [11] "San.Jose.CA" "Indianapolis.IN" "San.Francisco.CA" "Baltimore.MD" "Jacksonville.FL" [16] "Columbus.OH" "Milwaukee.WI" "Memphis.TN" "Washington.DC" "Boston.MA" [21] "El.Paso.TX" "Seattle.WA" "Cleveland.OH" "Nashville.Davidson.TN" "Austin.TX" [26] "New.Orleans.LA" "Denver.CO" "Fort.Worth.TX" "Oklahoma.City.OK" "Portland.OR" [31] "Long.Beach.CA" "Kansas.City.MO" "Virginia.Beach.VA" "Charlotte.NC" "Tucson.AZ" [36] "Albuquerque.NM" "Atlanta.GA" "St.Louis.MO" "Sacramento.CA" "Fresno.CA" [41] "Tulsa.OK" "Oakland.CA" "Honolulu.CDP.HI" "Miami.FL" "Pittsburgh.PA" [46] "Cincinnati.OH" "Minneapolis.MN" "Omaha.NE" "Toledo.OH" "Buffalo.NY" [51] "Wichita.KS" "Mesa.AZ" "Colorado.Springs.CO" "Las.Vegas.NV" "Santa.Ana.CA" [56] "Tampa.FL" "Arlington.TX" "Anaheim.CA" "Louisville.KY" "St.Paul.MN" [61] "Newark.NJ" "Corpus.Christi.TX" "Birmingham.AL" "Norfolk.VA" "Anchorage.AK" [66] "Aurora.CO" "Riverside.CA" "St.Petersburg.FL" "Rochester.NY" "Lexington.Fayette.KY" [71] "Jersey.City.NJ" "Baton.Rouge.LA" "Akron.OH" "Raleigh.NC" "Stockton.CA" [76] "Richmond.VA" "Mobile.AL" > identify(City$infmort,predict(infmort.rpart),labels=row.names(City)) [1] 14 19 37 44 61 63  identify some interesting points > abline(0,1)  adds line to the plot

> post(infmort.rpart)  ← creates a postscript version of the tree. You will need to download a postscript viewer add-on for Adobe Reader to open it. Google "Postscript Viewer" and grab the one off of cnet - (http://download.cnet.com/Postscript-Viewer/3000-2094_4-10845650.html)

Using the draw.tree function from the maptree package we can produce the following display of the full infant mortality regression tree.

> draw.tree(infmort.rpart)

Another function in the maptree library is the group.tree command, which will label the observations according to the terminal nodes they are in. This can be particularly interesting when the observations have meaningful labels or are spatially distributed.

> infmort.groups = group.tree(infmort.rpart) > infmort.groups

Here is a little function to display groups of observations in a data set given the group identifier.

> groups = function(g,dframe) {
+   ng <- length(unique(g))
+   for(i in 1:ng) {
+     cat(paste("GROUP ", i))
+     cat("\n")
+     cat("======\n")
+     cat(row.names(dframe)[g == i])
+     cat("\n\n")
+   }
+   cat(" \n\n")
+ }

> groups(infmort.groups,City)

GROUP 1 ======San.Jose.CA San.Francisco.CA Honolulu.CDP.HI Santa.Ana.CA Anaheim.CA

GROUP 2 ======Mesa.AZ Las.Vegas.NV Arlington.TX

GROUP 3 ======Los.Angeles.CA San.Diego.CA Dallas.TX San.Antonio.TX El.Paso.TX Austin.TX Denver.CO Long.Beach.CA Tucson.AZ Albuquerque.NM Fresno.CA Corpus.Christi.TX Riverside.CA Stockton.CA

GROUP 4 ======Phoenix.AZ Fort.Worth.TX Oklahoma.City.OK Sacramento.CA Minneapolis.MN Omaha.NE Toledo.OH Wichita.KS Colorado.Springs.CO St.Paul.MN Anchorage.AK Aurora.CO

GROUP 5 ======Houston.TX Jacksonville.FL Nashville.Davidson.TN Tulsa.OK Miami.FL Tampa.FL St.Petersburg.FL

GROUP 6 ======Indianapolis.IN Seattle.WA Portland.OR Lexington.Fayette.KY Akron.OH Raleigh.NC

GROUP 7 ======New.York.NY Columbus.OH Boston.MA Virginia.Beach.VA Pittsburgh.PA

GROUP 8 ======Milwaukee.WI Kansas.City.MO Oakland.CA Cincinnati.OH Rochester.NY

GROUP 9 ======Charlotte.NC Norfolk.VA Jersey.City.NJ Baton.Rouge.LA Mobile.AL

GROUP 10 ======Chicago.IL New.Orleans.LA St.Louis.MO Buffalo.NY Richmond.VA

GROUP 11 ======Philadelphia.PA Memphis.TN Cleveland.OH Louisville.KY Birmingham.AL

GROUP 12 ======Detroit.MI Baltimore.MD Washington.DC Atlanta.GA Newark.NJ

The groups of cities certainly make sense intuitively.

What’s next? (1) More Examples

(2) Recent advances in tree-based regression models, namely Bagging, Random Forests and Boosting. Bagging and Boosting can be used with other modeling methods as well.

Example 6.2: Predicting/Modeling CPU Performance

> head(cpus)
            name syct mmin  mmax cach chmin chmax perf estperf
1  ADVISOR 32/60  125  256  6000  256    16   128  198     199
2  AMDAHL 470V/7   29 8000 32000   32     8    32  269     253
3  AMDAHL 470/7A   29 8000 32000   32     8    32  220     253
4 AMDAHL 470V/7B   29 8000 32000   32     8    32  172     253
5 AMDAHL 470V/7C   29 8000 16000   32     8    16  132     132
6  AMDAHL 470V/8   26 8000 32000   64     8    32  318     290

> Performance = cpus$perf > Statplot(Performance) > Statplot(log(Performance))

> cpus.tree = rpart(log(Performance)~.,data=cpus[,2:7],cp=.001)

By default rpart() uses a complexity penalty of cp = .01 which will prune off more terminal nodes than we might want to consider initially. I will generally use a smaller

value of cp (e.g. .001) to lead to a tree that is larger but will likely overfit the data. Also, if you really want a large tree you can use the arguments below when calling rpart: control=rpart.control(minsplit=##,minbucket=##).

> printcp(cpus.tree)

Regression tree: rpart(formula = log(Performance) ~ ., data = cpus[, 2:7], cp = 0.001)

Variables actually used in tree construction: [1] cach chmax chmin mmax syct

Root node error: 228.59/209 = 1.0938 n= 209

          CP nsplit rel error  xerror     xstd
1  0.5492697      0   1.00000 1.02344 0.098997
2  0.0893390      1   0.45073 0.48514 0.049317
3  0.0876332      2   0.36139 0.43673 0.043209
4  0.0328159      3   0.27376 0.33004 0.033541
5  0.0269220      4   0.24094 0.34662 0.034437
6  0.0185561      5   0.21402 0.32769 0.034732
7  0.0167992      6   0.19546 0.31008 0.031878
8  0.0157908      7   0.17866 0.29809 0.030863
9  0.0094604      9   0.14708 0.27080 0.028558
10 0.0054766     10   0.13762 0.24297 0.026055  ← within 1 SE (xstd) of min
11 0.0052307     11   0.13215 0.24232 0.026039
12 0.0043985     12   0.12692 0.23530 0.025449
13 0.0022883     13   0.12252 0.23783 0.025427
14 0.0022704     14   0.12023 0.23683 0.025407
15 0.0014131     15   0.11796 0.23703 0.025453
16 0.0010000     16   0.11655 0.23455 0.025286

> plotcp(cpus.tree)

[plotcp display: X-val relative error versus cp, with the size of tree (1-17) shown along the top axis]

> plot(cpus.tree,uniform=T)
> text(cpus.tree,cex=.7)

Prune the tree back to a 10 split, 11 terminal node tree using cp = .0055.

> cpus.tree = rpart(log(Performance)~.,data=cpus[,2:7],cp=.0055)
> post(cpus.tree)
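As an aside (not part of the original transcript), essentially the same pruned tree can typically be obtained without refitting by keeping the large cp = .001 fit under its own name and using rpart's prune() function; the object names below are hypothetical.

# Sketch: prune the large tree back to cp = .0055 instead of refitting
cpus.tree.big <- rpart(log(Performance) ~ ., data = cpus[, 2:7], cp = 0.001)
cpus.pruned <- prune(cpus.tree.big, cp = 0.0055)   # snip splits with CP below .0055
plot(cpus.pruned, uniform = TRUE)
text(cpus.pruned, cex = 0.7)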

> plot(log(Performance),predict(cpus.tree),ylab="Fitted Values",main="Fitted Values vs. log(Performance)")
> abline(0,1)

> plot(predict(cpus.tree),resid(cpus.tree),xlab="Fitted Values",
  ylab="Residuals",main="Residuals vs. Fitted (cpus.tree)")
> abline(h=0)

6.3 - Bagging for Regression Trees

Suppose we are interested in predicting a numeric response variable y, and fhat(x) denotes our fitted prediction model based on a training sample of size n.

For example, fhat(x) might come from an OLS model selected using Mallow's Cp with all potential predictors, or from a CART model using x with a complexity parameter cp = .005. Letting fbar(x) denote E[fhat(x)], where the expectation is with respect to the distribution underlying the training sample (since, viewed as a random variable, fhat(x) is a function of the training sample, which can be viewed as a high-dimensional random variable) and not x (which is considered fixed), we have that:

E[(y - fhat(x))^2] = E[(y - fbar(x))^2] + E[(fhat(x) - fbar(x))^2] >= E[(y - fbar(x))^2].

Thus in theory, if our prediction could be based on fbar(x) instead of fhat(x), then we would have a smaller mean squared error for prediction (and a smaller RMSEP as well). How can we approximate fbar(x)? We could take a large number of samples of size n from the population, fit the specified model to each, and then average across these samples to get an average model; more specifically, we could get the average prediction from the different models for a given set of predictor values x. Of course, this is silly as we generally only take one sample of size n when conducting any study. However, we can approximate this via the bootstrap. The bootstrap, remember, involves taking B random samples of size n drawn with replacement from our original sample. For each bootstrap sample b we obtain an estimated model fhat_b(x), and we average these to obtain an estimate of fbar(x), i.e.

fhat_bag(x) = (1/B) * sum over b = 1, ..., B of fhat_b(x).

This estimator for fbar(x) should in theory be better than the one obtained from the training data alone. This process of averaging the predicted values for a given x is called bagging (bootstrap aggregation). Bagging works best when the fitted models vary substantially from one bootstrap sample to the next. Modeling schemes that are complicated and involve the effective estimation of a large number of parameters will benefit from bagging the most. Projection pursuit, CART and MARS are examples of algorithms where this is likely to be the case.
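The idea is simple enough to sketch by hand with rpart before turning to the ipred package's bagging() function used below. The data frame Ozdata2 and cp value are the ones from the example that follows; the function names here are illustrative only.

# Minimal hand-rolled bagging sketch for regression trees
library(rpart)

bag.rpart <- function(formula, data, B = 25, cp = 0.007) {
  n <- nrow(data)
  fits <- vector("list", B)
  for (b in 1:B) {
    boot <- data[sample(1:n, n, replace = TRUE), ]   # bootstrap sample of size n
    fits[[b]] <- rpart(formula, data = boot, cp = cp)
  }
  fits
}

bag.predict <- function(fits, newdata) {
  rowMeans(sapply(fits, predict, newdata = newdata)) # average the B tree predictions
}

# e.g. oz.fits <- bag.rpart(tupoz ~ ., data = Ozdata2)
#      yhat    <- bag.predict(oz.fits, newdata = Ozdata2)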

Example 6.3: Bagging with the Upper Ozone Concentration

> Statplot(Ozdata$upoz) > tupoz = Ozdata$upoz^.333 > Statplot(tupoz)

> names(Ozdata)
[1] "day" "v500" "wind" "hum" "safb" "inbh" "dagg" "inbt" "vis" "upoz"

> Ozdata2 = data.frame(tupoz,Ozdata[,-10]) > oz.rpart = rpart(tupoz~.,data=Ozdata2,cp=.001) > plotcp(oz.rpart)

The complexity parameter cp = .007 looks like a good choice.

> oz.rpart2 = rpart(tupoz~.,data=Ozdata2,cp=.007) > post(oz.rpart2)

> badRMSE = sqrt(sum(resid(oz.rpart2)^2)/330)
> badRMSE
[1] 0.2333304

> results = rpart.cv(oz.rpart2,data=Ozdata2,cp=.007) > mean(results) [1] 0.3279498

MCCV code for RPART

> rpart.cv = function(fit,data,p=.667,B=100,cp=.01) {
+   n <- dim(data)[1]
+   cv <- rep(0,B)
+   y = fit$y
+   for (i in 1:B) {
+     ss <- floor(n*p)
+     sam <- sample(1:n,ss,replace=F)
+     fit2 <- rpart(formula(fit),data=data[-sam,],cp=cp)
+     ynew <- predict(fit2,newdata=data[sam,])
+     cv[i] <- sqrt(sum((y[sam]-ynew)^2)/ss)
+   }
+   cv
+ }

We now consider using bagging to improve the predictions from CART.

> oz.bag = bagging(tupoz~.,data=Ozdata2,cp=.007,nbagg=10,coob=T)
> print(oz.bag)

Bagging regression trees with 10 bootstrap replications

Call: bagging.data.frame(formula = tupoz ~ ., data = Ozdata2, cp = 0.007, nbagg = 10, coob = T)

Out-of-bag estimate of root mean squared error: 0.2745

> oz.bag = bagging(tupoz~.,data=Ozdata2,cp=.007,nbagg=25,coob=T) > print(oz.bag)

Bagging regression trees with 25 bootstrap replications

Call: bagging.data.frame(formula = tupoz ~ ., data = Ozdata2, cp = 0.007, nbagg = 25, coob = T)

Out-of-bag estimate of root mean squared error: 0.2685

> oz.bag = bagging(tupoz~.,data=Ozdata2,cp=.007,nbagg=100,coob=T) > print(oz.bag)

Bagging regression trees with 100 bootstrap replications

Call: bagging.data.frame(formula = tupoz ~ ., data = Ozdata2, cp = 0.007, nbagg = 100, coob = T)

Out-of-bag estimate of root mean squared error: 0.2614

MCCV code for Bagging

> bag.cv = function(fit,data,p=.667,B=100,cp=.01) {
+   n <- dim(data)[1]
+   y = fit$y
+   cv <- rep(0,B)
+   for (i in 1:B) {
+     ss <- floor(n*p)
+     sam <- sample(1:n,ss,replace=F)
+     fit2 <- bagging(formula(fit),
+       data=data[-sam,],control=rpart.control(cp=cp))
+     ynew <- predict(fit2,newdata=data[sam,])
+     cv[i] <- sqrt(mean((y[sam]-ynew)^2))   # RMSEP, to match rpart.cv above
+   }
+   cv
+ }

> results = bag.cv(oz.rpart2,data=Ozdata2,cp=.007)
> mean(results)
[1] 0.2869069

The RMSEP estimate is lower for the bagged estimate.

6.4 - Random Forests

Random forests are another bootstrap-based method for building trees. In addition to using bootstrap samples to build multiple trees that are then averaged to obtain predictions, random forests also include randomness in the tree-building process for a given bootstrap sample. There are two packages (randomForest and party) for fitting random forest models in R, and there is also an implementation in JMP.

The algorithm for random forests from Elements of Statistical Learning is presented below.

The advantages of random forests are:

• It handles a very large number of input variables (e.g. QSAR data).
• It estimates the importance of input variables in the model.
• It returns proximities between cases, which can be useful for clustering, detecting outliers, and visualizing the data (unsupervised learning).
• Learning is faster than using the full set of potential predictors.
• Even though bootstrap sample trees will vary some, the predictions from the bootstrap trees will tend to be correlated. For example, the variables used to form the first split in bootstrap trees will tend to be the same, and thus the subsequent trees will tend to be similar and thus correlated. Correlation between the terms being averaged inflates the variance of the average, so in some cases bagging will not decrease the variance part of the MSE as much as we might think. In contrast, the trees in a random forest will vary even more than the bootstrap trees due to the random subsets of predictors being considered at each split. This leads to trees that tend to be less correlated, and thus the benefit of averaging the random forest trees is more pronounced than in bagging.
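A rough sketch of the forest-building idea using rpart is given below. It simplifies things by drawing the random predictor subset once per tree rather than at every split (the randomForest package does the latter), so it is for illustration only; all names are assumptions for the example.

# Simplified random-forest-style ensemble: bootstrap rows + random predictor subsets
library(rpart)

rf.sketch <- function(data, yname, B = 100, m = 3) {
  xnames <- setdiff(names(data), yname)
  n <- nrow(data)
  trees <- vector("list", B)
  for (b in 1:B) {
    boot <- data[sample(1:n, n, replace = TRUE), ]            # bootstrap sample
    vars <- sample(xnames, m)                                 # random subset of m predictors
    trees[[b]] <- rpart(reformulate(vars, response = yname), data = boot)
  }
  trees
}

# Predictions are the average over the B trees, e.g. for the ozone data below:
# trees <- rf.sketch(Ozdata2, "tupoz", m = 3)
# yhat  <- rowMeans(sapply(trees, predict, newdata = Ozdata2))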

Variable Importance

To measure variable importance, do the following. For each bootstrap sample b we first compute the Out-of-Bag (OOB) error rate, err_b. Next we randomly permute the OOB values of the variable xj while leaving the data on all other variables unchanged. If xj is important, permuting its values will reduce our ability to predict the response successfully for the OOB observations. We then make predictions using the permuted values of xj (with all the other predictors unchanged) to obtain err_b(permuted), which should be larger than the error rate on the unaltered data. The raw score for xj from tree b is the difference between these two OOB error rates,

d_b(xj) = err_b(permuted) - err_b.

Finally, average the raw scores over all the B trees in the forest,

dbar(xj) = (1/B) * sum over b = 1, ..., B of d_b(xj),

to obtain an overall measure of the importance of xj. This measure is called the raw permutation accuracy importance score for the variable. Assuming the B raw scores are independent from tree to tree, we can compute a straightforward estimate of their standard error as the standard deviation of the d_b(xj) values. Dividing the average raw importance score by this standard error gives what is called the mean decrease in accuracy for the variable.
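The permutation idea can be sketched directly in R. The helper below permutes a single predictor over the whole data set and compares prediction error before and after, so it only approximates the tree-by-tree OOB calculation that randomForest's importance() performs; the object names refer to the ozone example that follows.

# Rough permutation-importance sketch for one predictor
perm.imp <- function(rf, data, yname, xname) {
  y <- data[[yname]]
  base.mse <- mean((y - predict(rf, newdata = data))^2)
  perm <- data
  perm[[xname]] <- sample(perm[[xname]])        # permute the one variable
  perm.mse <- mean((y - predict(rf, newdata = perm))^2)
  perm.mse - base.mse                           # raw importance-style score
}

# e.g. perm.imp(oz.rf, Ozdata2, "tupoz", "safb")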

Example 1: L.A. Ozone Levels

> oz.rf = randomForest(tupoz~.,data=Ozdata2,importance=T) > oz.rf

Call: randomForest(formula = tupoz ~ ., data = Ozdata2, importance = T) Type of random forest: regression Number of trees: 500 No. of variables tried at each split: 3

Mean of squared residuals: 0.05886756 % Var explained: 78.11

> plot(tupoz,predict(oz.rf),xlab="Y",ylab="Fitted Values (Y-hat)")

Short function to display the mean decrease in accuracy from a random forest fit.

rfimp = function(rffit,horiz=T) {
  barplot(sort(rffit$importance[,1]),horiz=horiz,
    xlab="Mean Decrease in Accuracy",main="Variable Importance")
}

The temperature variables, inversion base temperature and Sandburg AFB temperature, are clearly the most important predictors in the random forest.

The random forest command with options is shown below. It should be noted that x and y can be replaced by the usual formula-building nomenclature, namely…

randomForest(y ~ . , data=Mydata, etc…)

From the R html help file:

Some important options have been highlighted in the generic function call above and they are summarized below:

• ntree = number of bootstrap trees in the forest.
• mtry = number of variables to randomly pick at each split (m in the notes).
• maxnodes = maximum number of terminal nodes to use in the bootstrap trees.
• importance = if TRUE, variable importances will be computed.
• proximity = should proximity measures among rows be computed?
• do.trace = report OOB MSE and OOB R-square for each of the ntree trees.
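For concreteness, a call using several of these options might look like the following; the argument names come from the randomForest help file, while the object name and tuning values are just for illustration.

# Illustrative randomForest call exercising the options above
oz.rf2 <- randomForest(tupoz ~ ., data = Ozdata2,
                       ntree = 1000,       # more bootstrap trees than the default 500
                       mtry = 3,           # variables tried at each split
                       importance = TRUE,  # compute variable importance measures
                       proximity = TRUE,   # compute case proximities
                       do.trace = 100)     # report OOB error every 100 trees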

Estimating RMSEP for random forest models can be done using the errorest function in the ipred package (i.e. the bagging one) as shown below. It is rather slow, so doing more than 10 replicates is not advisable. Each replicate does a 10-fold cross-validation B = 25 times per RMSEP estimate by default. It can be used with a variety of modeling methods; see the errorest help file for examples.

> error.RF = numeric(10)
> for (i in 1:10) error.RF[i] = errorest(tupoz~.,data=Ozdata2,model=randomForest)$error  ← this is one line.
> summary(error.RF)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
 0.2395  0.2424  0.2444  0.2448  0.2466  0.2506

Example 6.4: Predicting San Francisco List Prices of Single Family Homes

Using www.redfin.com it is easy to create interesting data sets regarding the list price of homes and different characteristics of the home. The map below shows all the single-family homes listed in San Francisco just south of downtown and the Golden Gate Bridge. Once you drill down to this level of detail you can download the prices and home characteristics in an Excel file. From there it is easy to load the data in JMP, edit it, and then read it into R.

> names(SFhomes) [1] "ListPrice" "BEDS" "BATHS" "SQFT" "YrBuilt" "ParkSpots" "Garage" "LATITUDE" "LONGITUDE"

> str(SFhomes) 'data.frame': 263 obs. of 9 variables: $ ListPrice: int 749000 499900 579000 1295000 688000 224500 378000 140000 530000 399000 ... $ BEDS : int 2 3 4 4 3 2 5 1 2 3 ... $ BATHS : num 1 1 1 3.25 2 1 2 1 1 2 ... $ SQFT : int 1150 1341 1429 2628 1889 995 1400 772 1240 1702 ... $ YrBuilt : int 1931 1927 1937 1937 1939 1944 1923 1915 1925 1908 ... $ ParkSpots: int 1 1 1 3 1 1 1 1 1 1 ... $ Garage : Factor w/ 2 levels "Garage","No": 1 1 1 1 1 1 1 2 1 2 ... $ LATITUDE : num 37.8 37.7 37.7 37.8 37.8 ... $ LONGITUDE: num -122 -122 -122 -122 -122 ...

- attr(*, "na.action")=Class 'omit' Named int [1:66] 7 11 23 30 32 34 36 43 55 62 ...... - attr(*, "names")= chr [1:66] "7" "11" "23" "30" ...


Note: I omitted missing values by using the command na.omit.

> SFhomes = na.omit(SFhomes)

> head(SFhomes) ListPrice BEDS BATHS SQFT YrBuilt ParkSpots Garage LATITUDE LONGITUDE 1 749000 2 1.00 1150 1931 1 Garage 37.76329 -122.4029 2 499900 3 1.00 1341 1927 1 Garage 37.72211 -122.4037 3 579000 4 1.00 1429 1937 1 Garage 37.72326 -122.4606 4 1295000 4 3.25 2628 1937 3 Garage 37.77747 -122.4502 5 688000 3 2.00 1889 1939 1 Garage 37.75287 -122.4857 6 224500 2 1.00 995 1944 1 Garage 37.72778 -122.3838

We will now develop a random forest model for the list price of the home as a function of the number of bedrooms, number of bathrooms, square footage, year built, number of parking spots, garage (Garage or No), latitude, and longitude.

> sf.rf = randomForest(ListPrice~.,data=SFhomes,importance=T) > rfimp(sf.rf,horiz=F)

> plot(SFhomes$ListPrice,predict(sf.rf),xlab="Y", ylab="Y-hat",main="Predict vs. Actual List Price") > abline(0,1)

> plot(predict(sf.rf),SFhomes$ListPrice - predict(sf.rf),xlab="Fitted Values",ylab="Residuals")
> abline(h=0)

Not good, severe heteroscedasticity and outliers.

> Statplot(SFhomes$ListPrice) > logList = log(SFhomes$ListPrice) > Statplot(logList)

> SFhomes.log = data.frame(logList,SFhomes[,-1]) > names(SFhomes.log) [1] "logList" "BEDS" "BATHS" "SQFT" "YrBuilt" "ParkSpots" "Garage" "LATITUDE" "LONGITUDE"

We now resume the model building process using the log of the list price as the response. To develop models we consider different choices for m, the number of predictors chosen in each random subset.

> attributes(sf.rf)

$names [1] "call" "type" "predicted" "mse" "rsq" "oob.times" [7] "importance" "importanceSD" "localImportance" "proximity" "ntree" "mtry" [13] "forest" "coefs" "y" "test" "inbag" "terms"

$class [1] "randomForest.formula" "randomForest"

> sf.rf$mtry [1] 2

> myforest = function(formula,data) {randomForest(formula,data,mtry=2)} > error.RF = numeric(10) > for (i in 1:10) error.RF[i] = errorest(logList~.,data=SFhomes.log,model=myforest)$error

> mean(error.RF) [1] 0.2600688 > summary(error.RF) Min. 1st Qu. Median Mean 3rd Qu. Max. 0.2544 0.2577 0.2603 0.2601 0.2606 0.2664

Now consider using different values for (m).

m = 3
> myforest = function(formula,data) {randomForest(formula,data,mtry=3)}
> for (i in 1:10) error.RF[i] =
+   errorest(logList~.,data=SFhomes.log,model=myforest)$error
> summary(error.RF)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
 0.2441  0.2465  0.2479  0.2486  0.2490  0.2590

m = 4
> myforest = function(formula,data) {randomForest(formula,data,mtry=4)}
> for (i in 1:10) error.RF[i] =
+   errorest(logList~.,data=SFhomes.log,model=myforest)$error
> summary(error.RF)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
 0.2359  0.2419  0.2425  0.2422  0.2445  0.2477

m = 5 *
> myforest = function(formula,data) {randomForest(formula,data,mtry=5)}
> for (i in 1:10) error.RF[i] =
+   errorest(logList~.,data=SFhomes.log,model=myforest)$error
> summary(error.RF)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
 0.2374  0.2391  0.2403  0.2418  0.2433  0.2500

m = 6
> myforest = function(formula,data) {randomForest(formula,data,mtry=6)}
> for (i in 1:10) error.RF[i] =
+   errorest(logList~.,data=SFhomes.log,model=myforest)$error

> summary(error.RF)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
 0.2362  0.2403  0.2430  0.2439  0.2462  0.2548

> sf.final = randomForest(logList~.,data=SFhomes.log,importance=T,mtry=5) > rfimp(sf.final,horiz=F)

> plot(logList,predict(sf.final),xlab="log(List Price)", ylab="Predicted log(List Price)")

> plot(predict(sf.final),logList - predict(sf.final),
  xlab="Fitted Values",ylab="Residuals")
> abline(h=0)

As with earlier methods, it is not hard to write our own MCCV function.

> rf.cv = function(fit,data,p=.667,B=100,mtry=fit$mtry,ntree=fit$ntree) {
+   n <- dim(data)[1]
+   cv <- rep(0,B)
+   y = fit$y
+   for (i in 1:B) {
+     ss <- floor(n*p)
+     sam <- sample(1:n,ss,replace=F)
+     fit2 <- randomForest(formula(fit),data=data[sam,],mtry=mtry,ntree=ntree)
+     ynew <- predict(fit2,newdata=data[-sam,])
+     cv[i] <- mean((y[-sam]-ynew)^2)
+   }
+   cv
+ }

This function also allows you to experiment with different values for (mtry) & (ntree).

> results = rf.cv(sf.final,data=SFhomes.log,mtry=2)
> mean(results)
[1] 0.3059704
> results = rf.cv(sf.final,data=SFhomes.log,mtry=3)
> mean(results)
[1] 0.2960605
> results = rf.cv(sf.final,data=SFhomes.log,mtry=4)
> mean(results)
[1] 0.2945276
> results = rf.cv(sf.final,data=SFhomes.log,mtry=5)  ← m = 5 is optimal.
> mean(results)
[1] 0.2889211

> results = rf.cv(sf.final,data=SFhomes.log,mtry=6)
> mean(results)
[1] 0.289072

Partial Plots to Visualize Predictor Effects

> names(SFhomes.log)
[1] "logList" "BEDS" "BATHS" "SQFT" "YrBuilt" "ParkSpots" "Garage" "LATITUDE" "LONGITUDE"
> partialPlot(sf.final,SFhomes.log,SQFT)
> partialPlot(sf.final,SFhomes.log,LONGITUDE)
> partialPlot(sf.final,SFhomes.log,LATITUDE)
> partialPlot(sf.final,SFhomes.log,BEDS)
> partialPlot(sf.final,SFhomes.log,BATHS)
> partialPlot(sf.final,SFhomes.log,YrBuilt)
> partialPlot(sf.final,SFhomes.log,ParkSpots)

6.6 - Boosting Regression Trees (R package – gbm)

Boosting, like bagging, is a way to combine or "average" the results of multiple trees in order to improve their predictive ability. Boosting, however, does not simply average trees constructed from bootstrap samples of the original data; rather, it creates a sequence of trees where the next tree in the sequence essentially uses the residuals from the previous trees as the response. This type of approach is referred to as gradient boosting. Using squared error as the measure of fit, the Gradient Tree Boosting Algorithm is given below.

Gradient Tree Boosting Algorithm (Squared Error)

1. Initialize f_0(x) = ybar, the mean of the response.

2. For m = 1, ..., M:
a) For i = 1, ..., n compute ri = yi - f_{m-1}(xi), which are simply the residuals from the previous fit.
b) Fit a regression tree using the ri as the response, giving terminal node regions Rjm, j = 1, ..., Jm.
c) For j = 1, ..., Jm compute the mean of the residuals in each of the terminal nodes; call these gjm.
d) Update the model as follows: f_m(x) = f_{m-1}(x) + λ * sum over j = 1, ..., Jm of gjm I(x in Rjm).

The parameter λ is a shrinkage parameter, which can be tweaked along with Jm and M to improve cross-validated predictive performance.

3. Output fhat(x) = f_M(x).

4. Stochastic Gradient Boosting uses the same algorithm as above, but takes a random subsample of the training data (without replacement), and grows the next tree using only those observations. A typical fraction for the subsamples would be ½ but smaller values could be used when n is large.

The authors of Elements of Statistical Learning recommend using roughly 4 to 8 terminal nodes (Jm) for the regression trees grown at each of the M steps. For classification trees smaller values of Jm are used, with 2 being optimal in many cases. A two terminal node tree is called a stump. Small values of the shrinkage λ have been found to produce superior results for regression problems, however this generally requires a large value for M. For example, for shrinkage values between .001 and .01 it is recommended that the number of iterations be between 3,000 and 10,000. Thus in terms of model development one needs to consider various combinations of Jm, λ, and M.
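The algorithm is short enough to sketch directly with rpart as the base learner. This is an illustrative implementation of the squared-error boosting steps above, not the gbm() implementation used in the examples; the function name and defaults are assumptions for the sketch.

# Minimal gradient tree boosting sketch (squared error) using rpart base trees
library(rpart)

boost.sketch <- function(formula, data, M = 100, lambda = 0.1, maxdepth = 2) {
  yname <- all.vars(formula)[1]
  y <- data[[yname]]
  fhat <- rep(mean(y), nrow(data))              # step 1: initialize with the mean
  trees <- vector("list", M)
  work <- data
  for (m in 1:M) {
    work[[yname]] <- y - fhat                   # step 2a: residuals become the response
    trees[[m]] <- rpart(formula, data = work,   # steps 2b/2c: small tree fit to residuals
                        control = rpart.control(maxdepth = maxdepth, cp = 0))
    fhat <- fhat + lambda * predict(trees[[m]], newdata = data)  # step 2d: shrunken update
  }
  list(init = mean(y), trees = trees, lambda = lambda)
}

# Predictions add the shrunken tree contributions to the initial mean, e.g.
# fit  <- boost.sketch(tupoz ~ ., data = Ozdata2, M = 200)
# yhat <- fit$init + fit$lambda * rowSums(sapply(fit$trees, predict, newdata = Ozdata2))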

The algorithm as presented above looks a bit daunting at first, however the graphic below simplifies the boosting concept considerably.

[Graphic: a sequence of boosted tree models combined into the final fit]

Here M = 49, so the final model is simply the initial fit plus the sum of the 49 shrunken tree contributions.

Let’s consider some examples.

Example 6.5: Boston Housing Data

> bos.gbm = gbm(logCMEDV~.,data=Boston3,
+   distribution="gaussian",
+   n.trees=1000,
+   shrinkage=.1,
+   interaction.depth=4,   ← small Jm
+   bag.fraction=0.5,
+   train.fraction=0.8,
+   n.minobsinnode=10,
+   cv.folds=5,
+   keep.data=T,
+   verbose=T)

> bos.gbm = gbm(logCMEDV~.,data=Boston3,distribution="gaussian", n.trees=2000,shrinkage=.1,interaction.depth=4,bag.fraction=.5, train.fraction=.8,n.minobsinnode=10,cv.folds=5,keep.data=T,verbose=T)

CV: 1 Iter TrainDeviance ValidDeviance StepSize Improve 1 0.1206 0.1379 0.1000 0.0183 2 0.1045 0.1189 0.1000 0.0143 3 0.0901 0.1029 0.1000 0.0138 4 0.0791 0.0923 0.1000 0.0109 5 0.0698 0.0833 0.1000 0.0096 6 0.0611 0.0738 0.1000 0.0071 7 0.0550 0.0664 0.1000 0.0056 8 0.0496 0.0600 0.1000 0.0055 9 0.0453 0.0567 0.1000 0.0040 10 0.0412 0.0530 0.1000 0.0032 100 0.0078 0.0242 0.1000 -0.0001 200 0.0044 0.0230 0.1000 -0.0001 300 0.0029 0.0228 0.1000 -0.0000 400 0.0019 0.0237 0.1000 -0.0000 500 0.0014 0.0243 0.1000 -0.0000 600 0.0010 0.0242 0.1000 -0.0000

36 700 0.0007 0.0248 0.1000 -0.0000 800 0.0006 0.0248 0.1000 -0.0000 900 0.0004 0.0249 0.1000 -0.0000 1000 0.0003 0.0248 0.1000 -0.0000 1100 0.0003 0.0250 0.1000 -0.0000 1200 0.0002 0.0252 0.1000 -0.0000 1300 0.0002 0.0251 0.1000 -0.0000 1400 0.0001 0.0252 0.1000 -0.0000 1500 0.0001 0.0254 0.1000 -0.0000 1600 0.0001 0.0255 0.1000 -0.0000 1700 0.0001 0.0256 0.1000 -0.0000 1800 0.0001 0.0255 0.1000 -0.0000 1900 0.0000 0.0256 0.1000 -0.0000 2000 0.0000 0.0255 0.1000 -0.0000

CV: 2 Iter TrainDeviance ValidDeviance StepSize Improve 1 0.1198 0.1367 0.1000 0.0193 2 0.1031 0.1225 0.1000 0.0172 3 0.0880 0.1048 0.1000 0.0144 4 0.0768 0.0914 0.1000 0.0087 5 0.0675 0.0829 0.1000 0.0080 6 0.0601 0.0751 0.1000 0.0057 7 0.0533 0.0693 0.1000 0.0052 8 0.0481 0.0646 0.1000 0.0052 9 0.0432 0.0590 0.1000 0.0044 10 0.0395 0.0553 0.1000 0.0035 100 0.0070 0.0279 0.1000 -0.0001 200 0.0038 0.0272 0.1000 -0.0001 300 0.0023 0.0271 0.1000 -0.0001 400 0.0015 0.0269 0.1000 -0.0000 500 0.0010 0.0269 0.1000 -0.0000 600 0.0007 0.0268 0.1000 -0.0000 700 0.0005 0.0267 0.1000 -0.0000 800 0.0004 0.0272 0.1000 -0.0000 900 0.0003 0.0269 0.1000 -0.0000 1000 0.0002 0.0268 0.1000 -0.0000 1100 0.0002 0.0272 0.1000 -0.0000 1200 0.0001 0.0272 0.1000 -0.0000 1300 0.0001 0.0269 0.1000 -0.0000 1400 0.0001 0.0270 0.1000 -0.0000 1500 0.0001 0.0272 0.1000 -0.0000 1600 0.0000 0.0271 0.1000 -0.0000 1700 0.0000 0.0270 0.1000 -0.0000 1800 0.0000 0.0271 0.1000 -0.0000 1900 0.0000 0.0270 0.1000 -0.0000 2000 0.0000 0.0271 0.1000 -0.0000

CV: 3 Iter TrainDeviance ValidDeviance StepSize Improve 1 0.1266 0.1089 0.1000 0.0201 2 0.1106 0.0984 0.1000 0.0170 3 0.0951 0.0854 0.1000 0.0147 4 0.0821 0.0757 0.1000 0.0118 5 0.0721 0.0683 0.1000 0.0098 6 0.0642 0.0615 0.1000 0.0074 7 0.0574 0.0569 0.1000 0.0070 8 0.0517 0.0536 0.1000 0.0047 9 0.0467 0.0504 0.1000 0.0051 10 0.0414 0.0470 0.1000 0.0041 100 0.0080 0.0237 0.1000 -0.0001 200 0.0042 0.0227 0.1000 -0.0001 300 0.0025 0.0233 0.1000 -0.0001 400 0.0016 0.0241 0.1000 -0.0000 500 0.0011 0.0240 0.1000 -0.0000 600 0.0008 0.0243 0.1000 -0.0000 700 0.0006 0.0243 0.1000 -0.0000 800 0.0004 0.0244 0.1000 -0.0000 900 0.0003 0.0243 0.1000 -0.0000 1000 0.0002 0.0245 0.1000 -0.0000 1100 0.0002 0.0246 0.1000 -0.0000 1200 0.0001 0.0245 0.1000 -0.0000 1300 0.0001 0.0245 0.1000 -0.0000 1400 0.0001 0.0246 0.1000 -0.0000 1500 0.0001 0.0246 0.1000 -0.0000

37 1600 0.0001 0.0248 0.1000 -0.0000 1700 0.0000 0.0247 0.1000 -0.0000 1800 0.0000 0.0247 0.1000 -0.0000 1900 0.0000 0.0248 0.1000 -0.0000 2000 0.0000 0.0248 0.1000 -0.0000

CV: 4 Iter TrainDeviance ValidDeviance StepSize Improve 1 0.1250 0.1217 0.1000 0.0216 2 0.1064 0.1044 0.1000 0.0198 3 0.0928 0.0926 0.1000 0.0139 4 0.0825 0.0834 0.1000 0.0101 5 0.0730 0.0726 0.1000 0.0087 6 0.0632 0.0640 0.1000 0.0081 7 0.0565 0.0571 0.1000 0.0052 8 0.0513 0.0513 0.1000 0.0058 9 0.0467 0.0479 0.1000 0.0031 10 0.0421 0.0431 0.1000 0.0044 100 0.0076 0.0233 0.1000 -0.0001 200 0.0043 0.0228 0.1000 -0.0001 300 0.0027 0.0230 0.1000 -0.0000 400 0.0017 0.0232 0.1000 -0.0000 500 0.0011 0.0230 0.1000 -0.0000 600 0.0008 0.0227 0.1000 -0.0000 700 0.0006 0.0227 0.1000 -0.0000 800 0.0004 0.0234 0.1000 -0.0000 900 0.0003 0.0232 0.1000 -0.0000 1000 0.0002 0.0231 0.1000 -0.0000 1100 0.0002 0.0230 0.1000 -0.0000 1200 0.0001 0.0231 0.1000 -0.0000 1300 0.0001 0.0232 0.1000 -0.0000 1400 0.0001 0.0233 0.1000 -0.0000 1500 0.0001 0.0234 0.1000 -0.0000 1600 0.0000 0.0233 0.1000 -0.0000 1700 0.0000 0.0233 0.1000 -0.0000 1800 0.0000 0.0233 0.1000 -0.0000 1900 0.0000 0.0234 0.1000 -0.0000 2000 0.0000 0.0233 0.1000 -0.0000

CV: 5 Iter TrainDeviance ValidDeviance StepSize Improve 1 0.1261 0.1130 0.1000 0.0200 2 0.1110 0.0972 0.1000 0.0136 3 0.0958 0.0830 0.1000 0.0149 4 0.0835 0.0716 0.1000 0.0111 5 0.0728 0.0621 0.1000 0.0096 6 0.0641 0.0534 0.1000 0.0080 7 0.0576 0.0482 0.1000 0.0064 8 0.0519 0.0426 0.1000 0.0045 9 0.0471 0.0387 0.1000 0.0044 10 0.0427 0.0347 0.1000 0.0037 100 0.0084 0.0122 0.1000 -0.0002 200 0.0048 0.0128 0.1000 -0.0001 300 0.0031 0.0134 0.1000 -0.0000 400 0.0021 0.0133 0.1000 -0.0000 500 0.0014 0.0141 0.1000 -0.0000 600 0.0010 0.0137 0.1000 -0.0000 700 0.0008 0.0139 0.1000 -0.0000 800 0.0006 0.0139 0.1000 -0.0000 900 0.0004 0.0137 0.1000 -0.0000 1000 0.0003 0.0138 0.1000 -0.0000 1100 0.0002 0.0139 0.1000 -0.0000 1200 0.0002 0.0139 0.1000 -0.0000 1300 0.0001 0.0140 0.1000 -0.0000 1400 0.0001 0.0140 0.1000 -0.0000 1500 0.0001 0.0140 0.1000 -0.0000 1600 0.0001 0.0141 0.1000 -0.0000 1700 0.0001 0.0141 0.1000 -0.0000 1800 0.0000 0.0141 0.1000 -0.0000 1900 0.0000 0.0142 0.1000 -0.0000 2000 0.0000 0.0142 0.1000 -0.0000

Iter TrainDeviance ValidDeviance StepSize Improve 1 0.1246 0.2433 0.1000 0.0189 2 0.1079 0.2169 0.1000 0.0164

38 3 0.0941 0.1899 0.1000 0.0137 4 0.0823 0.1705 0.1000 0.0126 5 0.0715 0.1492 0.1000 0.0101 6 0.0641 0.1387 0.1000 0.0081 7 0.0562 0.1235 0.1000 0.0081 8 0.0506 0.1126 0.1000 0.0056 9 0.0457 0.1052 0.1000 0.0049 10 0.0414 0.0998 0.1000 0.0030 100 0.0087 0.0524 0.1000 -0.0001 200 0.0050 0.0502 0.1000 -0.0001 300 0.0033 0.0488 0.1000 -0.0000 400 0.0023 0.0483 0.1000 -0.0000 500 0.0017 0.0462 0.1000 -0.0000 600 0.0012 0.0460 0.1000 -0.0000 700 0.0009 0.0453 0.1000 -0.0000 800 0.0007 0.0447 0.1000 -0.0000 900 0.0006 0.0441 0.1000 -0.0000 1000 0.0004 0.0450 0.1000 -0.0000 1100 0.0003 0.0444 0.1000 -0.0000 1200 0.0003 0.0445 0.1000 -0.0000 1300 0.0002 0.0443 0.1000 -0.0000 1400 0.0002 0.0445 0.1000 -0.0000 1500 0.0001 0.0445 0.1000 -0.0000 1600 0.0001 0.0443 0.1000 -0.0000 1700 0.0001 0.0444 0.1000 -0.0000 1800 0.0001 0.0443 0.1000 -0.0000 1900 0.0001 0.0443 0.1000 -0.0000 2000 0.0001 0.0442 0.1000 -0.0000

The verbose output obviously provides a lot of detail about the fitted model that is not particularly useful. We can plot the CV performance results using gbm.perf().

> gbm.perf(bos.gbm,method="OOB") [1] 45 > title(main="Out of Bag and Training Prediction Errors")

> gbm.perf(bos.gbm,method="test") [1] 869


> gbm.perf(bos.gbm,method="cv") [1] 159

> attach(Boston3)   ← in order to obtain predicted values we need the X's attached.
> yhat = predict(bos.gbm,n.trees=869)   ← M = 869, probably overfitting!

40 > plot(logCMEDV,yhat)

> cor(logCMEDV,yhat)
[1] 0.9719412
> cor(logCMEDV,yhat)^2   ← R2 = 94.47%
[1] 0.9446697

Using the number of trees suggested by 5-fold cross-validation instead, i.e. a smaller M:
> plot(logCMEDV,predict(bos.gbm,n.trees=159))

> cor(logCMEDV,predict(bos.gbm,n.trees=159)) [1] 0.9679083 > cor(logCMEDV,predict(bos.gbm,n.trees=159))^2 [1] 0.9368466

We can consider varying the shrinkage parameter λ as well. For example, we consider a smaller shrinkage value for the Boston Housing data. This requires a larger number of iterations (M), as evidenced below.

> gbm.perf(bos.gbm,method="OOB")
[1] 111
> gbm.perf(bos.gbm,method="cv")
[1] 273
> gbm.perf(bos.gbm,method="test")
[1] 1557

The fit is not much different from the model above, in fact it is actually a bit worse.

We might consider trying larger values for the shrinkage, say .15 or .20, but we will stick with the λ = .1 shrinkage choice for subsequent gradient boosting models. In the examples above we used trees with Jm = 4 terminal nodes. We might now consider increasing the model size (Jm) as well. Before exploring tree sizes, however, we will look at variable importances for our current model.

> best.iter = gbm.perf(bos.gbm,method="test")
> summary(bos.gbm,n.trees=best.iter)
       var    rel.inf
1    LSTAT 37.4903576
2       RM 23.8754098
3     CRIM  9.4464051
4      DIS  8.6265412
5      TAX  4.5582532
6        B  4.4923617
7      AGE  3.3002739
8      NOX  3.1613004
9  PTRATIO  2.3503516
10   INDUS  1.2405061
11     RAD  0.7834407
12    CHAS  0.5500685
13      ZN  0.1247303

We can consider increasing the size of the trees being averaged through the use of the interaction depth, which determines the number of terminal nodes in each tree (Jm above).

Jm = 6 > bos.gbm2 = gbm(logCMEDV~.,data=Boston3,interaction.depth=6, n.minobsinnode=10,n.trees=5000,bag.fraction=.5,train.fraction=.8, cv.folds=5,shrinkage=.1,distribution="gaussian")

> gbm.perf(bos.gbm2,method="OOB") [1] 103 > gbm.perf(bos.gbm2,method="cv") [1] 130 > gbm.perf(bos.gbm2,method="test") [1] 205

> yhat = predict(bos.gbm2,n.trees=best.iter) > plot(logCMEDV,yhat) > cor(logCMEDV,yhat) [1] 0.9656995 > cor(logCMEDV,yhat)^2 [1] 0.9325755

Jm = 8
> bos.gbm2 = gbm(logCMEDV~.,data=Boston3,interaction.depth=8,
  n.minobsinnode=10,n.trees=5000,bag.fraction=.5,train.fraction=.8,
  cv.folds=5,shrinkage=.1,distribution="gaussian")
> best.iter = gbm.perf(bos.gbm2,method="test")
[1] 557
> yhat = predict(bos.gbm2,n.trees=best.iter)
> cor(logCMEDV,yhat)
[1] 0.9733643
> cor(logCMEDV,yhat)^2
[1] 0.9474381   ← Highest yet, overfit?

Untransformed response

Surprisingly the non-transformed response model fits even better, with Jm = 4 only! The R-square is 96.84%.

> bos.gbm3 = gbm(CMEDV~.,data=Boston.working,interaction.depth=4, n.minobsinnode=10,n.trees=5000, bag.fraction=.5,train.fraction=.8, cv.folds=5,shrinkage=.1,distribution="gaussian")

> plot(Boston.working$CMEDV,yhat) > cor(Boston.working$CMEDV,yhat) [1] 0.9840717 > cor(Boston.working$CMEDV,yhat)^2 [1] 0.9683971

Plotting Results for bos.gbm3 (untransformed response)

> summary(bos.gbm3,n.trees=1344)   ← M from "test"
       var    rel.inf
1       RM 42.5477930
2    LSTAT 26.3242657
3      DIS  6.4239728
4     CRIM  6.0918666
5      AGE  3.8649814
6      NOX  3.1647953
7        B  2.8982479
8  PTRATIO  2.6237198
9      TAX  2.4587622
10    CHAS  1.4373884
11   INDUS  1.0359675
12     RAD  0.9889381
13      ZN  0.1393012

> library(plotmo)
> plotmo(bos.gbm3)

grid: CRIM ZN INDUS CHAS RM AGE DIS RAD TAX PTRATIO B LSTAT NOX 0.15 0 8.14 0 6.185 77.9 3.62 4 307 17.9 0.393 0.1015 5.085

> par(mfrow=c(3,3)) > plot.gbm(bos.gbm3,1,n.trees=1344) > plot.gbm(bos.gbm3,2,n.trees=1344) > plot.gbm(bos.gbm3,3,n.trees=1344) > plot.gbm(bos.gbm3,4,n.trees=1344) > plot.gbm(bos.gbm3,5,n.trees=1344) > plot.gbm(bos.gbm3,6,n.trees=1344) > plot.gbm(bos.gbm3,7,n.trees=1344) > plot.gbm(bos.gbm3,8,n.trees=1344) > plot.gbm(bos.gbm3,9,n.trees=1344)


Using n.trees = 245 suggested by 5-fold cross-validation

Contour plots (pick two variables by number to plot)

> plot.gbm(bos.gbm3,c(12,5),n.trees=1344)   ← left plot
> plot.gbm(bos.gbm3,c(12,5),n.trees=245)   ← right plot

> plot.gbm(bos.gbm3,c(12,13,5),n.trees=245)

Example 6.6 - L.A. Ozone Concentration

> oz.gbm = gbm(tupoz~.,data=Ozdata2,distribution="gaussian", n.trees=2000,shrinkage=.1,interaction.depth=4,bag.fraction=.5, train.fraction=.8,n.minobsinnode=10,cv.folds=5,keep.data=T,verbose=T)

> gbm.perf(oz.gbm,method="OOB") [1] 26 > gbm.perf(oz.gbm,method="cv") [1] 46 > gbm.perf(oz.gbm,method="test") [1] 413

> yhat = predict(oz.gbm,n.trees=413)   ← Overfit?
> plot(tupoz,yhat)

> cor(tupoz,yhat) [1] 0.9618522 > cor(tupoz,yhat)^2 [1] 0.9251596

> yhat = predict(oz.gbm,n.trees=46)   ← CV
> plot(tupoz,yhat)

> cor(tupoz,yhat) [1] 0.9618522 > cor(tupoz,yhat)^2 [1] 0.9251596

Jm = 6
> oz.gbm3 = gbm(tupoz~.,data=Ozdata2,distribution="gaussian",
  n.trees=2000,shrinkage=.1,interaction.depth=6,bag.fraction=.5,
  train.fraction=.8,n.minobsinnode=10,cv.folds=5,keep.data=T,verbose=T)
> gbm.perf(oz.gbm2,method="OOB")
[1] 44
> gbm.perf(oz.gbm2,method="cv")
[1] 45
> gbm.perf(oz.gbm2,method="test")
[1] 1077
> yhat = predict(oz.gbm2,n.trees=1077)
> plot(tupoz,yhat)

> cor(tupoz,yhat) [1] 0.9733923 > cor(tupoz,yhat)^2 [1] 0.9474925

> summary(oz.gbm2,n.trees=1077)
    var   rel.inf
1  safb 30.792830
2  inbt 23.339880
3   day 11.620270
4  inbh  8.588146
5  dagg  7.611727
6   hum  7.099951
7  v500  4.513893
8   vis  4.116435
9  wind  2.316868

Using the number of trees suggested by 5-fold Cross-validation (M = 45)

> par(mfrow=c(3,3)) > plot.gbm(oz.gbm3,1,n.trees=45) > plot.gbm(oz.gbm3,2,n.trees=45) > plot.gbm(oz.gbm3,3,n.trees=45) > plot.gbm(oz.gbm3,4,n.trees=45) > plot.gbm(oz.gbm3,5,n.trees=45) > plot.gbm(oz.gbm3,6,n.trees=45) > plot.gbm(oz.gbm3,7,n.trees=45) > plot.gbm(oz.gbm3,8,n.trees=45) > plot.gbm(oz.gbm3,9,n.trees=45)

> plot.gbm(oz.gbm2,c(1,5),n.trees=45)

> plot.gbm(oz.gbm2,c(5,8,1),n.trees=45)

Day is X3 in the conditioning plot. Warning: this takes a very long time to run, even on a small dataset!

Based upon my experimentation with gradient boosted regression trees, it seems that a shrinkage of λ = .1 is a good starting point. Experimenting with larger trees (Jm) and different shrinkage values (larger or smaller) seems to be worthwhile. For example, with the L.A. ozone data a different shrinkage value with Jm = 6 produced a "better" tree than either of the ones above.

An MCCV cross-validation function for gradient boosted regression trees should take as arguments the terminal node size (Jm), the method used to choose M (i.e. OOB, CV, or train/test), and the shrinkage (λ) value to use throughout. The value for M can therefore vary from one bootstrap sample to the next. We definitely do not need to see verbose output from each loop. We also want to consider relatively small values for B, because computation time could be an issue.

> Ozdata2 = data.frame(tupoz=Ozdata$upoz^.333,Ozdata[,-10]) > oz.gbm = gbm(tupoz~.,data=Ozdata2,distribution="gaussian",n.trees=2000,shrinkage=.1, interaction.depth=4, bag.fraction=0.5,train.fraction=.8,n.minobsinnode=10,cv.folds=5, keep.data=T,verbose=T)

Using out-of-bag value for M
> set.seed(1)
> results = gbm.cv(oz.gbm,Ozdata2,B=25,method="OOB")
> summary(results)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
0.04471 0.05670 0.06660 0.06614 0.07378 0.08713

Using 5-fold cross-validation value for M
> set.seed(1)
> results = gbm.cv(oz.gbm,data=Ozdata2,B=25,method="cv")
> summary(results)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
0.04596 0.05724 0.06429 0.06491 0.07215 0.08326

Using train/test value for M
> set.seed(1)
> results = gbm.cv(oz.gbm3,data=Ozdata2,B=25,method="test")
> summary(results)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
0.04669 0.05481 0.06597 0.06544 0.07415 0.08376

gbm.cv = function(fit,data,p=.667,B=10,method="cv",interaction.depth=4,
                  shrinkage=.1,cv.folds=5) {
  n <- dim(data)[1]
  y = data[,1]   # assumes the response is in the first column!!
  cv <- rep(0,B)
  for (i in 1:B) {
    ss <- floor(n*p)
    sam <- sample(1:n,ss,replace=F)
    fit2 <- gbm(formula(data[sam,]),data=data[sam,],
                shrinkage=shrinkage,distribution="gaussian",
                n.trees=fit$n.trees,bag.fraction=fit$bag.fraction,
                train.fraction=fit$train.fraction,
                interaction.depth=interaction.depth,n.minobsinnode=10,
                cv.folds=cv.folds,keep.data=T,verbose=F)
    m = gbm.perf(fit2,method=method,plot.it=F)
    ynew = predict(fit2,n.tree=m,newdata=data[-sam,])
    cv[i] <- mean((y[-sam]-ynew)^2)
  }
  cv
}

This MCCV function is useful for choosing values for the interaction depth (Jm) and the shrinkage (λ). The function gbm.cv above also assesses the boosted tree models based upon the method used to choose the number of trees that are “averaged” to obtain the final fit; method “cv” is recommended.
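For example, a small grid search could be wrapped around gbm.cv. The sketch below is illustrative only: the grid values and B = 10 are arbitrary choices, and it assumes the gbm.cv function, the oz.gbm fit, and the Ozdata2 data frame from above are in the workspace.

# Illustrative grid search over interaction depth (Jm) and shrinkage (lambda)
# using the gbm.cv function defined above; grid values are arbitrary.
grid <- expand.grid(depth = c(2, 4, 6), shrink = c(0.05, 0.1))
grid$meanMSE <- NA
for (i in 1:nrow(grid)) {
  res <- gbm.cv(oz.gbm, data = Ozdata2, B = 10, method = "cv",
                interaction.depth = grid$depth[i], shrinkage = grid$shrink[i])
  grid$meanMSE[i] <- mean(res)
}
grid[order(grid$meanMSE), ]      # combinations ranked by mean MCCV error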

Another approach would be to anchor down values for the shrinkage parameter (λ), the number of terminal nodes in the trees used in the boosted fit (Jm), and the number of trees (M) to average, with M chosen by using the cross-validation features within the gbm function. We could then write a cross-validation function that takes these tuning parameters as arguments and uses bootstrapped training/test sets to assess predictive performance.
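A sketch of what such a function might look like is given below. The function name gbm.cv2 and its defaults are hypothetical; it simply refits a gbm with the chosen λ, Jm, and M on each bootstrap training sample and records the hold-out error.

library(gbm)
# Hypothetical sketch: lambda, Jm, and M are fixed in advance; only the
# training/test split changes across the B replications.
gbm.cv2 <- function(data, M, shrinkage = .1, interaction.depth = 4, p = .667, B = 10) {
  n <- dim(data)[1]
  y <- data[, 1]                  # assumes the response is in the first column
  cv <- rep(0, B)
  for (i in 1:B) {
    sam <- sample(1:n, floor(n * p), replace = F)
    fit <- gbm(formula(data[sam, ]), data = data[sam, ], distribution = "gaussian",
               n.trees = M, shrinkage = shrinkage, interaction.depth = interaction.depth,
               bag.fraction = .5, n.minobsinnode = 10, verbose = F)
    ynew <- predict(fit, newdata = data[-sam, ], n.trees = M)
    cv[i] <- mean((y[-sam] - ynew)^2)
  }
  cv
}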

A better approach is to set aside a validation set that is NOT used in the model development process at all and then fit models using the wide variety of approaches we have examined. We can then predict the response value for the validation set to get an idea of which method has the best predictive performance.

Formation of Training and Validation Set
> names(Boston3)
 [1] "age"     "B"       "chas"    "crim"    "dis"     "indus"   "lstat"   "medv"    "nox"     "ptratio" "rad"     "rm"
[13] "tax"     "zn"
> Boston.yt = data.frame(logmedv=log(Boston3$medv),Boston3[,-8])
> names(Boston.yt)
 [1] "logmedv" "age"     "B"       "chas"    "crim"    "dis"     "indus"   "lstat"   "nox"     "ptratio" "rad"     "rm"
[13] "tax"     "zn"
> dim(Boston.yt)
[1] 506  14
> set.seed(1)
> sam = sample(1:506,400,replace=F)
> Boston.train = Boston.yt[sam,]
> Boston.test = Boston.yt[-sam,]
> dim(Boston.train)
[1] 400  14
> dim(Boston.test)
[1] 106  14

OLS (log transformed response, step-wise selection, no predictor transformations)
> bos.lm = lm(logmedv~.,data=Boston.train)
> bos.step = step(bos.lm)
> ypred = predict(bos.step,newdata=Boston.test)
> mean((ypred-Boston.test[,1])^2)
[1] 0.03403975

Neural Networks (h = 10, decay = .001)
> bos.nn = nnet(logmedv~.,data=Boston.train,size=10,decay=.001,skip=T,linout=T)
> ypred = predict(bos.nn,newdata=Boston.test)
> mean((ypred - Boston.test$logmedv)^2)
[1] 0.03577034

RPART
> bos.rpart = rpart(logmedv~.,data=Boston.train,cp=.018)    ← cp chosen using plotcp()
> ypred = predict(bos.rpart,newdata=Boston.test)
> mean((ypred-Boston.test[,1])^2)
[1] 0.04023908

Bagging (number of trees averaged, nbagg = 40, chosen via cross-validation using the training data only)
> bos.bag = bagging(logmedv~.,data=Boston.train,nbagg=40,coob=T)
> ypred = predict(bos.bag,newdata=Boston.test)
> mean((ypred-Boston.test[,1])^2)
[1] 0.02501989

Random Forests (m = 5, chosen using the rf.cv function above)
> bos.rf = randomForest(logmedv~.,data=Boston.train,mtry=5)
> ypred = predict(bos.rf,newdata=Boston.test)
> mean((ypred - Boston.test$logmedv)^2)
[1] 0.01710355

Gradient Boosted Trees (Jm = 5 and λ = .05; these choices were determined using the gbm.cv function above)
> bos.gbm = gbm(logmedv~.,data=Boston.train,distribution="gaussian",n.trees=2000,
       interaction.depth=5,shrinkage=.05,bag.fraction=.5,train.fraction=.8,cv.folds=5)

> gbm.perf(bos.gbm,method="cv") [1] 328

> ypred = predict(bos.gbm,newdata=Boston.test,n.trees=328) > mean((Boston.test$logmedv-ypred)^2) [1] 0.01806277

And the winner is… Random Forest!

Discussion: As I have mentioned previously, for kaggle.com prediction problems the standard benchmark to beat is a random forest. Thus it is not surprising that it was the winner here. It is important to note that I set aside a portion of the data to determine which model had the best predictive performance, and these set-aside data were not used in the model building process at all! Although not shown, I used cross-validation extensively when working with the training data to determine the values of the tuning parameters for the different methods I employed. I will demonstrate this process in class for one of the methods I used. You will notice I did not consider ridge regression, Lasso regression, PCR, or PLS for these data, as the number of predictors for the Boston Housing data is not large (p = 13) and there are not any strong correlations amongst the potential predictors. Other methods I could/should have potentially considered are an OLS model with predictor transformations (possibly suggested by ACE/AVAS) and projection pursuit regression. Given that the neural network did not perform particularly well, I would not expect projection pursuit to perform better than random forests (or boosted regression trees, which were a close second).

6.6 - Treed Regression

In Treed Regression a tree is grown in which each terminal node contains a traditional statistical model. For example, in each terminal node we might fit an OLS regression model to the observations in that node, or, for a binary classification problem, a logistic regression model, etc.
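To make the idea concrete, here is a minimal sketch of treed regression built from tools we already have (it is not the Cubist algorithm): grow a coarse rpart tree and then fit a separate OLS model within each terminal node. The cp value is an arbitrary illustrative choice, and the sketch assumes the Boston.train data frame constructed earlier.

library(rpart)
# Grow a coarse tree, then fit an OLS model to the cases in each terminal node.
tree <- rpart(logmedv ~ ., data = Boston.train, cp = .05)
leaf <- tree$where                                   # terminal node of each training case
fits <- lapply(split(Boston.train, leaf), function(d) lm(logmedv ~ ., data = d))
# Fitted value for each case comes from the OLS model in its own leaf:
yhat <- unsplit(lapply(fits, fitted), leaf)
cor(Boston.train$logmedv, yhat)^2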

There are two packages in R that perform treed regression, Cubist and party. We will use the implementation in Cubist for regression problems where the response (Y) is numeric. The basic function for fitting a treed OLS regression model is cubist().

cubist(x, y, committees = 1, neighbors = 0, control = cubistControl(), ...)

Arguments
  x           a matrix or data frame of predictor variables. Missing data are allowed but (at this time) only numeric, character, and factor values are allowed.
  y           a numeric vector of outcomes.
  committees  an integer: how many committee models (e.g., boosting iterations) should be used?
  neighbors   number of nearest neighbors to consider in correcting the prediction (0 to 9).
  control     options that control details of the cubist algorithm. See cubistControl.

The neighbors option specifies whether or not to use nearest neighbors in making predictions. The idea behind nearest neighbors is outlined on the following page and is taken from the website www.rulequest.com.

For some applications, the predictive accuracy of a rule-based model can be improved by combining it with an instance-based or nearest-neighbor model. The latter predicts the target value of a new case by finding the n most similar cases in the training data and averaging their target values. Cubist employs an unusual method for combining rule-based and instance-based models. Cubist finds the n training cases that are "nearest" (most similar) to the case in question. Then, rather than averaging their target values directly, Cubist first adjusts these values using the rule-based model. Here's how it works: Suppose that x is the case whose unknown target value is to be predicted, and y is one of x's nearest neighbors in the training data. The target value of y is known: let us call it T(y). The rule-based model can be used to predict target values for any case, so let its predictions for x and y be M(x) and M(y) respectively. The model then predicts that the difference between the target values of x and y is M(x) - M(y). The value of x predicted by neighbor y is adjusted to reflect this difference, so that Cubist uses T(y) + M(x) - M(y) instead of y's raw target value.
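The adjustment itself is simple arithmetic, as the little sketch below illustrates (the function nn.adjust and the numbers are ours, purely for illustration; this is not code from the Cubist package).

# Illustrative only: the nearest-neighbor correction T(y) + M(x) - M(y),
# averaged over the n nearest neighbors of x.
nn.adjust <- function(Mx, My, Ty) {
  # Mx: rule-based prediction for the new case x
  # My, Ty: rule-based predictions and observed targets for x's neighbors
  mean(Ty + Mx - My)
}
nn.adjust(Mx = 9.8, My = c(9.7, 9.9, 9.6), Ty = c(9.75, 9.95, 9.70))   # made-up numbers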

The neighbors option instructs Cubist to use composite models of this type. The number of nearest neighbors n can range from 0 to 9. We can use cross-validation to choose "optimal" values for the number of committees and the number of nearest neighbors to use.

cubist.cv = function(x,y,p=.667,B=10,committees=1,neighbors=0) {
  n <- length(y)
  cv <- rep(0,B)
  for (i in 1:B) {
    ss <- floor(n*p)
    sam <- sample(1:n,ss,replace=F)
    fit2 <- cubist(x[sam,],y[sam],committees=committees,neighbors=neighbors)
    ynew <- predict(fit2,newdata=x[-sam,],neighbors=neighbors)
    cv[i] <- mean((y[-sam]-ynew)^2)
  }
  cv
}

We now consider the usual examples.

Example 6.7 - Boston Housing Data

> names(Boston3)

[1] "logCMEDV" "CRIM" "ZN" "INDUS" "CHAS" "RM" "AGE" "DIS" "RAD" "TAX" [11] "PTRATIO" "B" "LSTAT" "NOX" > bos.x = Boston3[,-1] > bos.y = logCMEDV > bos.cub = cubist(bos.x,bos.y,committees=1) > bos.cub

Call:
cubist.default(x = bos.x, y = bos.y, committees = 1)    ← no boosting

Number of samples: 506 Number of predictors: 13

Number of committees: 1 Number of rules: 8

> summary(bos.cub)

Call: cubist.default(x = bos.x, y = bos.y, committees = 1)

Cubist [Release 2.07 GPL Edition] Tue Feb 28 09:14:54 2012 ------

Target attribute `outcome'

Read 506 cases (14 attributes) from undefined.data

Model:

Rule 1: [44 cases, mean 9.196384, range 8.517193 to 9.786954, est err 0.166292]

  if
    LSTAT > 0.19
    NOX > 6.71
  then
    outcome = 8.09701 + 0.579 DIS + 0.29 NOX - 2.9 LSTAT - 0.201 RM - 0.0065 CRIM

Rule 2: [20 cases, mean 9.518296, range 8.853665 to 10.22194, est err 0.169161]

  if
    TAX > 469
    LSTAT > 0.19
    NOX <= 6.71
  then
    outcome = 23.867485 - 0.01515 TAX - 0.589 NOX - 0.315 DIS - 0.13 LSTAT + 0.0005 RAD - 0.0005 CRIM

Rule 3: [25 cases, mean 9.687253, range 9.169518 to 10.12663, est err 0.104636]

  if
    RM > 6.24
    LSTAT <= 0.19
    NOX > 6.47
  then
    outcome = 10.179097 - 4.83 LSTAT + 0.109 DIS - 0.0034 CRIM + 0.25 B + 0.021 RM + 0.0016 RAD - 7e-005 TAX - 0.005 PTRATIO - 0.007 NOX

Rule 4: [19 cases, mean 9.695155, range 9.449357 to 10.07323, est err 0.159993]

  if
    TAX <= 469
    LSTAT > 0.19
    NOX <= 6.71
  then
    outcome = 10.350003 - 0.1233 CRIM - 0.0027 AGE - 0.042 NOX - 0.51 LSTAT + 0.0022 RAD - 0.00011 TAX - 0.008 DIS + 0.023 RM - 0.007 PTRATIO + 0.1 B

Rule 5: [25 cases, mean 9.881541, range 9.296518 to 10.81978, est err 0.151990]

  if
    RM <= 6.24
    DIS <= 1.88
    LSTAT <= 0.19
  then
    outcome = 12.574974 - 1.199 DIS - 4.64 LSTAT - 0.0123 CRIM

Rule 6: [180 cases, mean 9.892811, range 9.230143 to 10.49681, est err 0.089884]

  if
    RM <= 6.24
    DIS > 1.88
    LSTAT <= 0.19
  then
    outcome = 9.811655 - 1.6 LSTAT + 0.141 RM + 0.83 B - 0.03 DIS - 0.028 PTRATIO - 0.0016 AGE + 0.0033 RAD - 0.0027 CRIM - 0.00013 TAX - 0.016 NOX

Rule 7: [10 cases, mean 10.114698, range 9.581903 to 10.81978, est err 0.192493]

  if
    CRIM > 2.92
    RM > 6.24
    LSTAT <= 0.19
    NOX <= 6.47
  then
    outcome = 8.469767 - 6.14 LSTAT + 0.361 NOX + 0.032 RM - 0.0019 CRIM + 0.14 B - 6e-005 TAX + 0.001 RAD - 0.003 PTRATIO + 0.0007 INDUS - 0.002 DIS

Rule 8: [185 cases, mean 10.271324, range 9.615806 to 10.81978, est err 0.073224]

  if
    CRIM <= 2.92
    RM > 6.24
  then
    outcome = 9.023735 + 0.296 RM - 2.73 LSTAT - 0.00043 TAX - 0.024 PTRATIO - 0.02 DIS + 0.0042 RAD - 0.0042 CRIM - 0.001 AGE + 0.27 B + 0.0027 INDUS - 0.01 NOX

Evaluation on training data (506 cases):

    Average  |error|              0.091012
    Relative |error|                  0.30
    Correlation coefficient           0.95

Attribute usage:

  Conds  Model
   84%    91%    RM
   64%   100%    LSTAT
   40%   100%    DIS
   38%   100%    CRIM
   23%    95%    NOX
    8%    86%    TAX
          86%    RAD
          82%    PTRATIO
          82%    B
          76%    AGE
          38%    INDUS

Time: 0.0 secs

We can also estimate variable importance by using the varImp( ) command from the caret library.

> library(caret)
> varImp(bos.cub)
        Overall
RM         87.5
LSTAT      82.0
DIS        70.0
CRIM       69.0
NOX        59.0
TAX        47.0
RAD        43.0
PTRATIO    41.0
B          41.0
AGE        38.0
INDUS      19.0
ZN          0.0
CHAS        0.0

> plot(bos.y,predict(bos.cub,newdata=bos.x),xlab="logCMEDV",
       ylab="Fitted Values from Treed Regression")
> abline(0,1,lty=2,col="blue")

> yhat = predict(bos.cub,newdata=bos.x)
> cor(bos.y,yhat)
[1] 0.9604774
> cor(bos.y,yhat)^2
[1] 0.9225168
> resid = bos.y - yhat
> RMSEP = sqrt(mean(resid^2))
> RMSEP
[1] 0.1135971
> plot(yhat,resid,xlab="Fitted Values",ylab="Residuals from Treed Regression")
> abline(h=0,lty=2,col="blue")

Boost the cubist (M = 10)
> bos.cub10 = cubist(bos.x,bos.y,committees=10)
> yhat = predict(bos.cub10,newdata=bos.x)
> cor(bos.y,yhat)
[1] 0.9672742
> cor(bos.y,yhat)^2
[1] 0.9356195
> plot(bos.y,yhat,xlab="logCMEDV",ylab="Fitted Values from Treed Regression")
> abline(0,1,lty=2,col="blue")
> title(main="Boosted Treed Regression (M = 10)")

Boost the cubist (M = 100)  ← not much improvement over M = 10
> bos.cub100 = cubist(bos.x,bos.y,committees=100)
> yhat100 = predict(bos.cub100,newdata=bos.x)
> cor(bos.y,yhat100)
[1] 0.9689198
> cor(bos.y,yhat100)^2
[1] 0.9388055
> plot(bos.y,yhat100,xlab="logCMEDV",ylab="Fitted Values from Treed Regression")
> title(main="Boosted Treed Regression (M = 100)")
> abline(0,1,lty=2,col="blue")

Using cross-validation to fine-tune our treed regression model

We can use the cubist.cv function above to decide what level of boosting to do (M) and whether to use a nearest-neighbor adjustment when making predictions.

> results = cubist.cv(x=Boston.train[,-1],y=Boston.train$logmedv,committees=1,neighbors=0,B=20)
> summary(results)
    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
 0.02338 0.02917 0.03470 0.03567 0.04091 0.05179
> results = cubist.cv(x=Boston.train[,-1],y=Boston.train$logmedv,committees=5,neighbors=0,B=20)
> summary(results)
    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
 0.02044 0.02339 0.02951 0.03064 0.03482 0.04548
> results = cubist.cv(x=Boston.train[,-1],y=Boston.train$logmedv,committees=10,neighbors=0,B=20)
> summary(results)
    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
 0.01949 0.02320 0.02641 0.02757 0.03261 0.03724
> results = cubist.cv(x=Boston.train[,-1],y=Boston.train$logmedv,committees=15,neighbors=0,B=20)
> summary(results)
    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
 0.01749 0.02315 0.02545 0.02758 0.02967 0.05292

> results = cubist.cv(x=Boston.train[,-1],y=Boston.train$logmedv,committees=10,neighbors=1,B=20)
> summary(results)
    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
 0.02036 0.02686 0.03242 0.03163 0.03557 0.04613
> results = cubist.cv(x=Boston.train[,-1],y=Boston.train$logmedv,committees=10,neighbors=5,B=20)
> summary(results)
    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
 0.01679 0.02082 0.02439 0.02487 0.02717 0.03972

> results = cubist.cv(x=Boston.train[,-1],y=Boston.train$logmedv,committees=20,neighbors=9,B=20)
> summary(results)
    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
 0.01433 0.02010 0.02293 0.02397 0.02706 0.03595
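Rather than running each combination by hand as above, the same comparisons could be wrapped in a small grid search; the sketch below is one way to do that (the grid values are arbitrary illustrative choices).

# Illustrative wrapper around the cubist.cv function above: mean MCCV error
# over a small grid of committee and neighbor values.
grid <- expand.grid(committees = c(1, 5, 10, 20), neighbors = c(0, 5, 9))
grid$meanMSE <- NA
for (i in 1:nrow(grid)) {
  res <- cubist.cv(x = Boston.train[, -1], y = Boston.train$logmedv,
                   committees = grid$committees[i], neighbors = grid$neighbors[i], B = 20)
  grid$meanMSE[i] <- mean(res)
}
grid[order(grid$meanMSE), ]      # combinations ranked by mean MCCV error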

> bos.cub

Call: cubist.default(x = Boston.train[, -1], y = Boston.train$logmedv, committees = 20, data = Boston.train, neighbors = 9)

Number of samples: 400 Number of predictors: 13

Number of committees: 20 Number of rules per committee: 6, 5, 4, 5, 7, 5, 8, 4, 6, 6, 6, 4, 6, 7, 6, 3, 7, 9, 4, 6

> ypred = predict(bos.cub,newdata=Boston.test[,-1],neighbors=9) > mean((Boston.test$logmedv-ypred)^2) [1] 0.01649623

LOOKS LIKE RANDOM FORESTS HAVE BEEN DETHRONED!

Classification trees using the rpart package

In classification trees the goal is to build a tree-based model that will classify observations into one of g predetermined classes. The end result of a tree model can be viewed as a series of conditional probabilities (posterior probabilities) of class membership given a set of covariate values. For each terminal node we essentially have a probability distribution for class membership, where the probabilities are of the form:

p(i|A) = P(observation is from class i | observation falls in neighborhood A),  i = 1, ..., g,

such that Σ_i p(i|A) = 1. Here A is a neighborhood defined by a set of covariates/predictors. The neighborhoods are found by a series of binary splits chosen to minimize the overall “loss” of the resulting tree. For classification problems, measuring overall “loss” can be a bit complicated. One obvious method is to construct classification trees so that the overall misclassification rate is minimized; in fact, this is precisely what the RPART algorithm does by default. However, in classification problems it is oftentimes the case that we wish to incorporate prior knowledge about likely class membership. This knowledge is represented by the prior probability of an observation being from class i, which we will denote by π_i. Naturally the priors must be chosen in such a way that they sum to 1. Other information we might want to incorporate into the modeling process is the cost or loss incurred by classifying an object from class i as being from class j, and vice versa. With this information provided, we would expect the resulting tree to avoid making the most costly misclassifications on our training data set.

Some notation that is used by Therneau & Atkinson (1999) for determining the loss for a given node A:

n_iA    = number of observations in node A from class i
n_i     = number of observations in the training set from class i
n       = total number of observations in the training set
n_A     = number of observations in node A
π_i     = prior probability of being from class i (by default π_i = n_i/n)
L(i, j) = loss incurred by classifying a class i object as being from class j (i.e., c(j|i) from our discussion of discriminant analysis)
τ(A)    = predicted class for node A

In general, the loss is specified as a matrix with entries L(i, j), i, j = 1, ..., g.

By default this is a symmetric matrix with L(i, j) = 1 for i ≠ j and L(i, i) = 0.

Using the notation and concepts presented above, the risk or loss at a node A is given by

    R(A) = Σ_i π_i L(i, τ(A)) (n_iA / n_i) (n / n_A) × n_A

which is the form used in the sample calculations below.

Example: Kyphosis Data

> library(rpart)
> data(kyphosis)
> attach(kyphosis)
> names(kyphosis)
[1] "Kyphosis" "Age"      "Number"   "Start"

> k.default <- rpart(Kyphosis~.,data=kyphosis)
> k.default
n= 81
node), split, n, loss, yval, (yprob)
      * denotes terminal node

 1) root 81 17 absent (0.7901235 0.2098765)
   2) Start>=8.5 62 6 absent (0.9032258 0.0967742)
     4) Start>=14.5 29 0 absent (1.0000000 0.0000000) *
     5) Start< 14.5 33 6 absent (0.8181818 0.1818182)
      10) Age< 55 12 0 absent (1.0000000 0.0000000) *
      11) Age>=55 21 6 absent (0.7142857 0.2857143)
        22) Age>=111 14 2 absent (0.8571429 0.1428571) *
        23) Age< 111 7 3 present (0.4285714 0.5714286) *
   3) Start< 8.5 19 8 present (0.4210526 0.5789474) *

Sample Loss Calculations:
Root Node:       R(root) = .2098765*1*(17/17)*(81/81)*81 = 17
Terminal Node 3: R(A)    = .7901235*1*(8/64)*(81/19)*19  = 8
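These calculations can be checked with a couple of lines of R. The helper below simply encodes the risk formula above; the function name and argument names are ours, not rpart's.

# Encode the node-risk formula R(A) = sum_i pi_i L(i,tau(A)) (n_iA/n_i) (n/n_A) * n_A.
node.risk <- function(prior, L, n_iA, n_i, n, n_A) {
  sum(prior * L * (n_iA / n_i)) * (n / n_A) * n_A
}
# Root node of k.default: predicted class is "absent", so only the 17 "present"
# cases contribute (their loss is 1):
node.risk(prior = 17/81, L = 1, n_iA = 17, n_i = 17, n = 81, n_A = 81)   # 17
# Terminal node 3: predicted "present"; 8 of the 64 "absent" cases land there:
node.risk(prior = 64/81, L = 1, n_iA = 8, n_i = 64, n = 81, n_A = 19)    # 8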

The actual tree is shown on the following page.

Suppose now we have prior beliefs that 65% of patients will not have Kyphosis (absent) and 35% of patients will have Kyphosis (present).
> k.priors <- rpart(Kyphosis~.,data=kyphosis,parms=list(prior=c(.65,.35)))
> k.priors
n= 81
node), split, n, loss, yval, (yprob)
      * denotes terminal node

 1) root 81 28.350000 absent (0.65000000 0.35000000)
   2) Start>=12.5 46  3.335294 absent (0.91563089 0.08436911) *
   3) Start< 12.5 35 16.453130 present (0.39676840 0.60323160)
     6) Age< 34.5 10  1.667647 absent (0.81616742 0.18383258) *
     7) Age>=34.5 25  9.049219 present (0.27932897 0.72067103) *

Sample Loss Calculations: R(root) = .35*1*(17/17)*(81/81)*81 = 28.35 R(node 7) = .65*1*(11/64)*(81/25)*25 = 9.049219

The resulting tree is shown on the following page.

Finally, suppose we wish to incorporate the following loss information.

                     Classification
   L(i, j)          present    absent
   Actual  present     0          4
           absent      1          0

Note: In R the category orderings for the loss matrix are Z → A in both dimensions.

This says that it is 4 times more serious to misclassify a child that actually has kyphosis (present) as not having it (absent). Again we will use the priors from the previous model (65% - absent, 35% - present).

> lmat <- matrix(c(0,4,1,0),nrow=2,ncol=2,byrow=T)
> k.priorloss <- rpart(Kyphosis~.,data=kyphosis,parms=list(prior=c(.65,.35),loss=lmat))
> k.priorloss
n= 81
node), split, n, loss, yval, (yprob)
      * denotes terminal node

 1) root 81 52.650000 present (0.6500000 0.3500000)
   2) Start>=14.5 29  0.000000 absent (1.0000000 0.0000000) *
   3) Start< 14.5 52 28.792970 present (0.5038760 0.4961240)
     6) Age< 39 15  6.670588 absent (0.8735178 0.1264822) *
     7) Age>=39 37 17.275780 present (0.3930053 0.6069947) *

Sample Loss Calculations:
R(root)   = .65*1*(64/64)*(81/81)*81 = 52.650000
R(node 7) = .65*1*(21/64)*(81/37)*37 = 17.275780
R(node 6) = .35*4*(1/17)*(81/15)*15  = 6.670588

The resulting tree is shown on the following page.

Example 2 - Owl Diet
> owl.tree <- rpart(species~.,data=OwlDiet)
> owl.tree
n= 179
node), split, n, loss, yval, (yprob)
      * denotes terminal node

 1) root 179 140 tiomanicus (0.14 0.21 0.13 0.089 0.056 0.22 0.16)
   2) teeth_row>=56.09075 127 88 tiomanicus (0.2 0.29 0 0.13 0.079 0.31 0)
     4) teeth_row>=72.85422 26 2 annandalfi (0.92 0.077 0 0 0 0 0) *
     5) teeth_row< 72.85422 101 62 tiomanicus (0.0099 0.35 0 0.16 0.099 0.39 0)
      10) teeth_row>=64.09744 49 18 argentiventer (0.02 0.63 0 0.27 0.02 0.061 0)
        20) palatine_foramen>=61.23703 31 4 argentiventer (0.032 0.87 0 0.065 0 0.032 0) *
        21) palatine_foramen< 61.23703 18 7 rajah (0 0.22 0 0.61 0.056 0.11 0) *
      11) teeth_row< 64.09744 52 16 tiomanicus (0 0.077 0 0.058 0.17 0.69 0)
        22) skull_length>=424.5134 7 1 surifer (0 0.14 0 0 0.86 0 0) *
        23) skull_length< 424.5134 45 9 tiomanicus (0 0.067 0 0.067 0.067 0.8 0) *
   3) teeth_row< 56.09075 52 24 whiteheadi (0 0 0.46 0 0 0 0.54)
     6) palatine_foramen>=47.15662 25 1 exulans (0 0 0.96 0 0 0 0.04) *
     7) palatine_foramen< 47.15662 27 0 whiteheadi (0 0 0 0 0 0 1) *

The function misclass.rpart constructs a confusion matrix for a classification tree.
> misclass.rpart(owl.tree)
Table of Misclassification (row = predicted, col = actual)

                1  2  3  4  5  6  7
annandalfi     24  2  0  0  0  0  0
argentiventer   1 27  0  2  0  1  0
exulans         0  0 24  0  0  0  1
rajah           0  4  0 11  1  2  0
surifer         0  1  0  0  6  0  0
tiomanicus      0  3  0  3  3 36  0
whiteheadi      0  0  0  0  0  0 27

Misclassification Rate = 0.134
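The misclass.rpart function is not part of the rpart package; a minimal version consistent with the output above might look like the sketch below (our own version, so details may differ from the function actually used).

# Sketch of a misclass.rpart-style helper: resubstitution confusion matrix and
# misclassification rate for a classification tree.
misclass.sketch <- function(fit, y) {
  pred <- predict(fit, type = "class")
  tab <- table(Predicted = pred, Actual = y)
  cat("Misclassification Rate =", round(1 - sum(diag(tab)) / sum(tab), 3), "\n")
  tab
}
# e.g. misclass.sketch(owl.tree, OwlDiet$species)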

Using equal prior probabilities for each of the groups. > owl.tree2 <- rpart(species~.,data=OwlDiet,parms=list(prior=c(1,1,1,1,1,1,1)/7))

> misclass.rpart(owl.tree2) Table of Misclassification (row = predicted, col = actual)

                1  2  3  4  5  6  7
annandalfi     24  2  0  0  0  0  0
argentiventer   1 27  0  2  0  1  0
exulans         0  0 24  0  0  0  1
rajah           0  4  0 11  1  2  0
surifer         0  1  0  1  9  5  0
tiomanicus      0  3  0  2  0 31  0
whiteheadi      0  0  0  0  0  0 27

Misclassification Rate = 0.145

> plot(owl.tree) > text(owl.tree,cex=.6)

We can see that teeth row and palatine foramen figure prominently in the classification rule. Given the analyses we have done previously, this is not surprising.

Cross-validation is done in the usual fashion: we leave out a certain percentage of the observations, develop a model from the remaining data, predict the class of the observations we set aside, calculate the misclassification rate, and then repeat this process a number of times. The function tree.cv3 leaves out (1/k)×100% of the data at a time to perform the cross-validation; a sketch of what such a function might look like is given below.
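The tree.cv3 function itself is not listed in these notes; a version consistent with the description above might look like the following (the body is our own sketch and may differ from the actual function).

library(rpart)
# Hypothetical sketch of a tree.cv3-style function: hold out roughly (1/k)*100%
# of the cases, refit the tree, and record the hold-out misclassification rate.
tree.cv.sketch <- function(fit, data, y, k = 10, reps = 100) {
  n <- nrow(data)
  out <- rep(0, reps)
  for (r in 1:reps) {
    hold  <- sample(1:n, floor(n / k))
    refit <- rpart(formula(fit), data = data[-hold, ])
    pred  <- predict(refit, newdata = data[hold, ], type = "class")
    out[r] <- mean(pred != y[hold])
  }
  out
}
# e.g. tree.cv.sketch(owl.tree, data = OwlDiet, y = OwlDiet$species, k = 10, reps = 100)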

> tree.cv3(owl.tree,k=10,data=OwlDiet,y=species,reps=100)
  [1] 0.17647059 0.29411765 0.29411765 0.05882353 0.17647059 0.00000000 0.11764706 0.17647059
  [9] 0.11764706 0.11764706 0.11764706 0.11764706 0.23529412 0.11764706 0.17647059 0.05882353
 [17] 0.11764706 0.29411765 0.11764706 0.11764706 0.11764706 0.29411765 0.29411765 0.23529412
 [25] 0.29411765 0.17647059 0.11764706 0.29411765 0.35294118 0.41176471 0.17647059 0.11764706
 [33] 0.23529412 0.17647059 0.17647059 0.35294118 0.05882353 0.17647059 0.23529412 0.23529412
 [41] 0.29411765 0.05882353 0.05882353 0.17647059 0.23529412 0.11764706 0.17647059 0.17647059
 [49] 0.29411765 0.35294118 0.17647059 0.17647059 0.23529412 0.35294118 0.17647059 0.11764706
 [57] 0.05882353 0.17647059 0.23529412 0.29411765 0.11764706 0.17647059 0.23529412 0.17647059
 [65] 0.23529412 0.35294118 0.11764706 0.29411765 0.05882353 0.23529412 0.17647059 0.17647059
 [73] 0.11764706 0.00000000 0.00000000 0.17647059 0.17647059 0.23529412 0.05882353 0.23529412
 [81] 0.23529412 0.11764706 0.29411765 0.35294118 0.35294118 0.23529412 0.11764706 0.11764706
 [89] 0.11764706 0.17647059 0.17647059 0.00000000 0.29411765 0.05882353 0.41176471 0.35294118
 [97] 0.11764706 0.23529412 0.11764706 0.17647059
> results <- tree.cv3(owl.tree,k=10,data=OwlDiet,y=species,reps=100)
> mean(results)
[1] 0.2147059

