Start with review, some new definitions, and pictures on the white board.

Assumptions in the Normal Model

A1: There is a linear relationship between X and Y.

A2: The error terms (and thus the Y's at each X) have constant variance σ².

A3: The error terms are independent.

A4: The error terms (and thus the Y’s at each X) are normally distributed. Note: In practice, we are looking for a fairly symmetric distribution with no major outliers.

Other things to check (Questions to ask):

Q5: Are there any major outliers in the data (X, or combination of (X,Y))?

Q6: Are there other possible predictors that should be included in the model?

Useful Plots for Checking Assumptions and Answering These Questions

Reminders:

• Residual = $e_i = Y_i - \hat{Y}_i$ = observed $Y_i$ − predicted $Y_i$
• Predicted $Y_i = \hat{Y}_i = \hat{\beta}_0 + \hat{\beta}_1 X_i$, also called "fitted $Y_i$"

Definitions and formulas; we will use R to compute these:

Standardized residual for unit i is

$$e_i^* = \frac{e_i}{\text{standard error of } e_i} = \frac{e_i}{\sqrt{MSE(1 - h_i)}} \approx \frac{e_i}{\sqrt{MSE}}$$

where $h_i$ will be defined later in the course, but it is usually close to 0, so the denominator is approximately $\sqrt{MSE}$.

Externally studentized residual $r_i$ for unit i is the same, except use the MSE from the model fit without unit i.
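Both of these are built into R. A minimal sketch, assuming a fitted linear model object (here the "HWModel" used below for the highway sign data):

> StResids <- rstandard(HWModel)    # standardized residuals e_i*
> ExtResids <- rstudent(HWModel)    # externally studentized residuals r_i
> head(cbind(StResids, ExtResids))  # the two are usually very close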

Plot                                           Useful for
Dotplot, stemplot, histogram of X values       Q5: Outliers in X
Residuals e_i versus X_i or predicted Y_i      A1: Linear; A2: Constant variance; Q5: Outliers
e_i* or r_i versus X_i or predicted Y_i        As above, but a better check for outliers
Dotplot, stemplot, histogram of e_i            A4: Normality assumption
Residuals e_i versus time (if measured)        A3: Dependence across time
Residuals e_i versus other predictors          Q6: Predictors missing from model
"Normal probability plot" of residuals         A4: Normality assumption

Example: Highway sign data. Plots of residuals versus predicted ("fitted") values, and residuals versus Age.

NOTE: The plot of residuals versus the predictor variable X should look the same as the plot of residuals versus fitted values, except for the scale on the X axis, because the fitted values are a linear transform of the X's. However, when the slope is negative, one will be a mirror image of the other.
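A sketch of how these two plots can be made in R, assuming the fitted model HWModel and the predictor column Highway$Age from the highway sign data:

> plot(fitted(HWModel), resid(HWModel), xlab = "Fitted values", ylab = "Residuals")
> abline(h = 0)   # residuals should scatter randomly around this line
> plot(Highway$Age, resid(HWModel), xlab = "Age", ylab = "Residuals")
> abline(h = 0)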

[Figure: residuals vs fitted values (left) and residuals vs Age (right) for the highway sign data]

Comments: These are good "residual plots." Points look randomly scattered around 0. There is no evidence of a nonlinear pattern or unequal variance.

Some other plots of the residuals, used to check the "normality" assumption

Stemplot of standardized residuals

To generate them in R, for the linear model called "HWModel":

> Highway$StResids <- rstandard(HWModel)
> stem(Highway$StResids)

  The decimal point is at the |

  -1 | 6655
  -1 | 00
  -0 | 99866
  -0 | 32
   0 | 1122234
   0 | 56789
   1 | 034
   1 | 6
   2 | 3

Further confirmation that the residuals are relatively symmetric with no major outliers. The standardized residual of 2.3 is for a driver with x = 75 years and y = 460 feet: $\hat{y} = 577 - 3(75) = 352$, so the residual = 460 − 352 = 108 feet.

Normal probability plot of residuals and standardized residuals for the highway sign data, also to check normality assumption A4 (see Figure 1.7 in the textbook for examples with normal and quite non-normal residuals).

[Figure: normal probability plots, using the residuals (left) and using the standardized residuals (right)]

• Explanation of what these are will be given on the white board.
• These are pretty good plots. There is one point at the lower end that is slightly off and might be investigated, but there are no major problems.
• Note that the plot looks the same using residuals or standardized residuals; it doesn't matter which ones you use.
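A sketch of how to make these plots in R, again assuming the fitted model HWModel:

> qqnorm(resid(HWModel))      # normal probability plot of the residuals
> qqline(resid(HWModel))      # reference line through the quartiles
> qqnorm(rstandard(HWModel))  # same plot, using standardized residuals
> qqline(rstandard(HWModel))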

Example from textbook of good and bad normal probability plots:

What to do when assumptions aren’t met

Assumption 1: Relationship is linear.

How to detect a problem: Plot y versus x and also plot residuals versus fitted values or residuals versus x. If you see a pattern, there is a problem with the assumption.

What to do about the problem: Transform the X values, X' = f(X). (Read as “X-prime = a function of X.”) Then do the regression using X' instead of X:

Y = β0 + β1 X' + ε, where we still assume the ε are N(0, σ²).

NOTE: Only use this "solution" if non-linearity is the only problem, not if it also looks like there is non-constant variance or non-normal errors. For those, we will transform Y instead. REASON: The errors are in the vertical direction. Stretching or shrinking the X-axis doesn't change those, so if they are normal with constant variance, they will stay that way.

Let's look at what kinds of transformations to use.

Residuals are an inverted U: use X' = √X or X' = log10(X)
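As a sketch of how such a transformation works in R (the data frame MyData and its columns X and Y are hypothetical names):

> MyData$SqrtX <- sqrt(MyData$X)         # X' = sqrt(X)
> Fit2 <- lm(Y ~ SqrtX, data = MyData)   # regress Y on X' instead of X
> plot(fitted(Fit2), resid(Fit2))        # recheck the residual plot
> abline(h = 0)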

[Figure: scatterplot of Y vs X (left) and residuals vs fitted values (right), showing the inverted-U residual pattern]

[Figure: scatterplot of Y vs √X (left) and residuals vs fitted values after the transformation (right); the pattern is gone]

Residuals are U-shaped and the association between X and Y is positive: use X' = X²

[Figure: scatterplot of Y vs X (left) and residuals vs fitted values (right), showing the U-shaped residual pattern]

[Figure: scatterplot of Y vs X² (left) and residuals vs fitted values after the transformation (right); the pattern is gone]

Residuals are U-shaped and the association between X and Y is negative: use X' = 1/X or X' = exp(−X)

[Figure: scatterplot of Y vs X (left) and residuals vs X (right), showing a U-shaped residual pattern with a negative association]

[Figure: scatterplot of Y vs 1/X (left) and residuals vs 1/X after the transformation (right); the pattern is gone]

Assumption 2: Constant variance of the errors across X values.

How to detect a problem: Plot residuals versus fitted values. If you see increasing or decreasing spread, there is a problem with the assumption.

Example: Real estate data for n = 522 homes sold in a Midwestern city. Y = Sales price (thousands); X = Square feet (in hundreds). (Source: Applied Linear Regression Models)

Original data and residual plot:

[Figure: scatterplot of PriceThsds vs SqFtHdrd (left) and residuals vs SqFtHdrd (right)]

Clearly, the variance is increasing as house size increases!

NOTE: Usually increasing variance and skewed distribution go together. Here is a histogram of the residuals, with a superimposed normal distribution. Notice the residuals extending to the right.

[Figure: histogram of the residuals with a superimposed normal curve; the right tail is long]
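A histogram with a superimposed normal curve can be drawn in R; a sketch, where the model name PriceModel is hypothetical:

> r <- resid(PriceModel)
> hist(r, freq = FALSE, xlab = "Residuals")          # density scale rather than counts
> curve(dnorm(x, mean = 0, sd = sd(r)), add = TRUE)  # superimposed normal curve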

What to do about the problem:

Transform the Y values, or both the X and Y values.

Example: Real estate sales, transform Y values to Y' = ln (Y) = natural log of Y
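A sketch of this transformation in R; the data frame name Homes is hypothetical, Price is the sales price in dollars, and SqFtHdrd is square feet in hundreds (matching the prediction example below):

> Homes$LnPrice <- log(Homes$Price)               # Y' = ln(Y); log() is the natural log in R
> LnModel <- lm(LnPrice ~ SqFtHdrd, data = Homes) # regress ln(price) on square feet
> plot(Homes$SqFtHdrd, resid(LnModel))            # recheck whether the spread is constant
> abline(h = 0)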

[Figure: scatterplot of ln(Price) vs square feet (left) and residuals vs square feet (right)]

Looks like one more transformation might help: use the square root of size. But we will leave it as is for now. See the histogram of the residuals below.

[Figure: histogram of the residuals after the log transformation, with a superimposed normal curve]

This looks better: more symmetric and no outliers.

Using models after transformations

Transforming X only:

Use transformed X for future predictions: X' = f(X). Then do the regression using X' instead of X:

Y = β0 + β1 X' + ε, where we still assume the ε are N(0, σ²).

For example, if X' = √X then the predicted values are:

$\hat{Y} = \hat{\beta}_0 + \hat{\beta}_1 \sqrt{X}$
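In R, continuing the hypothetical Fit2 from the earlier sketch (Y regressed on √X), a prediction at a new X is made by transforming the new X value first:

> predict(Fit2, newdata = data.frame(SqrtX = sqrt(30)))   # predicted Y at X = 30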

Transforming Y (and possibly X):

Everything must be done in transformed values. For confidence intervals and prediction intervals (to be covered next week), get the intervals first and then transform the endpoints back to original units.

Example: Predicting house sales price using square feet. Regression equation is:

Predicted Ln(Price) = 11.2824 + 0.051 (Square feet in hundreds)

For a house with 2000 square feet = 20 hundred square feet:

$\hat{Y}' = 11.2824 + 0.051(20) = 12.3024$

So predicted price = exp(12.3024) = $220,224 (because taking exp reverses a natural log transformation).
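In R, continuing the hypothetical LnModel sketched above, the back-transformation of a prediction (and, later, of interval endpoints) looks like:

> new <- data.frame(SqFtHdrd = 20)   # a 2000-square-foot house
> pred <- predict(LnModel, newdata = new, interval = "prediction")
> exp(pred)   # transforms the fit and both interval endpoints back to dollars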

Assumption 3: Independent errors

1. The main way to check this is to understand how the data were collected. For example, suppose we wanted to predict blood pressure from amount of fat consumed in the diet. If we were to sample entire families, and treat them as independent, that would be wrong. If one member of the family has high blood pressure, related members are likely to have it as well. Taking a random sample is one way to make sure the observations are independent.

2. If the values were collected over time (or space), it makes sense to plot the residuals versus the order collected and see if there is a trend or cycle (a sketch follows below).
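A sketch of such a plot in R, assuming the rows of the data set are stored in the order in which they were collected:

> plot(resid(HWModel), type = "b", xlab = "Order collected", ylab = "Residuals")
> abline(h = 0)   # look for trends or cycles around this line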

OUTLIERS

Some reasons for outliers:

1. A mistake was made. If it’s obvious that a mistake was made in recording the data, or that the person obviously lied, etc., it’s okay to throw out an outlier and do the analysis without it. For example, a height of 7 inches is an obvious mistake. If you can’t go back and figure out what it should have been (70 inches? 72 inches? 67 inches?) you have no choice but to discard that case.

2. The person (or unit) belongs to a different population, and should not be part of the analysis, so it’s okay to remove the point(s). An example is for predicting house prices, if a data set has a few mansions (5000+ square feet) but the other houses are all smaller (1000 to 2500 square feet, say), then it makes sense to predict sales prices for the smaller houses only. In the future when the equation is used, it should be used only for the range of data from which it was generated.

3. Sometimes outliers are simply the result of natural variability. In that case, it is NOT okay to discard them. If you do, you will underestimate the variance. (Story of Alcohol and Tobacco from DASL)

[Figure: scatterplot of Alcohol vs Tobacco]

Notice Northern Ireland in lower right corner – a definite outlier, based on the combined (X,Y) values.

Why is it an outlier? It represents a different religion than other areas of Britain.

[Figure: residuals versus the fitted values (response is Alcohol)]

In the plot of residuals versus fitted values, it's even more obvious that the outlier is a problem.

[Figure: residuals versus Tobacco (response is Alcohol)]

The plot of residuals versus the X variable is very similar to residuals vs fitted values. Again the problem is obvious.

[Figure: scatterplot of Alcohol vs Tobacco, with Northern Ireland removed]

Here is a plot with Northern Ireland removed.

[Figure: residuals versus Tobacco (response is Alcohol), with Northern Ireland removed]

Here is a residual plot with Northern Ireland removed. Notice how much the analysis changes when the outlier is removed. "R-Sq"/100 is the square of the correlation between Alcohol and Tobacco, so the correlation goes from √.050 ≈ .22 to √.615 ≈ .78. The intercept and slope change too:

With Outlier (Northern Ireland)

The regression equation is
Alcohol = 4.35 + 0.302 Tobacco

Predictor   Coef     SE Coef   T      P
Constant    4.351    1.607     2.71   0.024
Tobacco     0.3019   0.4388    0.69   0.509

S = 0.819630   R-Sq = 5.0%   R-Sq(adj) = 0.0%

Without Outlier

The regression equation is
Alcohol = 2.04 + 1.01 Tobacco

Predictor   Coef     SE Coef   T      P
Constant    2.041    1.001     2.04   0.076
Tobacco     1.0059   0.2813    3.58   0.007

S = 0.446020   R-Sq = 61.5%   R-Sq(adj) = 56.7%
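The output above is Minitab-style; a sketch of the same comparison in R, with hypothetical data frame and column names (DASL, Alcohol, Tobacco, Region):

> FitAll <- lm(Alcohol ~ Tobacco, data = DASL)
> FitNoNI <- lm(Alcohol ~ Tobacco, data = DASL, subset = (Region != "Northern Ireland"))
> summary(FitAll)$r.squared    # about 0.05 with the outlier
> summary(FitNoNI)$r.squared   # about 0.615 without it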

Transformations in R: If you want to transform the response variable Y into some new variable Y', you can add a new column to the data table consisting of the new variable. For the data table named MyData, to square the response variable GPA and add it to the data table, type:

> MyData <- cbind(MyData, MyData$GPA^2)

To take its square root, type:

> MyData <- cbind(MyData, sqrt(MyData$GPA))

To take its natural logarithm, type:

> MyData <- cbind(MyData, log(MyData$GPA))

To take its common logarithm (base 10), type:

> MyData <- cbind(MyData, log10(MyData$GPA))

To take its reciprocal, type:

> MyData <- cbind(MyData, 1/MyData$GPA)

To take its reciprocal square root, type:

> MyData <- cbind(MyData, 1/sqrt(MyData$GPA))

And so on. You will want to give the new column in the data table an appropriate name. You can then run a linear model using the transformed response variable and the original predictor. More in discussion on Friday!
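For example, a sketch of naming the new column and then fitting the model (the predictor name HSRank is hypothetical):

> names(MyData)[ncol(MyData)] <- "LogGPA"         # name the newly added column
> NewModel <- lm(LogGPA ~ HSRank, data = MyData)  # regression on the transformed Y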