Stat 579: Generalized Linear Models and Extensions

Yan Lu

Jan, 2018, week 3

Hypothesis tests

- Likelihood ratio tests

- Wald tests

- Score tests

Generalized likelihood ratio tests

Let $Y = (Y_1, Y_2, \cdots, Y_n)$, where $Y_1, Y_2, \cdots, Y_n$ have joint pdf $f(y; \theta)$ for $\theta \in \Omega$, and consider the hypothesis

$$H_0: \theta \in \Omega_0 \quad \text{vs.} \quad H_a: \theta \in \Omega - \Omega_0$$

The generalized likelihood ratio (GLR) is defined by

$$\lambda(y) = \frac{\max_{\theta \in \Omega_0} f(y; \theta)}{\max_{\theta \in \Omega} f(y; \theta)} = \frac{f(y; \hat\theta_0)}{f(y; \hat\theta)}$$

- $\hat\theta$ denotes the usual MLE of $\theta$
- $\hat\theta_0$ denotes the MLE under the restriction that $H_0$ is true
- If $y \sim f(y; \theta_1, \cdots, \theta_k)$, then under $H_0: (\theta_1, \theta_2, \cdots, \theta_r) = (\theta_{10}, \theta_{20}, \cdots, \theta_{r0})$, $r < k$, approximately for large $n$, $-2\log\lambda(y) \sim \chi^2(r)$
- An appropriate size $\alpha$ test is to reject $H_0$ if $-2\log\lambda(y) \geq \chi^2_{1-\alpha}(r)$

Example

For the binary outcome, if the hypothesis is

$$H_0: p = p_0 \quad \text{vs.} \quad H_a: p \neq p_0$$

then the log-likelihoods under the null and at the unrestricted MLE are

$$l(\hat\mu_0) = \sum_{i=1}^n y_i \log(p_0) + \left(n - \sum_{i=1}^n y_i\right)\log(1 - p_0)$$

$$l(\hat\mu) = \sum_{i=1}^n y_i \log(\hat p) + \left(n - \sum_{i=1}^n y_i\right)\log(1 - \hat p)$$

$$\lambda(y) = \exp(l(\hat\mu_0))/\exp(l(\hat\mu)), \qquad -2\log\lambda(y) \sim \chi^2_1$$
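As a quick illustration, here is a minimal sketch of this test in R; the data vector y and the value p0 below are hypothetical, chosen only for the example.

y  <- c(1, 0, 0, 1, 1, 0, 0, 0, 1, 0)    # hypothetical 0/1 data
n  <- length(y)
p0 <- 0.5                                # hypothesized value under H0
phat <- mean(y)                          # unrestricted MLE

loglik <- function(p) sum(y) * log(p) + (n - sum(y)) * log(1 - p)
lrt <- -2 * (loglik(p0) - loglik(phat))  # -2 log lambda(y)
pchisq(lrt, df = 1, lower.tail = FALSE)  # approximate chi-square p-value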

Measuring goodness of fit: the saturated model

$$l(y; \phi, \mu) = \sum_{j=1}^n \left[\frac{y_j\theta_j(\mu) - b(\theta_j(\mu))}{a(\phi)} + c(y_j, \phi)\right]$$

- Fit the model by ML; let $\hat\mu$ be the MLE of $\mu$ under

$$g(\mu_i) = x_i'\beta = \beta_0 + x_{i1}\beta_1 + \cdots + x_{i(p-1)}\beta_{p-1}$$

The maximized value of the log-likelihood is

$$l(y; \phi, \hat\mu) = \sum_{j=1}^n \left[\frac{y_j\theta_j(\hat\mu) - b(\theta_j(\hat\mu))}{a(\phi)} + c(y_j, \phi)\right]$$

- Now fit the alternate model

$$g(\mu_i) = x_i'\beta + \delta_i'\tau = \beta_0 + x_{i1}\beta_1 + \cdots + x_{i(p-1)}\beta_{p-1} + \tau_0 + \delta_{i1}\tau_1 + \cdots + \delta_{i(r-1)}\tau_{r-1}$$

- where for now we let $r = n - p$, so that we have $n$ observations and $n$ regression parameters.

- If there are no linear dependencies among the predictors, then this is the so-called saturated model, which places no constraints on $g(\mu_i)$ and consequently no constraints on $\mu_i$.

- Let $\tilde\mu$ be the MLE of $\mu$ for the saturated model; we have $\tilde\mu = y$. The score equation under the saturated model is

$$X'W\Delta(y - \mu) = 0$$

where $X$ is the $n \times n$ "extended" design matrix with rows $[x_i', \delta_i']$. By computation, $X$ is invertible, as are $W$ and $\Delta$, so

$$X'W\Delta(y - \mu) = 0 \;\Rightarrow\; y - \mu = 0 \;\Rightarrow\; \tilde\mu = y$$

The likelihood ratio test (LRT) is the ratio of the likelihood at the hypothesized parameter values (reduced model) to the likelihood of the data (saturated model) at the MLE(s):

$$\lambda(y) = \frac{\text{likelihood of reduced model}}{\text{likelihood of saturated model}}$$

$$g(\mu_i) = \beta_0 + x_{i1}\beta_1 + \cdots + x_{i(p-1)}\beta_{p-1} + \tau_0 + \delta_{i1}\tau_1 + \cdots + \delta_{i(r-1)}\tau_{r-1}$$

The likelihood ratio statistic for testing $H_0: \tau = 0$ is

$$-2\log\lambda(y) = 2[l(y; \phi, y) - l(y; \phi, \hat\mu)]$$

Because the alternative model is saturated, this is also viewed as a measure of how well the null model $g(\mu_i) = x_i'\beta$ fits the data.

Deviance

$$f(y_j|\theta_j, \phi) = \exp\left[\frac{y_j\theta_j - b(\theta_j)}{a_j(\phi)} + c(y_j, \phi)\right]$$

Let $\hat\mu$ be the MLE under $H_0$, let $\tilde\mu$ be the MLE for the saturated model, and let $a_j(\phi) = \phi/w_j$. Then

$$-2\log\lambda(y) = \frac{1}{\phi}\sum_{j=1}^n 2w_j\left[(\theta_j(y) - \theta_j(\hat\mu))y_j - \left(b(\theta_j(y)) - b(\theta_j(\hat\mu))\right)\right] = \frac{1}{\phi}D(y, \hat\mu)$$

- $D(y, \hat\mu)$ is called the deviance
- $\frac{1}{\phi}D(y, \hat\mu)$ is called the scaled deviance, usually denoted $D^*(y, \hat\mu)$

Example: normal distribution deviance

The normal distribution has

$$f(y|\mu, \sigma^2) = \exp\left[\frac{y\mu - 0.5\mu^2}{\sigma^2} + c(y, \sigma)\right]$$

$$\theta = \mu, \quad b(\theta) = 0.5\mu^2, \quad a(\phi) = \sigma^2/w_j, \quad w_j = 1, \quad \phi = \sigma^2$$

$$\begin{aligned}
\text{Deviance} &= 2\sum_{j=1}^n\left[(y_j - \hat\mu_j)y_j - \left(\tfrac{1}{2}y_j^2 - \tfrac{1}{2}\hat\mu_j^2\right)\right] \\
&= 2\sum_{j=1}^n\left[y_j^2 - \hat\mu_j y_j - \tfrac{1}{2}y_j^2 + \tfrac{1}{2}\hat\mu_j^2\right] \\
&= \sum_{j=1}^n\left[y_j^2 - 2\hat\mu_j y_j + \hat\mu_j^2\right] \\
&= \sum_{j=1}^n (y_j - \hat\mu_j)^2 \\
&= \text{the sum of squared residuals}
\end{aligned}$$

Distribution and Deviance

Distribution   Deviance

Binomial       $2\sum_j \left[y_j\log\left(\dfrac{y_j}{n_j\hat\mu_j}\right) + (n_j - y_j)\log\left(\dfrac{n_j - y_j}{n_j - n_j\hat\mu_j}\right)\right]$

Poisson        $2\sum_j w_j\left[y_j\log\left(\dfrac{y_j}{\hat\mu_j}\right) - (y_j - \hat\mu_j)\right]$

Normal         $\sum_j w_j(y_j - \hat\mu_j)^2$

- The larger the deviance, the poorer the fit of the model; large values of $D(y; \hat\mu)$ suggest a general lack of fit.

- If the model fits perfectly, $D(y; \hat\mu) = 0$.
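As a quick check of the Poisson row of this table, the sketch below computes the deviance by hand and compares it with R's deviance(); the data and model are hypothetical, and the counts are chosen to be nonzero since the $y_j\log(y_j/\hat\mu_j)$ term is taken as 0 when $y_j = 0$.

y   <- c(2, 5, 1, 7, 3, 4, 6, 2)          # hypothetical counts (no zeros)
x   <- 1:8
fit <- glm(y ~ x, family = poisson)
mu  <- fitted(fit)

D <- 2 * sum(y * log(y / mu) - (y - mu))  # Poisson deviance with w_j = 1
c(by_hand = D, from_R = deviance(fit))    # the two values should agree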

Remarks:

1. The standard Poisson and binomial models have $\phi = 1$, so deviance = scaled deviance.

2. In certain settings, $D^*(y, \hat\mu) \sim \chi^2_{n-p}$, where $n - p$ is the difference in the number of parameters between the null and saturated models.
   - For the normal model, this result is exact, but it is of little practical use since we don't know $\sigma^2$.
   - For the binomial model, $y_j \sim \text{Bin}(n_j, \mu_j)$, $j = 1, 2, \cdots, n$; this assumes the $n_j$ are large and the number of binomial observations $n$ is fixed.
   - For the Poisson model, $y_j \sim \text{Poisson}(\mu_j)$, $j = 1, 2, \cdots, n$; this requires the $\mu_j$ to be large and $n$ fixed.
   - Under suitable conditions for the binomial and Poisson models, this result can be used to test the adequacy of the model via the p-value

$$\text{p-value} = P(\chi^2_{n-p} > D^*_{\text{obs}})$$

   where $D^*_{\text{obs}}$ is the observed value of the scaled deviance (equal to the raw deviance here, since $\phi = 1$).

3. Even if the approximation breaks down, we can show that

$$E(D^*(y, \hat\mu)) \approx n - p$$

Since large values of $D^*(y, \hat\mu)$ suggest lack of fit, many researchers recommend comparing $D^*(y, \hat\mu)$ to $n - p$ to get a rough idea of lack of fit:

$$\frac{D^*(y, \hat\mu)}{n - p} < 1 \;\rightarrow\; \text{no evidence of lack of fit}$$

$$\frac{D^*(y, \hat\mu)}{n - p} \gg 1 \;\rightarrow\; \text{some suggestion of lack of fit}$$

However, there is no accepted "cutoff" for how much greater than 1 this ratio must be to indicate lack of fit.

4. The scaled deviance $D^*$ provides information on whether the model fits the data, while tests on regression coefficients assess the significance of effects assuming the model fits.

5. An alternative GOF measure is the generalized Pearson statistic

$$X^2 = \sum_{j=1}^n \frac{(y_j - \hat\mu_j)^2}{V(\hat\mu_j)}\, w_j$$

The scaled Pearson statistic is

$$X^2_* = X^2/\phi$$

These are used analogously to the deviance and scaled deviance.
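A sketch of how these statistics are used in practice, continuing the hypothetical Poisson fit from the earlier example: compute the deviance and the generalized Pearson statistic from the fitted glm object and compare each to $n - p$.

y   <- c(2, 5, 1, 7, 3, 4, 6, 2)                 # hypothetical counts
fit <- glm(y ~ seq_along(y), family = poisson)

X2 <- sum(residuals(fit, type = "pearson")^2)    # generalized Pearson statistic
D  <- deviance(fit)
np <- df.residual(fit)                           # n - p
c(deviance_ratio = D / np, pearson_ratio = X2 / np)  # values near 1: no gross lack of fit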

Examples of Pearson statistics

For the Poisson distribution, $y \sim \text{Poisson}(\mu)$:

$$f(y) = P(Y = y) = e^{-\mu}\mu^y/y! = \exp\{y\log\mu - \mu - \log y!\}$$

$$\theta = \log\mu, \quad \phi = 1, \quad w_j = 1, \quad b(\theta) = e^\theta = \mu, \quad b''(\theta) = e^\theta = \mu, \quad V(y) = E(y)$$

$$X^2 = \sum_{j=1}^n \frac{(y_j - \hat\mu_j)^2}{V(\hat\mu_j)}\, w_j = \sum_{j=1}^n \frac{(y_j - \hat\mu_j)^2}{\hat\mu_j}$$

$X^2$ reduces to the usual Pearson statistic.

Comparing models

Model (1): smaller model (reduced model), $g(\mu_i) = x_i'\beta$

Model (2): larger model (full model), $g(\mu_i) = x_i'\beta + \delta_i'\tau$

- The alternative larger model is not necessarily the saturated model; i.e., with $p + r < n$, the number of parameters in Model (2) can be less than the number of observations $n$.

- Assuming $\phi$ is fixed, the likelihood ratio test for comparing (1) and (2) is equivalent to testing $H_0: \tau = 0$:

$$D^*(\hat\mu_2, \hat\mu_1) = D^*(y, \hat\mu_1) - D^*(y, \hat\mu_2) \sim \chi^2_r$$

where $\hat\mu_1$ and $\hat\mu_2$ are the MLEs under models (1) and (2) respectively, and $r$ is the number of parameters in $\tau$, that is, the residual df of the smaller model minus that of the larger model.

Wald Tests

Wald test: the statistic is a function of the difference between the MLE and the hypothesized value, normalized by an estimate of the standard error of the MLE.

- Binary example:

$$W = \frac{(\hat p - p_0)^2}{\hat p(1 - \hat p)/n}$$

For large $n$, $W \sim \chi^2_1$.

- In general,

$$\left[L'\begin{pmatrix}\hat\beta\\ \hat\tau\end{pmatrix} - L'\begin{pmatrix}\beta\\ \tau\end{pmatrix}\right]'\left[L'\,I^{-1}(\hat\alpha)\,L\right]^{-1}\left[L'\begin{pmatrix}\hat\beta\\ \hat\tau\end{pmatrix} - L'\begin{pmatrix}\beta\\ \tau\end{pmatrix}\right] \sim \chi^2_l$$

where $l = \text{rank}(L)$.

For example:

- Testing $H_0: \beta_i = 0$, i.e., $H_0: L'\beta = 0$ with $L' = [0, 0, \cdots, 1, 0, \cdots, 0]$, where the $i$th element is 1.
- Testing $H_0: \beta_i = \beta_j$, i.e., $H_0: \beta_i - \beta_j = 0$ or $H_0: L'\beta = 0$ with $L' = [0, 0, \cdots, 1, 0, \cdots, -1, 0, \cdots, 0]$, where the $i$th and $j$th elements are 1 and $-1$ respectively.
- Several linear restrictions, $H_0: \beta_2 + \beta_3 = 1,\; \beta_4 + \beta_6 = 0,\; \beta_5 + \beta_6 = 0$:

$$L' = \begin{pmatrix} 0 & 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 1 \end{pmatrix}, \quad \beta = \begin{pmatrix}\beta_1 \\ \beta_2 \\ \vdots \\ \beta_6\end{pmatrix}, \quad c = \begin{pmatrix}1 \\ 0 \\ 0\end{pmatrix}$$

$$L'\beta = c, \qquad \text{rank}(L) = 3$$
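As a sketch, this Wald statistic can be computed directly from coef() and vcov() of a fitted model. Here L encodes the single restriction that the rank-2 and rank-3 coefficients are equal in the admission-data fit myfit introduced later in these notes, so the result should agree with the wald.test() output shown there.

L  <- rbind(c(0, 0, 0, 1, -1, 0))          # one restriction: beta_rank2 - beta_rank3 = 0
cc <- 0                                    # hypothesized value c
d  <- L %*% coef(myfit) - cc               # L'beta_hat - c
W  <- t(d) %*% solve(L %*% vcov(myfit) %*% t(L)) %*% d
pchisq(as.numeric(W), df = nrow(L), lower.tail = FALSE)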

Score tests

If the MLE equals the hypothesized value $p_0$, then $p_0$ would maximize the likelihood and $U(p_0) = 0$. The score statistic measures how far the score function is from zero when evaluated at the null hypothesis. The test statistic for the binary outcome example is

$$S = U(p_0)^2 / I(p_0), \qquad S \sim \chi^2_1$$

The LRT, Wald, and score tests are asymptotically equivalent.
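For the binary example, the score test takes only a few lines; the data below are the same hypothetical 0/1 sample used earlier.

y  <- c(1, 0, 0, 1, 1, 0, 0, 0, 1, 0)       # hypothetical 0/1 data
n  <- length(y)
p0 <- 0.5

U <- sum(y) / p0 - (n - sum(y)) / (1 - p0)  # score function U(p0)
I <- n / (p0 * (1 - p0))                    # Fisher information I(p0)
S <- U^2 / I                                # score statistic
pchisq(S, df = 1, lower.tail = FALSE)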

Example: admission data

$$\text{logit}(\mu_i) = \beta_0 + \beta_1 x_{i1} + \cdots + \beta_{p-1}x_{i(p-1)} = x_i'\beta$$

$$\mu_i = E(y_i) = \frac{\exp(x_i'\beta)}{1 + \exp(x_i'\beta)}$$

$$l = \sum_{i=1}^n y_i\log\left(\frac{\exp(x_i'\beta)}{1 + \exp(x_i'\beta)}\right) + \sum_{i=1}^n (1 - y_i)\log\left(1 - \frac{\exp(x_i'\beta)}{1 + \exp(x_i'\beta)}\right)$$

- The $p$ score equations cannot be solved analytically. It is common to use a numerical algorithm, such as the Newton-Raphson algorithm, to obtain the MLEs (see the sketch below).

- The information matrix $I$ is the $p \times p$ matrix built from the second partial derivatives of the log-likelihood with respect to the parameters; the inverted information matrix is the covariance matrix for $\hat\beta$.
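A minimal sketch of Newton-Raphson for the logistic MLE, using a simulated design matrix and response (both hypothetical, not the admission data); at convergence, the inverse of the information matrix is the estimated covariance matrix of the coefficients.

set.seed(1)
X <- cbind(1, rnorm(100))                     # hypothetical design matrix (intercept + one predictor)
y <- rbinom(100, 1, plogis(X %*% c(-0.5, 1))) # hypothetical 0/1 response

beta <- rep(0, ncol(X))                       # starting values
for (it in 1:25) {
  mu    <- plogis(X %*% beta)                 # fitted probabilities
  W     <- as.vector(mu * (1 - mu))           # variance weights mu(1 - mu)
  score <- t(X) %*% (y - mu)                  # score vector U(beta)
  info  <- t(X) %*% (X * W)                   # information matrix X'WX
  beta  <- beta + solve(info, score)          # Newton-Raphson update
}
cbind(newton = as.vector(beta),
      glm    = coef(glm(y ~ X[, 2], family = binomial)))  # should agree
solve(info)                                   # estimated covariance matrix of beta_hat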

Testing a single logistic regression coefficient

To test a single logistic regression coefficient, we will use the Wald test:

$$\frac{\hat\beta_j - \beta_{j0}}{\hat{se}(\hat\beta_j)} \sim N(0, 1)$$

- $\hat{se}(\hat\beta_j)$ is calculated by taking the inverse of the estimated information matrix. This value is given in the R output for $\beta_{j0} = 0$.
- As in linear regression, this test is conditional on all other coefficients being in the model.

Fitting the glm in R, we have the following results:

myfit0 <- glm(admit ~ gpa, data = ex.data, family = "binomial")
summary(myfit0)

            Estimate Std. Error z value Pr(>|z|)
(Intercept)  -4.3576     1.0353  -4.209 2.57e-05 ***
gpa           1.0511     0.2989   3.517 0.000437 ***

- The fitted model is

$$\text{logit}(\hat\mu_i) = -4.3576 + 1.0511 \cdot \text{gpa}_i$$

- The column labelled "z value" is the Wald test statistic: $3.517 = 1.0511/0.2989$. Since the p-value (0.000437) is far below 0.05, reject $H_0: \beta_1 = 0$ and conclude that GPA has a significant effect on the log odds of admission.

Confidence intervals for the coefficients and the odds ratios

$$\text{logit}(\mu_i) = \beta_0 + \beta_1 x_{i1} + \cdots + \beta_{p-1}x_{i(p-1)} = x_i'\beta$$

- A $(1 - \alpha) \times 100\%$ confidence interval for $\beta_j$, $j = 0, 1, \cdots, p-1$, can be calculated as

$$\hat\beta_j \pm z_{1-\alpha/2}\,\hat{se}(\hat\beta_j)$$

- The $(1 - \alpha) \times 100\%$ confidence interval for the odds ratio over a one-unit change in $x_j$ is

$$\left[\exp\!\left(\hat\beta_j - z_{1-\alpha/2}\,\hat{se}(\hat\beta_j)\right),\; \exp\!\left(\hat\beta_j + z_{1-\alpha/2}\,\hat{se}(\hat\beta_j)\right)\right]$$
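As a sketch, using the gpa coefficient from the gpa-only fit shown earlier (estimate 1.0511, standard error 0.2989), the 95% interval and the implied odds-ratio interval are:

b  <- 1.0511; se <- 0.2989                # estimate and SE for gpa from myfit0 above
ci <- b + c(-1, 1) * qnorm(0.975) * se    # 95% CI for the coefficient
ci
exp(ci)                                   # 95% CI for the odds ratio per unit change in gpa

The example on the next slide carries out the same calculation for the gpa coefficient in the larger model.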

Example

Fit admission status with gre, gpa and rank:

## Coefficients:
##              Estimate Std. Error z value Pr(>|z|)
## (Intercept) -3.989979   1.139951  -3.500 0.000465 ***
## gre          0.002264   0.001094   2.070 0.038465 *
## gpa          0.804038   0.331819   2.423 0.015388 *
## rank2       -0.675443   0.316490  -2.134 0.032829 *
## rank3       -1.340204   0.345306  -3.881 0.000104 ***
## rank4       -1.551464   0.417832  -3.713 0.000205 ***

- The odds ratio for a one-unit change in gpa is $\exp(0.804038) = 2.2345$.

- The 95% CI of the odds ratio for a one-unit change in gpa is

$$[\exp(0.8040 - 1.96 \cdot 0.3318),\; \exp(0.8040 + 1.96 \cdot 0.3318)] = [e^{0.1537}, e^{1.4543}] = [1.1661, 4.2816]$$

exp(cbind(OR = coef(myfit), confint(myfit)))
## Waiting for profiling to be done...
##                    OR       2.5 %    97.5 %
## (Intercept) 0.0185001 0.001889165 0.1665354
## gre         1.0022670 1.000137602 1.0044457
## gpa         2.2345448 1.173858216 4.3238349
## rank2       0.5089310 0.272289674 0.9448343
## rank3       0.2617923 0.131641717 0.5115181
## rank4       0.2119375 0.090715546 0.4706961

Testing a single logistic regression variable using the LRT

$$\text{logit}(\mu_i) = \beta_0 + \beta_1\text{gre}_i + \beta_2\text{gpa}_i + \beta_3 x_{2i} + \beta_4 x_{3i} + \beta_5 x_{4i}$$

$$x_2 = \begin{cases}1 & \text{if rank 2}\\ 0 & \text{otherwise}\end{cases} \qquad x_3 = \begin{cases}1 & \text{if rank 3}\\ 0 & \text{otherwise}\end{cases} \qquad x_4 = \begin{cases}1 & \text{if rank 4}\\ 0 & \text{otherwise}\end{cases}$$

- We want to test the effect of the variable rank, i.e.,

$$H_0: \beta_3 = \beta_4 = \beta_5 = 0$$

- The model under the null hypothesis is

$$\text{logit}(\mu_i) = \beta_0 + \beta_1\text{gre}_i + \beta_2\text{gpa}_i$$

- $-2\log\lambda(y) = -2(l(\hat\beta|H_0) - l(\hat\beta|H_a))$, so we need to know both $l(\hat\beta|H_0)$ and $l(\hat\beta|H_a)$. We fit two models: the full model with gre, gpa and rank, and the reduced model under $H_0$ with only gre and gpa.
- Then $l(\hat\beta|H_0)$ is the log-likelihood from the model under $H_0$, and $l(\hat\beta|H_a)$ is the log-likelihood from the full model.
- $-2\log\lambda(y) \sim \chi^2_3$.

Reduced model with gre and gpa

myfit2 <- glm(admit ~ gre + gpa, data = ex.data, family = "binomial")
summary(myfit2)
##
## Coefficients:
##              Estimate Std. Error z value Pr(>|z|)
## (Intercept) -4.949378   1.075093  -4.604 4.15e-06 ***
## gre          0.002691   0.001057   2.544   0.0109 *
## gpa          0.754687   0.319586   2.361   0.0182 *
## ---
## (Dispersion parameter for binomial family taken to be 1)
##
##     Null deviance: 499.98 on 399 degrees of freedom
## Residual deviance: 480.34 on 397 degrees of freedom
## AIC: 486.34

Full model with gre, gpa and rank

myfit <- glm(admit ~ gre + gpa + rank, data = ex.data, family = "binomial")
summary(myfit)
##
## Coefficients:
##              Estimate Std. Error z value Pr(>|z|)
## (Intercept) -3.989979   1.139951  -3.500 0.000465 ***
## gre          0.002264   0.001094   2.070 0.038465 *
## gpa          0.804038   0.331819   2.423 0.015388 *
## rank2       -0.675443   0.316490  -2.134 0.032829 *
## rank3       -1.340204   0.345306  -3.881 0.000104 ***
## rank4       -1.551464   0.417832  -3.713 0.000205 ***

##     Null deviance: 499.98 on 399 degrees of freedom
## Residual deviance: 458.52 on 394 degrees of freedom
## AIC: 470.52

Compare two models

- The $-2\log\lambda$ against the saturated model is listed as the residual deviance in the output of summary(). For the full model, $-2\log\lambda = 458.52$; for the reduced model, $-2\log\lambda = 480.34$.
- Deviance difference $= 480.34 - 458.52 = 21.82 > 7.814728 = \chi^2_{0.95}(3)$, so we reject the null hypothesis and conclude that the reduced model is not adequate.

anova(myfit, myfit2)
## Analysis of Deviance Table
## Model 1: admit ~ gre + gpa + rank
## Model 2: admit ~ gre + gpa
##   Resid. Df Resid. Dev Df Deviance
## 1       394     458.52
## 2       397     480.34 -3  -21.826
qchisq(0.95, 3)
## [1] 7.814728
pchisq(21.826, 3, lower.tail = FALSE)
## [1] 7.090117e-05

Testing groups of variables using the LRT

Suppose instead of testing just one variable, we want to test a group of variables. This follows naturally from the likelihood ratio test. Let's look at it by example:

$$\text{logit}(\mu_i) = \beta_0 + \beta_1\text{gre}_i + \beta_2\text{gpa}_i + \beta_3 x_{2i} + \beta_4 x_{3i} + \beta_5 x_{4i}$$

We want to test $H_0: \beta_1 = \beta_2 = \beta_3 = \beta_4 = \beta_5 = 0$ versus the full model.

Reduced model: intercept model

myfit0 <- glm(admit ~ 1, data = ex.data, family = "binomial")
summary(myfit0)

## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)
## (Intercept)  -0.7653     0.1074  -7.125 1.04e-12 ***

## (Dispersion parameter for binomial family taken to be 1)
##     Null deviance: 499.98 on 399 degrees of freedom
## Residual deviance: 499.98 on 399 degrees of freedom
## AIC: 501.98

Notice that the null deviance and the residual deviance are the same, since we didn't use any x information in the modeling.

Compare the intercept model with the full model

anova(myfit0, myfit, test = "Chisq")
## Analysis of Deviance Table
##
## Model 1: admit ~ 1
## Model 2: admit ~ gre + gpa + rank
##   Resid. Df Resid. Dev Df Deviance  Pr(>Chi)
## 1       399     499.98
## 2       394     458.52  5   41.459 7.578e-08 ***

- Reject the reduced model in favor of the full model.

- df = 5


upper <- formula(~ gre + gpa + rank, data = ex.data)
model.aic <- step(myfit0, scope = list(lower = ~., upper = upper))
## Start:  AIC=501.98
## admit ~ 1
##
##        Df Deviance    AIC
## + rank  3   474.97 482.97
## + gre   1   486.06 490.06
## + gpa   1   486.97 490.97
## <none>      499.98 501.98

The Akaike information criterion (AIC) is an estimator of the relative quality of statistical models for a given set of data.

- Given a collection of models for the data, AIC estimates the quality of each model relative to each of the other models.
- AIC provides a criterion for model selection.

## Step:  AIC=472.88
## admit ~ rank + gpa
##
##        Df Deviance    AIC
## + gre   1   458.52 470.52
## <none>      462.88 472.88
## - gpa   1   474.97 482.97
## - rank  3   486.97 490.97
##
## Step:  AIC=470.52
## admit ~ rank + gpa + gre
##
##        Df Deviance    AIC
## <none>      458.52 470.52
## - gre   1   462.88 472.88
## - gpa   1   464.53 474.53
## - rank  3   480.34 486.34

- The smallest AIC is 470.52, with variables rank, gpa and gre.

- The second smallest is AIC = 472.88, with variables rank and gpa.

- Comparing these two models, we choose the full model with rank, gpa and gre.

Wald test

# test that the coefficient for rank=2 is equal to the coefficient for rank=3
library(aod)  # wald.test() comes from the aod package
l <- cbind(0, 0, 0, 1, -1, 0)
wald.test(b = coef(myfit), Sigma = vcov(myfit), L = l)
## Wald test:
## ----------
##
## Chi-squared test:
## X2 = 5.5, df = 1, P(> X2) = 0.019

Since the p-value for the test is 0.019, we conclude that the coefficient for rank=2 is not equal to the coefficient for rank=3; that is, there is a significant difference between the effects of rank 2 and rank 3 universities on the log odds of admission.

Assessment of model fit

- Model selection

- Residuals: can be useful for identifying potential outliers (observations not well fit by the model) or misspecified models. Residuals are not very useful in logistic regression.
  - Raw residuals
  - Deviance residuals
  - Pearson residuals

- Influence
  - Cook's distance: measures the influence of case $i$ on all of the fitted values
  - Leverage

- Prediction

Residuals

1. Raw residuals: $y_j - \hat\mu_j$; these are called response residuals for GLMs. Since the variance of the response is not constant for most GLMs, we need some modification.

2. Deviance residuals $d_j$: the deviance residual for the $j$th observation is the signed square root of the contribution of the $j$th case to the sum for the model deviance,

$$d_j = \text{sign}(y_j - \hat\mu_j)\sqrt{d_j^2}, \qquad D(y, \hat\mu) = \sum_{j=1}^n d_j^2$$

- Useful for determining if individual points are not well fit by the model.

- You can get the deviance residuals using the function residuals() in R.

3. Pearson residuals $\Gamma_j$

$$\Gamma_j = (y_j - \hat\mu_j)\sqrt{\frac{w_j}{V(\hat\mu_j)}}, \qquad X^2 = \sum_{j=1}^n \Gamma_j^2$$

Example: for the Poisson distribution, $y \sim \text{Poisson}(\mu)$,

$$f(y) = P(Y = y) = e^{-\mu}\mu^y/y! = \exp\{y\log\mu - \mu - \log y!\}$$

$$\theta = \log\mu, \quad \phi = 1, \quad w_j = 1, \quad b(\theta) = e^\theta = \mu, \quad b''(\theta) = e^\theta = \mu, \quad V(\mu_j) = \mu_j, \quad V(\hat\mu_j) = e^{\hat\theta_j}$$

Pearson residual:

$$\Gamma_j = \frac{y_j - \hat\mu_j}{\sqrt{\hat\mu_j}}$$

Recall that the deviance for the Poisson is

$$2\sum_j w_j\left[y_j\log\left(\frac{y_j}{\hat\mu_j}\right) - (y_j - \hat\mu_j)\right]$$

so

$$d_j = \text{sign}(y_j - \hat\mu_j)\sqrt{2\left[y_j\log\left(\frac{y_j}{\hat\mu_j}\right) - (y_j - \hat\mu_j)\right]}$$

Example: logistic regression

$$\log\frac{\mu_i}{1 - \mu_i} = \hat\beta_0 + \hat\beta_1 x_{i1} + \hat\beta_2 x_{i2}$$

- $\hat\mu_i$: fitted probabilities
- Raw residual: $y_i - \hat\mu_i$
- Pearson residuals: $\Gamma_i = \dfrac{y_i - \hat\mu_i}{\sqrt{\hat\mu_i(1 - \hat\mu_i)}}$
  - This is based on the idea of subtracting off the mean and dividing by the standard deviation.
  - If we replace $\hat\mu_i$ by $\mu_i$, then $\Gamma_i$ has mean 0 and variance 1.
- Deviance residuals: based on the contribution of each point to the likelihood.
  - For logistic regression, $l = \sum_{i=1}^n \{y_i\log\hat\mu_i + (1 - y_i)\log(1 - \hat\mu_i)\}$
  - $d_i = \text{sign}(y_i - \hat\mu_i)\sqrt{-2\{y_i\log\hat\mu_i + (1 - y_i)\log(1 - \hat\mu_i)\}}$
  - If $y_i = 1$, $\text{sign}(y_i - \hat\mu_i) = 1$; if $y_i = 0$, $\text{sign}(y_i - \hat\mu_i) = -1$.
- Each of these types of residuals can be squared and added together to create an RSS-like statistic (see the sketch below):
  - Deviance: $D = \sum_{i=1}^n d_i^2$
  - Pearson statistic: $X^2 = \sum_{i=1}^n \Gamma_i^2$
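The three residual types can be extracted from a fitted glm with residuals(); a sketch for the admission-data fit myfit:

r_resp <- residuals(myfit, type = "response")  # raw residuals y - mu_hat
r_pear <- residuals(myfit, type = "pearson")   # Pearson residuals
r_dev  <- residuals(myfit, type = "deviance")  # deviance residuals

sum(r_dev^2)   # reproduces the residual deviance D
sum(r_pear^2)  # the Pearson X^2 statistic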

4. Scaled Pearson and Deviance residuals

$$\frac{\Gamma_j}{\sqrt{\phi}} = \frac{y_j - \hat\mu_j}{\sqrt{\phi\,V(\hat\mu_j)/w_j}}$$

- Recall that $\text{Var}(y) = \dfrac{b''(\theta)\phi}{w} = \dfrac{V(\mu)\phi}{w}$.
- The scaled Pearson residual centers and scales $y_j$ by its estimated mean and standard deviation; hence, the scaled Pearson residuals are standardized.

- Both the scaled Pearson residual $\Gamma_j/\sqrt{\phi}$ and the scaled deviance residual $d_j/\sqrt{\phi}$ have approximately mean 0 and variance 1.

$$D^*(y; \hat\mu) = \frac{1}{\phi}D(y, \hat\mu) = \frac{1}{\phi}\sum_j d_j^2, \qquad E(D^*(y; \hat\mu)) \approx n - p \;\Rightarrow\; \text{Var}\left(\frac{d_j}{\sqrt{\phi}}\right) \approx \frac{n - p}{n} = 1 - p/n$$

On average, this is less than 1, but not by much if $p$ is small relative to $n$.

5. Standardized Pearson and Deviance residuals

$$\Gamma_{pj} = \frac{\Gamma_j}{\sqrt{\phi(1 - h_{jj})}}, \qquad \Gamma_{Dj} = \frac{d_j}{\sqrt{\phi(1 - h_{jj})}}$$

- This adjusts the scaled residuals to have mean 0 and variance 1. Here $h_{jj}$ is the leverage of the $j$th case, defined as the $j$th diagonal element of the hat matrix $H = W^{1/2}X(X'WX)^{-1}X'W^{1/2}$, where $W^{1/2}$ is the diagonal matrix with diagonal elements $\sqrt{w_{ii}}$. Note that $\hat\mu \neq Hy$.
- Generally speaking, the standardized deviance residuals tend to be preferable because they are more symmetric than the standardized Pearson residuals, but both are commonly used.
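A sketch of the standardized residuals for myfit computed from hatvalues(), together with R's shortcut rstandard(), which performs the same calculation:

h   <- hatvalues(myfit)                   # leverages h_jj
phi <- 1                                  # dispersion, 1 for the binomial model
std_dev  <- residuals(myfit, type = "deviance") / sqrt(phi * (1 - h))
std_pear <- residuals(myfit, type = "pearson")  / sqrt(phi * (1 - h))

head(cbind(by_hand = std_dev, shortcut = rstandard(myfit)))  # should agree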

6. Studentized deleted residuals

- Recall that in linear regression there were a number of diagnostic measures based on the idea of leaving observation $i$ out, refitting the model, and seeing how various things change (residuals, coefficient estimates, fitted values).

- The same idea can be extended to generalized linear models:

$$\Gamma_{pj} = \frac{\Gamma_j}{\sqrt{\phi_{(-j)}(1 - h_{jj})}}, \qquad \Gamma_{Dj} = \frac{d_j}{\sqrt{\phi_{(-j)}(1 - h_{jj})}}$$

- Studentized residuals less than $-2$ or greater than $+2$ deserve closer inspection.
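In R, rstudent() returns studentized residuals for a glm (computed by a one-step approximation rather than a full refit for each case); a sketch:

rs <- rstudent(myfit)
which(abs(rs) > 2)   # cases deserving closer inspection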

7. Outliers

- A primary use of residuals is in detecting outliers: observations whose values deviate from the expected and produce extremely large residuals.

- What is an outlier for 0/1 data?
  - It is difficult to claim that seeing either a 1 or a 0 constitutes an outlier.
  - Too many 0s or 1s in situations where we would not expect them (for example, too many 1s among cases we think have a small $p_i$) usually suggests a lack of fit.
  - Perfectly reasonable observations can have "unusually large" residuals.

- Influential data: an observation is influential if removing it substantially changes the estimates of the coefficients or the fitted probabilities.

- An observation with an extreme value on a predictor variable is called a point with high leverage.
  - Leverage is a measure of how far an independent variable deviates from its mean; it indicates the geometric extremeness of an observation in the multi-dimensional covariate space.
  - These leverage points can have an unusually large effect on the estimates of the logistic regression coefficients.
  - Leverages greater than $2\bar h$ or $3\bar h$ cause concern, where $\bar h = p/n$.

plot(hatvalues(myfit))

[Figure 1: Leverage vs. index for myfit]

> highleverage <- which(hatvalues(myfit) > .045)  # 0.045 = 3*p/n = 3*6/400
> hatvalues(myfit)[highleverage]
       373
0.04921401
> ex.data[373,]
    admit gre  gpa rank
373     1 680 2.42    1
> myfit$fit[373]
      373
0.3765075
> mgre
       1        2        3        4
611.8033 596.0265 574.8760 570.1493
> mgpa
       1        2        3        4
3.453115 3.361656 3.432893 3.318358

8. Cook's distance

If $\hat\beta$ is the MLE of $\beta$ under the model

$$g(\mu_i) = x_i'\beta$$

and $\hat\beta_{(-j)}$ is the MLE based on the data but holding out the $j$th observation, then Cook's distance for case $j$ is

$$c_j = \frac{1}{p}(\hat\beta - \hat\beta_{(-j)})'[\widehat{\text{Var}}(\hat\beta)]^{-1}(\hat\beta - \hat\beta_{(-j)}) = \frac{1}{p}(\hat\beta - \hat\beta_{(-j)})'X'\hat W X(\hat\beta - \hat\beta_{(-j)})$$

Some packages do not scale $c_j$ by $p$.

plot(cooks.distance(myfit))

[Figure 2: Cook's distance vs. index for myfit]

> max(cooks.distance(myfit))
[1] 0.01941192
> highcook <- which((cooks.distance(myfit)) > .05)  # 0.05 used here simply as a small critical value
> cooks.distance(myfit)[highcook]
named numeric(0)

Comments:

- In a binomial setup where all $n_i$ are big, the standardized deviance residuals should be close to Gaussian. The normal probability plot can be used to check this.

- In a Poisson setup where the counts are big, the standardized deviance residuals should be close to Gaussian. The normal probability plot can be used to check this.

- In a binomial setup where the numbers of successes are very small in some of the groups, numerical problems sometimes occur in the estimation. This is often seen in very large standard errors of the parameter estimates.

- Residuals are less informative for logistic regression than they are for linear regression:
  - yes/no (1 or 0) outcomes contain less information than continuous ones;
  - the fact that the adjusted response depends on the fit hampers our ability to use residuals as external checks on the model.

- We are making fewer distributional assumptions in logistic regression, so there is no need to inspect residuals for, say, normality or non-constant variance.

- Issues of outliers and influential observations are just as relevant for logistic regression and GLMs as they are for linear regression.

- If influential observations are present, it may or may not be appropriate to change the model, but you should at least understand why some observations are so influential.

Prediction

Fitted probabilities:

### prediction, fitted probabilities
myfit$fit[1:20]  # fitted probabilities
##          1          2          3          4          5
## 0.17262654 0.29217496 0.73840825 0.17838461 0.11835391
##          6          7          8          9         10
## 0.36996994 0.41924616 0.21700328 0.20073518 0.51786820
##         11         12         13         14         15
## 0.37431440 0.40020025 0.72053858 0.35345462 0.69237989
##         16         17         18         19         20
## 0.18582508 0.33993917 0.07895335 0.54022772 0.57351182

Predicted probabilities:

mgre <- tapply(ex.data$gre, ex.data$rank, mean)  # mean of gre by rank
mgpa <- tapply(ex.data$gpa, ex.data$rank, mean)  # mean of gpa by rank
newdata1 <- with(ex.data, data.frame(gre = mgre, gpa = mgpa, rank = factor(1:4)))
newdata1
##        gre      gpa rank
## 1 611.8033 3.453115    1
## 2 596.0265 3.361656    2
## 3 574.8760 3.432893    3
## 4 570.1493 3.318358    4

newdata1$rankP <- predict(myfit, newdata = newdata1, type = "response")
newdata1
##        gre      gpa rank     rankP
## 1 611.8033 3.453115    1 0.5428541
## 2 596.0265 3.361656    2 0.3514055
## 3 574.8760 3.432893    3 0.2195579
## 4 570.1493 3.318358    4 0.1704703

- The predicted probability of being accepted into a graduate program is 0.5429 for students from the highest-prestige undergraduate institutions (rank = 1) with gre = 611.8 and gpa = 3.45.

Translate the estimated probabilities into a predicted outcome

1. Use 0.5 as a cutoff:
   - if $\hat\mu_i$ for a new observation is greater than 0.5, its predicted outcome is $y = 1$;
   - if $\hat\mu_i$ for a new observation is less than or equal to 0.5, its predicted outcome is $y = 0$.

- This approach is reasonable when (a) it is equally likely in the population of interest that the outcomes 0 and 1 will occur, and (b) the costs of incorrectly predicting 0 and 1 are approximately the same.

2. Find the best cutoff for the data set on which the logistic regression model is based:
   - evaluate different cutoff values and, for each cutoff value, calculate the proportion of observations that are incorrectly predicted;
   - select the cutoff value that minimizes the proportion of incorrectly predicted outcomes.

- This approach is reasonable when (a) the data set is a random sample from the population of interest, and (b) the costs of incorrectly predicting 0 and 1 are the same.

Example:

$$\text{logit}(\mu_i) = \beta_0 + \beta_1\text{gre}_i + \beta_2\text{gpa}_i + \beta_3 x_{2i} + \beta_4 x_{3i} + \beta_5 x_{4i}$$

If we use the cutoff of 0.5, we get the following results:

> table(fitted(myfit)>.5, ex.data$admit)
          0   1
  FALSE 254  97
  TRUE   19  30

> t1 <- table(fitted(myfit)>.5, ex.data$admit)
> (t1[2,1] + t1[1,2]) / sum(t1)
[1] 0.29

Recall that 1 means admission, 0 no admission. We misclassify people (97+19)/400 = 29% of the time.

Instead, let's try finding a classification rule that minimizes misclassification in our data set.

> for(p in seq(.35,.9,.05))
+ {t1 <- table(fitted(myfit)>p, ex.data$admit)
+  cat(p, (t1[2,1]+t1[1,2])/sum(t1), "\n")}
0.35 0.325
0.4 0.3
0.45 0.3075
0.5 0.29
0.55 0.29
0.6 0.3025
0.65 0.3075
0.7 0.315
Error in t1[2, 1] : subscript out of bounds
> max(fitted(myfit))
[1] 0.7384082

It looks like we can't do much better than 29%. (The error occurs once the cutoff exceeds the largest fitted probability; see the sketch below.)
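The subscript error above arises because, once the cutoff exceeds the largest fitted probability, the logical index has only one level and the table collapses to a single row. One way to make the loop robust is to force both levels of the prediction; a sketch:

for (p in seq(0.35, 0.9, 0.05)) {
  pred <- factor(fitted(myfit) > p, levels = c(FALSE, TRUE))
  t1 <- table(pred, ex.data$admit)               # keeps both rows even if one is empty
  cat(p, (t1[2, 1] + t1[1, 2]) / sum(t1), "\n")  # misclassification rate at cutoff p
}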

Receiver operating characteristic (ROC) curve

The ROC curve is a plot of sensitivity against 1 − specificity.

- The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.

- The true positive rate is also known as sensitivity. The false positive rate is also known as the fall-out or probability of false alarm, and can be calculated as 1 − specificity.

- The ROC curve is thus the sensitivity as a function of the fall-out.

# ROC curve
p1 <- matrix(0, nrow = 12, ncol = 3)
i <- 1
for (p in seq(0.15, .7, .05)) {
  t1 <- table(fitted(myfit) > p, ex.data$admit)
  p1[i,] <- c(p, (t1[2,2]) / sum(t1[,2]), (t1[1,1]) / sum(t1[,1]))
  i <- i + 1
}
plot(1 - p1[,3], p1[,2], type = "o",
     xlab = "1-specificity/false positive rate",
     ylab = "sensitivity/true positive rate")
# p1[,2] true positive rate
# p1[,3] true negative rate
# 1-p1[,3] false positive rate

[Figure 3: ROC curve for myfit, plotting sensitivity (true positive rate) against 1 − specificity (false positive rate)]

Comments:

- The area under the ROC curve can give us insight into the predictive ability of the model.

- If it is equal to 0.5 (an ROC curve along the diagonal), the model can be thought of as predicting at random.

- Values close to 1 indicate that the model has good predictive ability.

- The ROC curve can also be thought of as a plot of the power as a function of the Type I error of the decision rule (when computed from just a sample of the population, the plotted quantities are estimators of these).

Somers' Dxy

A similar measure is Somers' $D_{xy}$, the rank correlation between predicted probabilities and observed outcomes. It is given by

$$D_{xy} = 2(c - 0.5)$$

where $c$ is the area under the ROC curve.

- When $D_{xy} = 0$, the model is making random predictions.

- When $D_{xy} = 1$, the model discriminates perfectly.

> library(Hmisc)
> somers2(fitted(myfit), ex.data$admit)
          C         Dxy           n     Missing
  0.6928413   0.3856826 400.0000000   0.0000000

The area under the ROC curve is 0.6928413, and $D_{xy} = 0.3856826$.
