Generalized Linear Models

Lecture 6.0: Generalized Least Squares

1 Outline

1 Introduction

2 Generalized least squares (GLS)

3 Weighted least squares (WLS)

4 Iteratively reweighted least squares (IRWLS)

2 Introduction

Think back to the assumptions we made for ordinary least squares. What if the error variances are not constant? Or the errors are not independent? In those cases we turn to generalized or weighted least squares (GLS or WLS).


3 GLS

Now we have the model
$$y = X\beta + \varepsilon, \qquad E(\varepsilon) = 0, \qquad \mathrm{Var}(\varepsilon) = \sigma^2 V.$$
The GLS estimate can be obtained through a matrix factorization $V = KK'$. Define
$$z = K^{-1}y, \qquad B = K^{-1}X, \qquad g = K^{-1}\varepsilon.$$
Then $z = B\beta + g$, with $E(g) = 0$ and $\mathrm{Var}(g) = \sigma^2 I$.
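As an illustration, the transformation can be computed from a Cholesky factor of $V$. This is a minimal R sketch, assuming a known positive-definite matrix V and data X and y (hypothetical names, not from the slides):

# Sketch of the GLS whitening transform; V, X, y are assumed given.
K <- t(chol(V))       # lower-triangular K with V = K %*% t(K)
z <- solve(K, y)      # z = K^{-1} y
B <- solve(K, X)      # B = K^{-1} X
fit <- lm(z ~ B - 1)  # OLS on transformed data; "-1" because X already
coef(fit)             # carries its own intercept column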

4 GLS

z = Bβ + g

Now we can use OLS to estimate β:

$$\begin{aligned}
S(\beta) &= (z - B\beta)'(z - B\beta) \\
&= (K^{-1}y - B\beta)'(K^{-1}y - B\beta) \\
&= (y - X\beta)'V^{-1}(y - X\beta).
\end{aligned}$$
Taking the derivative with respect to $\beta$ and setting it to zero, we get the normal equations
$$(X'V^{-1}X)\beta = X'V^{-1}y.$$
The GLS estimator of $\beta$ is
$$\hat{\beta} = (X'V^{-1}X)^{-1}X'V^{-1}y.$$
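The closed form can also be computed directly. A minimal sketch under the same assumptions (V, X, y given):

# Direct computation of the GLS estimator (sketch).
Vinv <- solve(V)
beta_gls <- solve(t(X) %*% Vinv %*% X, t(X) %*% Vinv %*% y)
# solve(A, b) solves A beta = b, avoiding an explicit matrix inverse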

5 GLS in R

nlme::gls(y ~ x, data = dat, weights = weights, ...)
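Note that the weights argument of nlme::gls expects a variance-function (varFunc) object or a one-sided formula rather than a numeric vector. A sketch with a hypothetical data frame dat:

library(nlme)
# Error variance proportional to x: Var(e_i) = sigma^2 * x_i
fit1 <- gls(y ~ x, data = dat, weights = varFixed(~ x))
# AR(1)-correlated errors instead of (or in addition to) unequal variances
fit2 <- gls(y ~ x, data = dat, correlation = corAR1())
summary(fit1)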


6 WLS

Sometimes the errors are uncorrelated but have unequal variances. In this case we use weighted least squares (WLS). The error variance matrix has the form
$$\sigma^2 V = \sigma^2
\begin{pmatrix}
1/w_1 & & & 0 \\
& 1/w_2 & & \\
& & \ddots & \\
0 & & & 1/w_n
\end{pmatrix}.$$
Let $W = V^{-1}$, the diagonal matrix with elements $w_i$. The WLS estimator of $\beta$ is
$$\hat{\beta} = (X'WX)^{-1}X'Wy.$$
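Written out in R, a minimal sketch assuming a weight vector w, design matrix X, and response y:

# WLS via the normal equations (sketch; w, X, y assumed given).
W <- diag(w)  # W = V^{-1}: diagonal matrix of weights
beta_wls <- solve(t(X) %*% W %*% X, t(X) %*% W %*% y)
# Equivalent to coef(lm(y ~ X - 1, weights = w))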

7 WLS

In WLS, observations with large variances get smaller weights than observations with small variances. Examples of possible weights:

- An error variance proportional to a predictor $x_i$ suggests $w_i = x_i^{-1}$.
- When an observation $y_i$ is an average of $n_i$ observations at that value of the explanatory variable, then $\mathrm{Var}(y_i) = \sigma^2 / n_i$, which suggests $w_i = n_i$.

8 WLS in R

lm(y ~ x, data = dat, weights = weights, ...)
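For example, both weight choices from the previous slide, with a hypothetical data frame dat containing columns y, x, and group sizes n_i (the weights argument of lm is evaluated inside data):

# Error variance proportional to x: w_i = 1/x_i
fit_a <- lm(y ~ x, data = dat, weights = 1/x)
# y_i is the average of n_i raw observations: w_i = n_i
fit_b <- lm(y ~ x, data = dat, weights = n_i)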


9 IRWLS

Sometimes we have prior information about the weights $w_i$; at other times we might find, looking at residual plots, that the variability is a function of one or more explanatory variables. In these cases we have to estimate the weights, perform the analysis, re-estimate the weights based on the results, and perform the analysis again. This procedure is called iteratively reweighted least squares (IRWLS).

10 IRWLS example

Suppose $\mathrm{Var}(\varepsilon_i) = \gamma_0 + \gamma_1 x_{i1}$:

1 Start with $w_i = 1$.
2 Use OLS to estimate $\beta$.
3 Use the residuals to estimate $\gamma$, perhaps regressing $\hat{\varepsilon}_i^2$ on $x_{i1}$.
4 Recompute the $w_i$ and go back to step 2.
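A minimal sketch of this loop in R, assuming vectors y and x1; the fixed iteration count and the guard against non-positive fitted variances are illustrative choices, not part of the slides:

# IRWLS sketch for Var(e_i) = gamma0 + gamma1 * x_i1 (y, x1 assumed given).
w <- rep(1, length(y))              # step 1: start with w_i = 1
for (iter in 1:10) {
  fit  <- lm(y ~ x1, weights = w)   # step 2: (weighted) least squares fit
  vfit <- lm(resid(fit)^2 ~ x1)     # step 3: estimate gamma from squared residuals
  vhat <- pmax(fitted(vfit), 1e-8)  # keep fitted variances positive
  w    <- 1 / vhat                  # step 4: recompute weights and repeat
}
coef(fit)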
