
Lecture 10: Linear Mixed Models (Linear Models with Random Effects)

Claudia Czado

TU München

Overview

References: West, Welch, and Galecki (2007); Fahrmeir, Kneib, and Lang (2007, Chapter 6)

• Introduction

• Likelihood Inference for Linear Mixed Models
  – Parameter Estimation for Known Covariance Structure
  – Parameter Estimation for Unknown Covariance Structure
  – Confidence Intervals and Hypothesis Tests

Introduction

So far: independent response variables. Often, however, the data are:

• Clustered Data
  – the response is measured for each subject
  – each subject belongs to a group of subjects (cluster)
  Ex.:
  – math scores of students grouped by classroom (the classroom forms the cluster)
  – birth weights of rats grouped by litter (the litter forms the cluster)

• Longitudinal Data
  – the response is measured at several time points
  – the number of time points is not too large (in contrast to time series)
  Ex.: sales of a product in each month of a year (12 measurements)

Fixed and Random Factors/Effects

How can we extend the linear model to allow for such dependent data structures?

fixed factor = qualitative covariate (e.g. gender, age group)

fixed effect = quantitative covariate (e.g. age)

random factor = qualitative variable whose levels are randomly sampled from a population of levels being studied
Ex.: 20 supermarkets were selected and their number of cashiers was reported:
  – 10 supermarkets with 2 cashiers
  – 5 supermarkets with 1 cashier
  – 5 supermarkets with 5 cashiers
These are the observed levels of the random factor "number of cashiers".

random effect = quantitative variable whose levels are randomly sampled from a population of levels being studied Ex.: 20 supermarkets were selected and their size reported. These size values are random samples from the population of size values of all supermarkets.

Modeling Clustered Data

Y_{ij} = response of the j-th member of cluster i, i = 1, ..., m, j = 1, ..., n_i
m = number of clusters
n_i = size of cluster i
x_{ij} = covariate vector of the j-th member of cluster i for the fixed effects, x_{ij} \in \mathbb{R}^p
\beta = fixed-effects parameter, \beta \in \mathbb{R}^p
u_{ij} = covariate vector of the j-th member of cluster i for the random effects, u_{ij} \in \mathbb{R}^q
\gamma_i = random-effects parameter of cluster i, \gamma_i \in \mathbb{R}^q

Model:

Y_{ij} = \underbrace{x_{ij}^t \beta}_{\text{fixed}} + \underbrace{u_{ij}^t \gamma_i}_{\text{random}} + \underbrace{\epsilon_{ij}}_{\text{random}}, \quad i = 1, \ldots, m; \; j = 1, \ldots, n_i

Mixed Linear Model (LMM) I

Assumptions:

\gamma_i \sim N_q(0, D), \quad D \in \mathbb{R}^{q \times q}

\epsilon_i := \begin{pmatrix} \epsilon_{i1} \\ \vdots \\ \epsilon_{in_i} \end{pmatrix} \sim N_{n_i}(0, \Sigma_i), \quad \Sigma_i \in \mathbb{R}^{n_i \times n_i}

\gamma_1, \ldots, \gamma_m, \epsilon_1, \ldots, \epsilon_m independent

D = covariance matrix of the random effects \gamma_i

\Sigma_i = covariance matrix of the error vector \epsilon_i in cluster i

Mixed Linear Model (LMM) II

Matrix Notation:

X_i := \begin{pmatrix} x_{i1}^t \\ \vdots \\ x_{in_i}^t \end{pmatrix} \in \mathbb{R}^{n_i \times p}, \quad U_i := \begin{pmatrix} u_{i1}^t \\ \vdots \\ u_{in_i}^t \end{pmatrix} \in \mathbb{R}^{n_i \times q}, \quad Y_i := \begin{pmatrix} Y_{i1} \\ \vdots \\ Y_{in_i} \end{pmatrix} \in \mathbb{R}^{n_i}

Y_i = X_i \beta + U_i \gamma_i + \epsilon_i, \quad i = 1, \ldots, m

\gamma_i \sim N_q(0, D), \quad \epsilon_i \sim N_{n_i}(0, \Sigma_i), \quad \gamma_1, \ldots, \gamma_m, \epsilon_1, \ldots, \epsilon_m \text{ independent} \qquad (1)

Modeling Longitudinal Data

Y_{ij} = response of subject i at the j-th measurement, i = 1, ..., m, j = 1, ..., n_i
n_i = number of measurements for subject i
m = number of subjects
x_{ij} = covariate vector of subject i at the j-th measurement for the fixed effects \beta \in \mathbb{R}^p
u_{ij} = covariate vector of subject i at the j-th measurement for the random effects \gamma_i \in \mathbb{R}^q

In matrix notation:

Y_i = X_i \beta + U_i \gamma_i + \epsilon_i
\gamma_i \sim N_q(0, D), \quad \epsilon_i \sim N_{n_i}(0, \Sigma_i), \quad \gamma_1, \ldots, \gamma_m, \epsilon_1, \ldots, \epsilon_m \text{ independent}

Remark: The general form of the mixed linear model is the same for clustered and longitudinal observations.

Matrix Formulation of the Linear Mixed Model

Y := \begin{pmatrix} Y_1 \\ \vdots \\ Y_m \end{pmatrix} \in \mathbb{R}^n, \quad \text{where } n := \sum_{i=1}^m n_i

X := \begin{pmatrix} X_1 \\ \vdots \\ X_m \end{pmatrix} \in \mathbb{R}^{n \times p}, \quad \beta \in \mathbb{R}^p, \quad G := \begin{pmatrix} D & & \\ & \ddots & \\ & & D \end{pmatrix} \in \mathbb{R}^{mq \times mq}

U := \begin{pmatrix} U_1 & 0_{n_1 \times q} & \cdots & 0_{n_1 \times q} \\ 0_{n_2 \times q} & U_2 & & \vdots \\ \vdots & & \ddots & \\ 0_{n_m \times q} & \cdots & & U_m \end{pmatrix} \in \mathbb{R}^{n \times mq}, \quad \text{where } 0_{n_i \times q} \text{ is the zero matrix in } \mathbb{R}^{n_i \times q}

\gamma := \begin{pmatrix} \gamma_1 \\ \vdots \\ \gamma_m \end{pmatrix} \in \mathbb{R}^{mq}, \quad \epsilon := \begin{pmatrix} \epsilon_1 \\ \vdots \\ \epsilon_m \end{pmatrix} \in \mathbb{R}^n, \quad R := \begin{pmatrix} \Sigma_1 & & 0 \\ & \ddots & \\ 0 & & \Sigma_m \end{pmatrix} \in \mathbb{R}^{n \times n}

Linear Mixed Model (LMM) in matrix formulation

With this, the linear mixed model (1) can be rewritten as

Y = X\beta + U\gamma + \epsilon \qquad (2)

where \begin{pmatrix} \gamma \\ \epsilon \end{pmatrix} \sim N_{mq+n} \left( \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} G & 0_{mq \times n} \\ 0_{n \times mq} & R \end{pmatrix} \right)
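To make the matrix formulation concrete, here is a minimal simulation sketch of model (2) in Python; the random-intercept design (q = 1), the dimensions, and all parameter values are hypothetical illustrations, not part of the lecture.

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(1)

m, n_i, q = 5, 4, 1                    # hypothetical: 5 clusters of size 4, random intercept
n = m * n_i

X = np.column_stack([np.ones(n), rng.normal(size=n)])   # fixed-effects design, n x p
U = block_diag(*[np.ones((n_i, q)) for _ in range(m)])  # random-effects design, n x mq

beta = np.array([1.0, 2.0])            # fixed effects (hypothetical values)
D = np.array([[0.5]])                  # q x q covariance of each gamma_i
G = block_diag(*([D] * m))             # mq x mq block diagonal
R = 0.3 * np.eye(n)                    # Sigma_i = 0.3 * I_{n_i}, stacked

gamma = rng.multivariate_normal(np.zeros(m * q), G)     # gamma ~ N_mq(0, G)
eps = rng.multivariate_normal(np.zeros(n), R)           # eps ~ N_n(0, R)
Y = X @ beta + U @ gamma + eps                          # model (2)

V = U @ G @ U.T + R                    # marginal covariance of Y, cf. (5)
```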

Remarks:

• LMM (2) can be rewritten as a two-level hierarchical model:

Y \mid \gamma \sim N_n(X\beta + U\gamma, R) \qquad (3)
\gamma \sim N_{mq}(0, G) \qquad (4)

• Let Y = X\beta + \epsilon^*, where

\epsilon^* := U\gamma + \epsilon = \underbrace{\begin{pmatrix} U & I_{n \times n} \end{pmatrix}}_{A} \begin{pmatrix} \gamma \\ \epsilon \end{pmatrix} \overset{(2)}{\Rightarrow} \epsilon^* \sim N_n(0, V), \quad \text{where}

V = A \begin{pmatrix} G & 0 \\ 0 & R \end{pmatrix} A^t = \begin{pmatrix} U & I_{n \times n} \end{pmatrix} \begin{pmatrix} G & 0 \\ 0 & R \end{pmatrix} \begin{pmatrix} U^t \\ I_{n \times n} \end{pmatrix} = \begin{pmatrix} UG & R \end{pmatrix} \begin{pmatrix} U^t \\ I_{n \times n} \end{pmatrix} = UGU^t + R

Therefore (2) implies the marginal model

Y = X\beta + \epsilon^*, \quad \epsilon^* \sim N_n(0, V) \qquad (5)

• (2) or (3)+(4) implies (5); however, (5) does not imply (3)+(4).
  ⇒ If one is only interested in estimating β, one can use the ordinary linear model (5).
  If one is interested in estimating β and γ, one has to use the model (3)+(4).

Likelihood Inference for LMM: 1) Estimation of β and γ for known G and R

Estimation of β: Using (5), we obtain as the MLE, or equivalently the weighted LSE, of β

\tilde{\beta} := (X^t V^{-1} X)^{-1} X^t V^{-1} Y \qquad (6)

Recall: Y = X\beta + \epsilon, \; \epsilon \sim N_n(0, \Sigma), \; \Sigma \text{ known}, \; \Sigma = \Sigma^{1/2} (\Sigma^{1/2})^t

\Rightarrow \Sigma^{-1/2} Y = \Sigma^{-1/2} X \beta + \Sigma^{-1/2} \epsilon \qquad (7)

where \Sigma^{-1/2} \epsilon \sim N_n(0, \Sigma^{-1/2} \Sigma (\Sigma^{-1/2})^t) = N_n(0, I_n).

\Rightarrow \text{LSE of } \beta \text{ in } (7): \quad \hat{\beta} = \left( (\Sigma^{-1/2} X)^t \Sigma^{-1/2} X \right)^{-1} (\Sigma^{-1/2} X)^t \Sigma^{-1/2} Y = (X^t \Sigma^{-1} X)^{-1} X^t \Sigma^{-1} Y \qquad (8)

This estimate is called the weighted LSE.

Exercise: Show that (8) is the MLE in Y = X\beta + \epsilon, \; \epsilon \sim N_n(0, \Sigma).
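As a quick numerical check of (7) and (8), a sketch on simulated data (all values hypothetical): whitening with a Cholesky factor of Σ and running the ordinary LSE reproduces the weighted LSE.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
X = np.column_stack([np.ones(n), rng.normal(size=n)])
Sigma = 0.5 * np.eye(n) + 0.1                       # a known covariance matrix (hypothetical)
y = X @ np.array([1.0, 2.0]) + rng.multivariate_normal(np.zeros(n), Sigma)

# Whitening as in (7): Sigma = L L^t (Cholesky), so Sigma^{-1/2} acts as L^{-1}
L = np.linalg.cholesky(Sigma)
Xw = np.linalg.solve(L, X)                          # L^{-1} X
yw = np.linalg.solve(L, y)                          # L^{-1} y
beta_ols = np.linalg.lstsq(Xw, yw, rcond=None)[0]   # ordinary LSE in the whitened model (7)

# Weighted LSE (8): (X^t Sigma^{-1} X)^{-1} X^t Sigma^{-1} y
Si = np.linalg.inv(Sigma)
beta_wls = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)

assert np.allclose(beta_ols, beta_wls)              # (7)-OLS and (8) coincide
```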

Estimation of γ:

From (3) and (4) it follows that Y \sim N_n(X\beta, V) and \gamma \sim N_{mq}(0, G).

Cov(Y, \gamma) = Cov(X\beta + U\gamma + \epsilon, \gamma) = \underbrace{Cov(X\beta, \gamma)}_{=0} + U \underbrace{Var(\gamma)}_{=G} + \underbrace{Cov(\epsilon, \gamma)}_{=0} = UG

\Rightarrow \begin{pmatrix} Y \\ \gamma \end{pmatrix} \sim N_{n+mq} \left( \begin{pmatrix} X\beta \\ 0 \end{pmatrix}, \begin{pmatrix} V & UG \\ GU^t & G \end{pmatrix} \right)

Recall: X = \begin{pmatrix} Y \\ Z \end{pmatrix} \sim N_p \left( \begin{pmatrix} \mu_Y \\ \mu_Z \end{pmatrix}, \begin{pmatrix} \Sigma_Y & \Sigma_{YZ} \\ \Sigma_{ZY} & \Sigma_Z \end{pmatrix} \right) \Rightarrow Z \mid Y \sim N(\mu_{Z|Y}, \Sigma_{Z|Y}) \text{ with}

\mu_{Z|Y} = \mu_Z + \Sigma_{ZY} \Sigma_Y^{-1} (Y - \mu_Y), \quad \Sigma_{Z|Y} = \Sigma_Z - \Sigma_{ZY} \Sigma_Y^{-1} \Sigma_{YZ}

E(\gamma \mid Y) = 0 + GU^t V^{-1} (Y - X\beta) = GU^t V^{-1} (Y - X\beta) \qquad (9)

is the best linear unbiased predictor (BLUP) of γ.

Therefore \tilde{\gamma} := GU^t V^{-1} (Y - X\tilde{\beta}) is the empirical BLUP (EBLUP).
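A sketch of how (6) and (9) are evaluated numerically, on simulated data of the kind used in the earlier sketch (all values hypothetical):

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(1)
m, n_i, q = 5, 4, 1
n = m * n_i
X = np.column_stack([np.ones(n), rng.normal(size=n)])
U = block_diag(*[np.ones((n_i, q)) for _ in range(m)])
G = block_diag(*([np.array([[0.5]])] * m))
R = 0.3 * np.eye(n)
V = U @ G @ U.T + R
Y = (X @ np.array([1.0, 2.0])
     + U @ rng.multivariate_normal(np.zeros(m * q), G)
     + rng.multivariate_normal(np.zeros(n), R))

Vi = np.linalg.inv(V)
beta_tilde = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ Y)   # weighted LSE (6)
gamma_tilde = G @ U.T @ Vi @ (Y - X @ beta_tilde)          # EBLUP: (9) with beta_tilde plugged in
```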

Joint maximization of the log likelihood of (Y^t, \gamma^t)^t with respect to (β, γ):

f(y, \gamma) = f(y \mid \gamma) \cdot f(\gamma) \overset{(3)+(4)}{\propto} \exp\{-\tfrac{1}{2} (y - X\beta - U\gamma)^t R^{-1} (y - X\beta - U\gamma)\} \cdot \exp\{-\tfrac{1}{2} \gamma^t G^{-1} \gamma\}

\Rightarrow \ln f(y, \gamma) = -\tfrac{1}{2} (y - X\beta - U\gamma)^t R^{-1} (y - X\beta - U\gamma) \underbrace{- \tfrac{1}{2} \gamma^t G^{-1} \gamma}_{\text{penalty term for } \gamma} + \text{constants ind. of } (\beta, \gamma)

So it is enough to minimize

Q(\beta, \gamma) := (y - X\beta - U\gamma)^t R^{-1} (y - X\beta - U\gamma) + \gamma^t G^{-1} \gamma
  = y^t R^{-1} y - 2\beta^t X^t R^{-1} y + 2\beta^t X^t R^{-1} U \gamma - 2\gamma^t U^t R^{-1} y + \beta^t X^t R^{-1} X \beta + \gamma^t U^t R^{-1} U \gamma + \gamma^t G^{-1} \gamma

Recall:

f(\alpha) := \alpha^t b = \sum_{j=1}^n \alpha_j b_j, \quad \frac{\partial}{\partial \alpha_i} f(\alpha) = b_i, \quad \frac{\partial}{\partial \alpha} f(\alpha) = b

g(\alpha) := \alpha^t A \alpha = \sum_i \sum_j \alpha_i \alpha_j a_{ij}

\frac{\partial}{\partial \alpha_i} g(\alpha) = 2\alpha_i a_{ii} + \sum_{j=1, j \neq i}^n \alpha_j a_{ij} + \sum_{j=1, j \neq i}^n \alpha_j a_{ji} = 2 \sum_{j=1}^n \alpha_j a_{ij} \quad \text{(for symmetric } A\text{)}

\frac{\partial}{\partial \alpha} g(\alpha) = 2 \begin{pmatrix} A_1 \alpha \\ \vdots \\ A_n \alpha \end{pmatrix} = 2A\alpha, \quad \text{where } A_i \text{ is the } i\text{-th row of } A

Mixed Model Equation

\frac{\partial}{\partial \beta} Q(\beta, \gamma) = -2X^t R^{-1} y + 2X^t R^{-1} U \gamma + 2X^t R^{-1} X \beta \overset{\text{Set}}{=} 0

\frac{\partial}{\partial \gamma} Q(\beta, \gamma) = -2U^t R^{-1} X \beta - 2U^t R^{-1} y + 2U^t R^{-1} U \gamma + 2G^{-1} \gamma \overset{\text{Set}}{=} 0

\Leftrightarrow \quad X^t R^{-1} X \tilde{\beta} + X^t R^{-1} U \tilde{\gamma} = X^t R^{-1} y
\qquad\;\; U^t R^{-1} X \tilde{\beta} + (U^t R^{-1} U + G^{-1}) \tilde{\gamma} = U^t R^{-1} y

\Leftrightarrow \quad \begin{pmatrix} X^t R^{-1} X & X^t R^{-1} U \\ U^t R^{-1} X & U^t R^{-1} U + G^{-1} \end{pmatrix} \begin{pmatrix} \tilde{\beta} \\ \tilde{\gamma} \end{pmatrix} = \begin{pmatrix} X^t R^{-1} y \\ U^t R^{-1} y \end{pmatrix} \qquad (10)

Exercise: Show that \tilde{\beta}, \tilde{\gamma} defined by (6) and (9), respectively, solve (10).

Define C := \begin{pmatrix} X & U \end{pmatrix}, \quad B := \begin{pmatrix} 0 & 0 \\ 0 & G^{-1} \end{pmatrix}

\Rightarrow C^t R^{-1} C = \begin{pmatrix} X^t \\ U^t \end{pmatrix} R^{-1} \begin{pmatrix} X & U \end{pmatrix} = \begin{pmatrix} X^t R^{-1} \\ U^t R^{-1} \end{pmatrix} \begin{pmatrix} X & U \end{pmatrix} = \begin{pmatrix} X^t R^{-1} X & X^t R^{-1} U \\ U^t R^{-1} X & U^t R^{-1} U \end{pmatrix}

\Rightarrow (10) \Leftrightarrow (C^t R^{-1} C + B) \begin{pmatrix} \tilde{\beta} \\ \tilde{\gamma} \end{pmatrix} = C^t R^{-1} y \Leftrightarrow \begin{pmatrix} \tilde{\beta} \\ \tilde{\gamma} \end{pmatrix} = (C^t R^{-1} C + B)^{-1} C^t R^{-1} y
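The compact form lends itself to a numerical check; this sketch (hypothetical data) solves the mixed model equations (10) via C and B and confirms that the solution agrees with (6) and (9), which is exactly the claim of the exercise above.

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(1)
m, n_i, q, p = 5, 4, 1, 2
n = m * n_i
X = np.column_stack([np.ones(n), rng.normal(size=n)])
U = block_diag(*[np.ones((n_i, q)) for _ in range(m)])
G = block_diag(*([np.array([[0.5]])] * m))
R = 0.3 * np.eye(n)
y = rng.normal(size=n)                       # any response vector

# Mixed model equations (10): (C^t R^{-1} C + B) (beta, gamma)^t = C^t R^{-1} y
C = np.hstack([X, U])
B = block_diag(np.zeros((p, p)), np.linalg.inv(G))
Ri = np.linalg.inv(R)
sol = np.linalg.solve(C.T @ Ri @ C + B, C.T @ Ri @ y)
beta_tilde, gamma_tilde = sol[:p], sol[p:]

# Agreement with (6) and (9)
Vi = np.linalg.inv(U @ G @ U.T + R)
assert np.allclose(beta_tilde, np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y))
assert np.allclose(gamma_tilde, G @ U.T @ Vi @ (y - X @ beta_tilde))
```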

2) Estimation for unknown covariance structure

We assume now in the marginal model (5)

Y = X\beta + \epsilon^*, \quad \epsilon^* \sim N_n(0, V) \quad \text{with } V = UGU^t + R,

that G and R are only known up to a parameter vector ϑ, i.e. we write

V(ϑ) = U G(ϑ) U^t + R(ϑ)

ML Estimation in the extended marginal model

Y = X\beta + \epsilon^*, \quad \epsilon^* \sim N_n(0, V(ϑ)) \quad \text{with } V(ϑ) = U G(ϑ) U^t + R(ϑ)

Log likelihood for (β, ϑ):

l(\beta, ϑ) = -\tfrac{1}{2} \{ \ln|V(ϑ)| + (y - X\beta)^t V(ϑ)^{-1} (y - X\beta) \} + \text{const. ind. of } (\beta, ϑ) \qquad (11)

If we maximize (11) for fixed ϑ with respect to β, we get

\tilde{\beta}(ϑ) := (X^t V(ϑ)^{-1} X)^{-1} X^t V(ϑ)^{-1} y

Then the profile log likelihood is

l_p(ϑ) := l(\tilde{\beta}(ϑ), ϑ) = -\tfrac{1}{2} \{ \ln|V(ϑ)| + (y - X\tilde{\beta}(ϑ))^t V(ϑ)^{-1} (y - X\tilde{\beta}(ϑ)) \}

Maximizing l_p(ϑ) with respect to ϑ gives the MLE \hat{ϑ}_{ML}. However, \hat{ϑ}_{ML} is biased; this is why one often uses restricted ML (REML) estimation instead.
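A sketch of ML estimation of ϑ via l_p, for a hypothetical random-intercept model with ϑ = (σ_γ², σ_ε²), i.e. G(ϑ) = σ_γ² I_m and R(ϑ) = σ_ε² I_n; the log-scale parametrization and the choice of optimizer are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import block_diag
from scipy.optimize import minimize

rng = np.random.default_rng(2)
m, n_i = 20, 5
n = m * n_i
X = np.column_stack([np.ones(n), rng.normal(size=n)])
U = block_diag(*[np.ones((n_i, 1)) for _ in range(m)])
y = (X @ np.array([1.0, 2.0])
     + U @ rng.normal(0.0, np.sqrt(0.5), m)        # gamma_i ~ N(0, 0.5)
     + rng.normal(0.0, np.sqrt(0.3), n))           # eps ~ N(0, 0.3 * I_n)

def neg_profile_loglik(log_theta):
    s2_gamma, s2_eps = np.exp(log_theta)           # variances kept positive via log scale
    V = s2_gamma * (U @ U.T) + s2_eps * np.eye(n)  # V(theta) = U G(theta) U^t + R(theta)
    Vi = np.linalg.inv(V)
    beta = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)   # beta_tilde(theta)
    r = y - X @ beta
    _, logdet = np.linalg.slogdet(V)
    return 0.5 * (logdet + r @ Vi @ r)             # -l_p(theta), up to a constant

res = minimize(neg_profile_loglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
theta_ml = np.exp(res.x)                           # MLE of (sigma_gamma^2, sigma_eps^2)
```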

Restricted ML Estimation in the extended marginal model

Here we use for the estimation of ϑ the marginal log likelihood:

l_R(ϑ) := \ln \left( \int L(\beta, ϑ) \, d\beta \right)

\int L(\beta, ϑ) \, d\beta = \frac{|V(ϑ)|^{-1/2}}{(2\pi)^{n/2}} \int \exp\{-\tfrac{1}{2} (y - X\beta)^t V(ϑ)^{-1} (y - X\beta)\} \, d\beta

Consider:

(y - X\beta)^t V(ϑ)^{-1} (y - X\beta) = \beta^t \underbrace{X^t V(ϑ)^{-1} X}_{A(ϑ)} \beta - 2 y^t V(ϑ)^{-1} X \beta + y^t V(ϑ)^{-1} y
  = (\beta - B(ϑ)y)^t A(ϑ) (\beta - B(ϑ)y) + y^t [V(ϑ)^{-1} - B(ϑ)^t A(ϑ) B(ϑ)] y

where B(ϑ) := A(ϑ)^{-1} X^t V(ϑ)^{-1}

(Note that B(ϑ)^t A(ϑ) = V(ϑ)^{-1} X A(ϑ)^{-1} A(ϑ) = V(ϑ)^{-1} X.)

Therefore we have

\int L(\beta, ϑ) \, d\beta = \frac{|V(ϑ)|^{-1/2}}{(2\pi)^{n/2}} \exp\{-\tfrac{1}{2} y^t [V(ϑ)^{-1} - B(ϑ)^t A(ϑ) B(ϑ)] y\} \cdot \underbrace{\int \exp\{-\tfrac{1}{2} (\beta - B(ϑ)y)^t A(ϑ) (\beta - B(ϑ)y)\} \, d\beta}_{= (2\pi)^{p/2} |A(ϑ)^{-1}|^{1/2} \; \text{(the covariance is } A(ϑ)^{-1}!)} \qquad (12)

Now

(y - X\tilde{\beta}(ϑ))^t V(ϑ)^{-1} (y - X\tilde{\beta}(ϑ)) = y^t V(ϑ)^{-1} y - 2 y^t V(ϑ)^{-1} X \tilde{\beta}(ϑ) + \tilde{\beta}(ϑ)^t \underbrace{X^t V(ϑ)^{-1} X}_{A(ϑ)} \tilde{\beta}(ϑ)
  = y^t V(ϑ)^{-1} y - 2 y^t V(ϑ)^{-1} X B(ϑ) y + y^t B(ϑ)^t A(ϑ) B(ϑ) y
  = y^t V(ϑ)^{-1} y - y^t B(ϑ)^t A(ϑ) B(ϑ) y

Here we used:

\tilde{\beta}(ϑ) = (X^t V(ϑ)^{-1} X)^{-1} X^t V(ϑ)^{-1} y = A(ϑ)^{-1} X^t V(ϑ)^{-1} y = B(ϑ) y

and

B(ϑ)^t A(ϑ) B(ϑ) = V(ϑ)^{-1} X A(ϑ)^{-1} A(ϑ) B(ϑ) = V(ϑ)^{-1} X B(ϑ)

Therefore we can rewrite (12) as

\int L(\beta, ϑ) \, d\beta = \frac{|V(ϑ)|^{-1/2}}{(2\pi)^{n/2}} \exp\{-\tfrac{1}{2} (y - X\tilde{\beta}(ϑ))^t V(ϑ)^{-1} (y - X\tilde{\beta}(ϑ))\} \cdot (2\pi)^{p/2} |A(ϑ)^{-1}|^{1/2}, \quad \text{where } |A(ϑ)^{-1}| = \frac{1}{|A(ϑ)|}

\Rightarrow l_R(ϑ) = \ln \left( \int L(\beta, ϑ) \, d\beta \right)
  = -\tfrac{1}{2} \{ \ln|V(ϑ)| + (y - X\tilde{\beta}(ϑ))^t V(ϑ)^{-1} (y - X\tilde{\beta}(ϑ)) \} - \tfrac{1}{2} \ln|A(ϑ)| + C
  = l_p(ϑ) - \tfrac{1}{2} \ln|A(ϑ)| + C

Therefore the restricted ML estimate (REML) of ϑ is given by \hat{ϑ}_{REML}, which maximizes

l_R(ϑ) = l_p(ϑ) - \tfrac{1}{2} \ln|X^t V(ϑ)^{-1} X|
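The REML criterion differs from l_p only by the extra −½ ln|X^t V(ϑ)^{-1} X| term; a sketch, reusing the hypothetical setup (X, U, y, n, and imports) from the ML sketch above:

```python
def neg_restricted_loglik(log_theta):
    s2_gamma, s2_eps = np.exp(log_theta)
    V = s2_gamma * (U @ U.T) + s2_eps * np.eye(n)
    Vi = np.linalg.inv(V)
    beta = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)
    r = y - X @ beta
    _, logdetV = np.linalg.slogdet(V)
    _, logdetA = np.linalg.slogdet(X.T @ Vi @ X)         # A(theta) = X^t V(theta)^{-1} X
    return 0.5 * (logdetV + r @ Vi @ r) + 0.5 * logdetA  # -l_R(theta), up to a constant

res_reml = minimize(neg_restricted_loglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
theta_reml = np.exp(res_reml.x)                          # REML estimate of the two variances
```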

Summary: Estimation in the LMM with unknown covariance

For the linear mixed model

Y = X\beta + U\gamma + \epsilon, \quad \begin{pmatrix} \gamma \\ \epsilon \end{pmatrix} \sim N_{mq+n} \left( \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} G(ϑ) & 0_{mq \times n} \\ 0_{n \times mq} & R(ϑ) \end{pmatrix} \right)

with V(ϑ) = U G(ϑ) U^t + R(ϑ), the covariance parameter vector ϑ is estimated either by

\hat{ϑ}_{ML}, which maximizes l_p(ϑ) = -\tfrac{1}{2} \{ \ln|V(ϑ)| + (y - X\tilde{\beta}(ϑ))^t V(ϑ)^{-1} (y - X\tilde{\beta}(ϑ)) \}, where \tilde{\beta}(ϑ) = (X^t V(ϑ)^{-1} X)^{-1} X^t V(ϑ)^{-1} Y,

or by

\hat{ϑ}_{REML}, which maximizes l_R(ϑ) = l_p(ϑ) - \tfrac{1}{2} \ln|X^t V(ϑ)^{-1} X|.

The fixed effects β and random effects γ are estimated by

\hat{\beta} = (X^t \hat{V}^{-1} X)^{-1} X^t \hat{V}^{-1} Y
\hat{\gamma} = \hat{G} U^t \hat{V}^{-1} (Y - X\hat{\beta})

where \hat{V} = V(\hat{ϑ}_{ML}) or V(\hat{ϑ}_{REML}).

Special Case (dependence on ϑ is ignored to ease notation)

G = \begin{pmatrix} D & & \\ & \ddots & \\ & & D \end{pmatrix}, \quad U = \begin{pmatrix} U_1 & & \\ & \ddots & \\ & & U_m \end{pmatrix}, \quad R = \begin{pmatrix} \Sigma_1 & & \\ & \ddots & \\ & & \Sigma_m \end{pmatrix},

X = \begin{pmatrix} X_1 \\ \vdots \\ X_m \end{pmatrix}, \quad Y = \begin{pmatrix} Y_1 \\ \vdots \\ Y_m \end{pmatrix}

\Rightarrow V = UGU^t + R = \begin{pmatrix} U_1 D U_1^t + \Sigma_1 & & 0 \\ & \ddots & \\ 0 & & U_m D U_m^t + \Sigma_m \end{pmatrix} = \begin{pmatrix} V_1 & & \\ & \ddots & \\ & & V_m \end{pmatrix} \quad \text{(block diagonal), where } V_i := U_i D U_i^t + \Sigma_i

Define

\hat{V}_i := U_i D(\hat{ϑ}) U_i^t + \Sigma_i(\hat{ϑ}), \quad \text{where } \hat{ϑ} = \hat{ϑ}_{ML} \text{ or } \hat{ϑ}_{REML}

\Rightarrow \hat{\beta} = (X^t \hat{V}^{-1} X)^{-1} X^t \hat{V}^{-1} Y = \left( \sum_{i=1}^m X_i^t \hat{V}_i^{-1} X_i \right)^{-1} \left( \sum_{i=1}^m X_i^t \hat{V}_i^{-1} Y_i \right)

and

\hat{\gamma} = \hat{G} U^t \hat{V}^{-1} (Y - X\hat{\beta}) \quad \text{has components}

\hat{\gamma}_i = D(\hat{ϑ}) U_i^t \hat{V}_i^{-1} (y_i - X_i \hat{\beta})
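Exploiting this block structure, β̂ and the γ̂_i can be computed cluster by cluster without ever forming the full n × n matrix V̂; a minimal sketch under a hypothetical random-intercept setup (D̂, Σ̂_i, and the data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n_i, p, q = 20, 5, 2, 1
D_hat = np.array([[0.5]])                        # estimated D(theta_hat), hypothetical
s2_eps = 0.3                                     # estimated error variance, hypothetical
X_list = [np.column_stack([np.ones(n_i), rng.normal(size=n_i)]) for _ in range(m)]
U_list = [np.ones((n_i, q)) for _ in range(m)]
y_list = [rng.normal(size=n_i) for _ in range(m)]

# beta_hat = (sum_i X_i^t V_i^{-1} X_i)^{-1} (sum_i X_i^t V_i^{-1} y_i)
A = np.zeros((p, p))
b = np.zeros(p)
V_inv = []
for Xi, Ui, yi in zip(X_list, U_list, y_list):
    Vi = Ui @ D_hat @ Ui.T + s2_eps * np.eye(n_i)   # V_i = U_i D U_i^t + Sigma_i
    Vinv = np.linalg.inv(Vi)
    V_inv.append(Vinv)
    A += Xi.T @ Vinv @ Xi
    b += Xi.T @ Vinv @ yi
beta_hat = np.linalg.solve(A, b)

# gamma_hat_i = D U_i^t V_i^{-1} (y_i - X_i beta_hat), cluster by cluster
gamma_hat = [D_hat @ Ui.T @ Vinv @ (yi - Xi @ beta_hat)
             for Xi, Ui, yi, Vinv in zip(X_list, U_list, y_list, V_inv)]
```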

3) Confidence intervals and hypothesis tests

Since Y \sim N(X\beta, V(ϑ)) holds, an approximation to the covariance of \hat{\beta} = (X^t V^{-1}(\hat{ϑ}) X)^{-1} X^t V^{-1}(\hat{ϑ}) Y is given by

A(\hat{ϑ}) := (X^t V^{-1}(\hat{ϑ}) X)^{-1}

Note: here one assumes that V(\hat{ϑ}) is fixed and does not depend on Y. Therefore \hat{\sigma}_j^2 := [(X^t V^{-1}(\hat{ϑ}) X)^{-1}]_{jj} is considered an estimate of Var(\hat{\beta}_j). Therefore

\hat{\beta}_j \pm z_{1-\alpha/2} \sqrt{[(X^t V^{-1}(\hat{ϑ}) X)^{-1}]_{jj}}

gives an approximate 100(1 - α)% CI for β_j.

It is expected that [(X^t V^{-1}(\hat{ϑ}) X)^{-1}]_{jj} underestimates Var(\hat{\beta}_j), since the variation in \hat{ϑ} is not taken into account. A full Bayesian analysis using MCMC methods is preferable to these approximations.
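A sketch of the approximate CI computation; the helper and its inputs (β̂, X, V̂ = V(ϑ̂)) are hypothetical names, assuming the estimation step has already been run.

```python
import numpy as np
from scipy.stats import norm

def approx_ci(beta_hat, X, V_hat, alpha=0.05):
    """Approximate 100(1-alpha)% CIs for the components of beta,
    treating V(theta_hat) as fixed (variation in theta_hat ignored)."""
    A = np.linalg.inv(X.T @ np.linalg.solve(V_hat, X))  # (X^t V^{-1} X)^{-1}
    se = np.sqrt(np.diag(A))                            # sigma_hat_j
    z = norm.ppf(1 - alpha / 2)                         # z_{1-alpha/2}
    return np.column_stack([beta_hat - z * se, beta_hat + z * se])
```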

Under the assumption that \hat{\beta} is asymptotically normal with mean β and covariance matrix A(\hat{ϑ}), the usual hypothesis tests can be carried out; i.e. for

• H_0: β_j = 0 versus H_1: β_j ≠ 0

Reject H_0 \Leftrightarrow |t_j| = \left| \frac{\hat{\beta}_j}{\hat{\sigma}_j} \right| > z_{1-\alpha/2}

• H_0: Cβ = d versus H_1: Cβ ≠ d, where rank(C) = r

Reject H_0 \Leftrightarrow W := (C\hat{\beta} - d)^t (C A(\hat{ϑ}) C^t)^{-1} (C\hat{\beta} - d) > \chi^2_{1-\alpha, r} \quad \text{(Wald test)}

or

Reject H_0 \Leftrightarrow -2[l(\hat{\beta}_R, \hat{\gamma}_R) - l(\hat{\beta}, \hat{\gamma})] > \chi^2_{1-\alpha, r} \quad \text{(likelihood ratio test)}

where \hat{\beta}, \hat{\gamma} are the estimates in the unrestricted model and \hat{\beta}_R, \hat{\gamma}_R are the estimates in the restricted model (Cβ = d).
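A sketch of the Wald test of H_0: Cβ = d, assuming β̂ and A(ϑ̂) = (X^t V^{-1}(ϑ̂) X)^{-1} are available from the estimation step (all names hypothetical).

```python
import numpy as np
from scipy.stats import chi2

def wald_test(beta_hat, A_hat, C, d, alpha=0.05):
    """Wald test of H0: C beta = d, with A_hat = (X^t V^{-1}(theta_hat) X)^{-1}."""
    diff = C @ beta_hat - d
    W = diff @ np.linalg.solve(C @ A_hat @ C.T, diff)   # Wald statistic
    r = np.linalg.matrix_rank(C)
    reject = W > chi2.ppf(1 - alpha, df=r)              # compare with chi^2_{1-alpha, r}
    return W, reject
```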

References

Fahrmeir, L., T. Kneib, and S. Lang (2007). Regression: Modelle, Methoden und Anwendungen. Berlin: Springer-Verlag.

West, B., K. E. Welch, and A. T. Galecki (2007). Linear Mixed Models: A Practical Guide Using Statistical Software. Boca Raton: Chapman & Hall/CRC.
