Outline of GLMs

This is a short outline of GLM details, adapted from the book "Nonparametric Regression and Generalized Linear Models" by Green and Silverman.

Definitions

The responses $Y_i$ have density (or mass function)
$$p(y_i; \theta_i, \phi) = \exp\left( \frac{y_i\theta_i - b(\theta_i)}{\phi} + c(y_i; \phi) \right) \qquad (1)$$
which becomes a one-parameter exponential family in $\theta_i$ if the scale parameter $\phi$ is known. This is simpler than McCullagh and Nelder's version: they use $a(\phi)$ or even $a_i(\phi)$ in place of $\phi$. The $a_i(\phi)$ version is useful for data whose precision varies in a known way, as for example when $Y_i$ is an average of $n_i$ iid random variables.

There are two simple examples to keep in mind. Linear regression has $Y_i \sim N(\theta_i, \sigma^2)$ where $\theta_i = x_i'\beta$. Logistic regression has $Y_i \sim \mathrm{Bin}(m, \theta_i)$ where $\theta_i = (1 + \exp(-x_i'\beta))^{-1}$. It is a good exercise to write these models out as in (1), and identify $\phi$, $b$ and $c$. Consider also what happens for the models $Y_i \sim N(\theta_i, \sigma_i^2)$ and $Y_i \sim \mathrm{Bin}(m_i, \theta_i)$.

The mean and variance of $Y_i$ can be determined from its distribution given in equation (1). They must therefore be expressible in terms of $\phi$, $\theta_i$, and the functions $b(\cdot)$ and $c(\cdot;\cdot)$. In fact,
$$E(Y_i; \theta_i, \phi) = \mu_i = b'(\theta_i) \qquad (2)$$
with $b'$ denoting a first derivative, and
$$V(Y_i; \theta_i, \phi) = b''(\theta_i)\phi. \qquad (3)$$
Equations (2) and (3) can be derived by solving
$$E(\partial \log p(y_i; \theta_i, \phi)/\partial\theta_i) = 0$$
and
$$E(-\partial^2 \log p(y_i; \theta_i, \phi)/\partial\theta_i^2) = E\left((\partial \log p(y_i; \theta_i, \phi)/\partial\theta_i)^2\right).$$

To generalize linear models, one links the parameter $\theta_i$ to (a column of) predictors $x_i$ by
$$G(\mu_i) = x_i'\beta \qquad (4)$$
for a "link function" $G$. Thus
$$\theta_i = (b')^{-1}(G^{-1}(x_i'\beta)). \qquad (5)$$
The choice $G(\cdot) = (b')^{-1}(\cdot)$ is convenient because it makes $\theta_i = x_i'\beta$, with the result that $X'Y = \sum_i x_i y_i$ becomes a sufficient statistic for $\beta$ (given $\phi$). Of course real data are under no obligation to follow a distribution with this canonical link. But if they are close to this link then sufficient statistics allow a great reduction in data set size.

Estimation

The log likelihood function is
$$l(\theta, \phi) = \sum_{i=1}^n \left( \frac{y_i\theta_i - b(\theta_i)}{\phi} + c(y_i; \phi) \right) \qquad (6)$$
where $\theta$ subsumes all of the $\theta_i$. It could also be written as a function of $\beta$ and $\phi$ because (given the $x_i$) $\beta$ determines all the $\theta_i$. The main way of estimating $\beta$ is by maximizing (6). The fact that $G(\mu_i) = x_i'\beta$ suggests a crude approximate estimate: regress $G(y_i)$ on $x_i$, perhaps modifying $y_i$ in order to avoid violating range restrictions (such as taking $\log(0)$), and accounting for the differing variances of the observations.

Fisher scoring is the widely used technique for maximizing the GLM likelihood over $\beta$. The basic step is
$$\beta^{(k+1)} = \beta^{(k)} - \left( E\left( \frac{\partial^2 l}{\partial\beta\,\partial\beta'} \right) \right)^{-1} \frac{\partial l}{\partial\beta} \qquad (7)$$
where expectations are taken with $\beta = \beta^{(k)}$. This is like a Newton step, except that the Hessian of $l$ is replaced by its expectation. The expectation takes a simpler form. This is similar to (generalizes?) what happens in the Gauss-Newton algorithm, where individual observation Hessians multiplied by residuals are dropped. (It clearly would not work to replace $\partial l/\partial\beta$ by its expectation!)

Fisher scoring simplifies to
$$\beta^{(k+1)} = (X'WX)^{-1}X'Wz \qquad (8)$$
where $W$ is a diagonal matrix with
$$W_{ii} = \left(G'(\mu_i)^2\, b''(\theta_i)\right)^{-1} \qquad (9)$$
and
$$z_i = (Y_i - \mu_i)G'(\mu_i) + x_i'\beta. \qquad (10)$$
Both equations (9) and (10) use $\beta^{(k)}$ and the derived values of $\theta_i^{(k)}$ and $\mu_i^{(k)}$. The iteration (8) is known as "iteratively reweighted least squares", or IRLS. The weights $W_{ii}$ have the usual interpretation as reciprocal variances: $b''(\theta_i)$ is proportional to the variance of $Y_i$, and the $G'(\mu_i)$ factor in $z_i$ is squared in $W_{ii}$. The constant $\phi$ does not appear. More precisely: it appeared in (7) twice and cancelled out.
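To make the iteration (8)-(10) concrete, here is a minimal sketch of IRLS for logistic regression with the canonical (logit) link, using only NumPy. It is not from Green and Silverman; the function name, the convergence tolerance, and the synthetic data are all made up for illustration.

```python
import numpy as np

def irls_logistic(X, y, m=1, n_iter=25, tol=1e-8):
    """IRLS (equations (8)-(10)) for Y_i ~ Bin(m, p_i) with the logit link.

    With the canonical link, G(mu) = log(mu / (m - mu)), so
    G'(mu) = m / (mu (m - mu)) and Var(Y_i) = mu (1 - mu/m) (phi = 1 here).
    """
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        eta = X @ beta                               # linear predictor x_i' beta
        mu = m / (1.0 + np.exp(-eta))                # mu_i = E(Y_i)
        gprime = m / (mu * (m - mu))                 # G'(mu_i) for the logit link
        w = 1.0 / (gprime**2 * mu * (1 - mu / m))    # W_ii, eq. (9)
        z = (y - mu) * gprime + eta                  # working response z_i, eq. (10)
        XtW = X.T * w                                # X'W (w broadcast over columns)
        beta_new = np.linalg.solve(XtW @ X, XtW @ z) # eq. (8)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

# Tiny synthetic example (assumed data, for illustration only)
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
p_true = 1 / (1 + np.exp(-(0.5 + 1.5 * X[:, 1])))
y = rng.binomial(1, p_true)
print(irls_logistic(X, y))
```

Because the link here is canonical, this Fisher scoring iteration coincides with Newton's method.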
Fisher scoring may also be written
$$\beta^{(k+1)} = \beta^{(k)} + (X'WX)^{-1}X'Wz^* \qquad (11)$$
where $z_i^* = (Y_i - \mu_i)G'(\mu_i)$.

Inference

The saturated model $S$ is one with $n$ parameters $\tilde\theta_i$, one for each $y_i$. The value of $\tilde\theta_i$ in the saturated model is the maximizer of (1), commonly $y_i$ itself. In the $N(\theta_i, \sigma^2)$ model this gives a zero sum of squares for the saturated model. [Exercise: what is the likelihood for a saturated normal model?] Note that if $x_i = x_j$, so that for any $\beta$ we have $x_i'\beta = x_j'\beta$, the saturated model can still have $\tilde\theta_i \ne \tilde\theta_j$.

Let $l_{\max}$ be the log likelihood of $S$. In the widely used models this is finite. (In some cases saturating a model can lead to an infinite likelihood. Consider the normal distribution where one picks a mean and a variance for each observation.) The scaled deviance is
$$D^* = 2\left(l_{\max} - l(\theta(\hat\beta))\right) \qquad (12)$$
and the unscaled deviance is
$$D = \phi D^* = 2\sum_{i=1}^n \left( Y_i(\tilde\theta_i - \hat\theta_i) - b(\tilde\theta_i) + b(\hat\theta_i) \right) = \sum_{i=1}^n d_i. \qquad (13)$$

The scaled deviance $D^*$ (and the unscaled when $\phi = 1$) is asymptotically $\chi^2_{n-p}$, where $p$ is the number of parameters in $\beta$, under some restrictive conditions. The usual likelihood ratio asymptotics do not apply because the degrees of freedom $n - p$ is growing as fast as the number of observations $n$. In a limit with each observation becoming "more normal" and the number of observations remaining fixed, a chisquare result can obtain. (Some examples: $\mathrm{Bin}(m_i, p_i)$ observations with increasing $m_i$ and $m_ip_i(1 - p_i)$ bounded away from zero, and $\mathrm{Poi}(\lambda_i)$ observations with $\lambda_i$ increasing.) There are however cases where a chisquare limit holds with $n$ increasing (see Rice's introductory statistics book).

When the $\chi^2_{n-p}$ approximation is accurate we can use it to test goodness of fit. A model with too large a deviance doesn't fit the data well. Differences between scaled deviances of nested models have a more nearly chisquare distribution than the deviances themselves. Because the degrees of freedom between two models is fixed, these differences are also chisquared in the usual asymptotic setup of increasing $n$. The deviance is additive, so that for models $M_1 \subseteq M_2 \subseteq M_3$, the chisquare statistic for testing $M_1$ within $M_3$ is the sum of those for testing $M_1$ within $M_2$ and $M_2$ within $M_3$. The deviance generalizes the sum of squared errors (and $D^*$ generalizes the sum of squares normalized by $\sigma^2$).

Another generalization of the sum of squared errors is Pearson's chisquare
$$\chi^2 = \phi\sum_{i=1}^n \frac{(Y_i - E(Y_i))^2}{V(Y_i)} = \sum_{i=1}^n \frac{(Y_i - b'(\theta_i))^2}{b''(\theta_i)}, \qquad (14)$$
often denoted by $X^2$ to indicate that it is a sample quantity. Pearson's chisquare can also be used to test nested models, but it does not have the additivity that the deviance does. It may also be asymptotically chisquared, but just as for the deviance, there are problems with large "sparse" models (lots of observations but small $m_i$ or $\lambda_i$). Pearson's chisquare is often used to estimate $\phi$ by
$$\hat\phi = \frac{\chi^2}{n - p}. \qquad (15)$$

Residuals can be defined by partitioning either the deviance or Pearson's $\chi^2$. The deviance residuals are
$$r_i^{(D)} = \mathrm{sign}(Y_i - \mu_i)\sqrt{d_i} \qquad (16)$$
where $d_i$ is from (13), and the Pearson residuals are
$$r_i^{(P)} = \frac{Y_i - \mu_i}{\sqrt{b''(\theta_i)}}. \qquad (17)$$
These can also be "leverage adjusted" to reflect the effect of $x_i$ on the variance of the different observations. See references in Green and Silverman. These residuals can be used in plots to test for lack of fit of a model, or to identify which points might be outliers. They need not have a normal distribution, so qq plots of them can't necessarily diagnose a lack of fit.
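As a small illustration of (13), (16) and (17), here is a sketch for a Poisson GLM, where $b(\theta) = e^\theta$ so that $b'(\theta) = b''(\theta) = \mu$. The function name and the example values of $y$ and the fitted means are assumptions made for the example; the fitted means would ordinarily come from a fit such as the IRLS sketch above.

```python
import numpy as np

def poisson_deviance_residuals(y, mu):
    """Per-observation deviance d_i (eq. 13), deviance residuals (16) and
    Pearson residuals (17) for a Poisson GLM, where b''(theta_i) = mu_i."""
    y = np.asarray(y, dtype=float)
    mu = np.asarray(mu, dtype=float)
    # y * log(y / mu), with the convention 0 * log(0) = 0 for zero counts
    ylogy = np.zeros_like(y)
    pos = y > 0
    ylogy[pos] = y[pos] * np.log(y[pos] / mu[pos])
    d = 2.0 * (ylogy - (y - mu))                 # deviance contributions d_i
    dev_resid = np.sign(y - mu) * np.sqrt(d)     # eq. (16)
    pearson_resid = (y - mu) / np.sqrt(mu)       # eq. (17)
    return d, dev_resid, pearson_resid

# Assumed example values, for illustration only
y = np.array([0, 2, 5, 1, 7])
mu_hat = np.array([0.8, 1.9, 4.2, 1.5, 6.1])
d, rd, rp = poisson_deviance_residuals(y, mu_hat)
print("deviance D =", d.sum())          # eq. (13) with phi = 1
print("Pearson X^2 =", (rp**2).sum())   # eq. (14)
```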
Approximate confidence intervals may be obtained by profiling the likelihood function and referring differences in (scaled) deviance to a chisquare distribution to calibrate the coverage levels. Computationally simpler confidence statements can be derived from the asymptotic covariance matrix of $\hat\beta$,
$$\left( -E\left( \frac{\partial^2 l}{\partial\beta\,\partial\beta'} \right) \right)^{-1} = \phi(X'WX)^{-1}. \qquad (18)$$
(I think the formula that Green and Silverman give (page 97) instead of (18) is wrong: they have partials with respect to a vector $\eta$ with components $\eta_i = x_i'\beta$.) These simple confidence statements are much like the linearization inferences in nonlinear least squares. They judge $H_0: \beta = \beta_0$ not by the likelihood at $\beta_0$ but by a prediction of what that likelihood would be, based on an expansion around $\beta = \hat\beta$.

Overdispersion

Though our model may imply that $V(Y_i) = \phi b''(\theta_i)$, in practice it can turn out that the model is too simple. For example, a model in which $Y_i$ is a count may assume it to be a sum of independent identically distributed Bernoulli random variables. But non-constant success probability or dependence among the Bernoulli variables can cause $V(Y_i) \ne m_ip_i(1 - p_i)$. We write this as
$$V(Y_i) = b''(\theta_i)\phi\sigma_i^2 \qquad (19)$$
for some overdispersion parameter $\sigma_i^2 \ne 1$.

One can model the overdispersion in terms of $x_i$ and possibly other parameters as well. But commonly a very simple model with a constant value $\sigma_i^2 = \sigma^2$ is used. Given that variances are usually harder to estimate than means, it is reasonable to use a smaller model for the overdispersion than for $\theta_i$.
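Tying together (14), (15) and (18): below is a sketch of how one might form approximate standard errors for $\hat\beta$ with the dispersion estimated from Pearson's chisquare, for a Poisson model with log link (so $W_{ii} = \mu_i$). The function name and example values are assumptions; in practice $X$, $y$ and the fitted means would come from an actual fit.

```python
import numpy as np

def quasipoisson_se(X, y, mu_hat):
    """Approximate standard errors for beta-hat in a Poisson (log link) GLM,
    using cov(beta-hat) ~= phi-hat * (X'WX)^{-1} from (18), with phi-hat the
    Pearson estimate (15). For the log link, W_ii = mu_i."""
    n, p = X.shape
    pearson_x2 = np.sum((y - mu_hat)**2 / mu_hat)   # eq. (14), b''(theta_i) = mu_i
    phi_hat = pearson_x2 / (n - p)                  # eq. (15); >> 1 suggests overdispersion
    XtWX = (X.T * mu_hat) @ X                       # X'WX with W = diag(mu_hat)
    cov_beta = phi_hat * np.linalg.inv(XtWX)        # eq. (18) scaled by phi-hat
    return np.sqrt(np.diag(cov_beta)), phi_hat

# Assumed example values, for illustration only
X = np.column_stack([np.ones(5), np.array([-1.0, -0.5, 0.0, 0.5, 1.0])])
y = np.array([1.0, 2.0, 3.0, 2.0, 8.0])
mu_hat = np.exp(0.9 + 0.8 * X[:, 1])   # pretend fitted means from a log-link fit
se, phi_hat = quasipoisson_se(X, y, mu_hat)
print("phi-hat =", phi_hat, " se(beta) =", se)
```

Inflating the covariance by $\hat\phi$ in this way is the simple constant-overdispersion model $\sigma_i^2 = \sigma^2$ described above.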
