Chapter 6: Generalized Linear Models

In Chapters 2 and 4 we studied how to estimate simple probability densities over a single random variable, that is, densities of the form P(Y). In this chapter we move on to the problem of estimating conditional densities, that is, densities of the form P(Y | X). Logically speaking, it would be possible to deal with this problem simply by assuming that Y may have an arbitrarily different distribution for each possible value of X, and to use techniques we've covered already to estimate a different density P(Y | X = x_i) for each possible value x_i of X. However, this approach would miss the fact that X may have a systematic effect on Y; missing this fact when it is true would make it much more difficult to estimate the conditional distribution. Here, we cover a popular family of conditional probability distributions known as generalized linear models. These models can be used for a wide range of data types and have attractive computational properties.

6.1 The form of the generalized linear model

Suppose we are interested in modeling the distribution of Y conditional on a number of random variables X_1, ..., X_n. Generalized linear models are a framework for modeling this type of conditional distribution P(Y | X_1, ..., X_n) subject to four key assumptions:

1. The influences of the {X_i} variables on Y can be summarized into an intermediate form, the linear predictor η;
2. η is a linear combination of the {X_i};
3. There is a smooth, invertible function l mapping η to the expected value µ of Y;
4. The distribution P(Y = y; µ) of Y around µ is a member of a certain class of noise functions[1] and is not otherwise sensitive to the {X_i} variables.

[1] The class of allowable noise functions is described in Section ??.

[Figure 6.1: A graphical depiction of the generalized linear model. The influence of the conditioning variables X on the response Y is completely mediated by the linear predictor η.]

Assumptions 1 through 3 can be expressed by the following two equations:

    η = α + β_1 X_1 + ... + β_n X_n        (linear predictor)    (6.1)
    η = l(µ)                               (link function)       (6.2)

Assumption 4 implies conditional independence of Y from the {X_i} variables given η. Various choices of l(µ) and P(Y = y; µ) give us different classes of models appropriate for different types of data. In all cases, we can estimate the parameters of the models using any of the likelihood-based techniques discussed in Chapter 4. We cover three common classes of such models in this chapter: linear models, logit (logistic) models, and log-linear models.

6.2 Linear models and linear regression

We can obtain the classic linear model by choosing the identity link function

    η = l(µ) = µ

and a noise function that adds noise ε ~ N(0, σ²) to the mean µ. Substituting these into Equations (6.1) and (6.2), we can write Y directly as a function of the {X_i} as follows:

    Y = α + β_1 X_1 + ... + β_n X_n + ε,    ε ~ N(0, σ²)    (6.3)

where α + β_1 X_1 + ... + β_n X_n is the predicted mean and ε is the noise term. We can also write this whole thing in more compressed form as Y ~ N(α + Σ_i β_i X_i, σ²). To gain intuition for how this model places a conditional probability density on Y, we can visualize this probability density for a single independent variable X, as in Figure 6.2 (lighter means more probable).
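For readers who want to connect Equation (6.3) to concrete numbers, the following short R sketch (illustrative only, not from the original text) evaluates and simulates the conditional density P(Y | X) for a single predictor, using the parameter values α = -1, β = 1/2, σ² = 4 assumed in Figure 6.2:

    ## A minimal sketch: the linear model Y = alpha + beta*X + epsilon,
    ## epsilon ~ N(0, sigma^2), with the parameter values of Figure 6.2.
    alpha <- -1
    beta  <- 0.5
    sigma <- 2                               # so that sigma^2 = 4

    ## The conditional density P(Y = y | X = x) is a normal density
    ## centered on the linear predictor eta = alpha + beta * x.
    cond_dens <- function(y, x) dnorm(y, mean = alpha + beta * x, sd = sigma)
    cond_dens(-1, 0)                         # densest point of the slice at X = 0

    ## Simulate some data and plot it with the predicted-mean line.
    x <- runif(500, -4, 4)
    y <- alpha + beta * x + rnorm(500, sd = sigma)
    plot(x, y)
    abline(a = alpha, b = beta, lwd = 2)     # the line eta = alpha + beta * x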
[Figure 6.2: Conditional probability density P(y | x) for a linear model, here with η = -1 + (1/2)X and σ² = 4. Panel (a): predicted outcome; panel (b): probability density.]

Each vertical slice of constant X = x represents a conditional distribution P(Y | x). If you imagine a vertical line extending through the plot at X = 0, you will see that the plot along this line is lightest in color at Y = -1 = α. This is the point at which ε takes its most probable value, 0. For this reason, α is also called the intercept parameter of the model, and the β_i are called the slope parameters.

6.2.1 Fitting a linear model

The process of estimating the parameters α and {β_i} of a linear model on the basis of some data (also called fitting the model to the data) is called linear regression. There are many techniques for parameter estimation in linear regression; here we will cover the method of maximum likelihood and also Bayesian linear regression.

Maximum-likelihood linear regression

Before we talk about exactly what the maximum-likelihood estimate looks like, we'll introduce some useful terminology. Suppose that we have chosen model parameters α̂ and {β̂_i}. This means that for each point ⟨x_j, y_j⟩ in our dataset y, we can construct a predicted value ŷ_j as follows:

    ŷ_j = α̂ + β̂_1 x_j1 + ... + β̂_n x_jn

where x_ji is the value of the i-th predictor variable for the j-th data point. This predicted value is both the expected value and the modal value of Y_j, due to the Gaussian-noise assumption of linear regression. We define the residual of the j-th data point simply as

    y_j - ŷ_j

that is, the amount by which our model's prediction missed the observed value.

It turns out that for linear models with a normally-distributed error term ε, the log-likelihood of the model parameters with respect to y is, up to an additive constant, proportional to the negative of the sum of the squared residuals. This means that the maximum-likelihood estimate of the parameters is also the estimate that minimizes the sum of the squared residuals. You will often see descriptions of regression models being fit using least-squares estimation. Whenever you see this, recognize that it is equivalent to maximum-likelihood estimation under the assumption that residual error is normally distributed.

6.2.2 Fitting a linear model: case study

The dataset english contains reaction times for lexical decision and naming of isolated English words, as well as written frequencies for those words. Reaction times are measured in milliseconds, and word frequencies are measured in appearances in a 17.9-million word written corpus. (All these variables are recorded in log-space.) It is well established that words of high textual frequency are generally responded to more quickly than words of low textual frequency. Let us consider a linear model in which reaction time RT depends on the log-frequency, F, of the word:

    RT = α + β_F F + ε    (6.4)

This linear model corresponds to a formula in R (see Section 11.1), which can be specified in either of the following ways:

    RT ~ F
    RT ~ 1 + F

The 1 in the latter formula refers to the intercept of the model; the presence of an intercept is implicit in the first formula.
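In R, this fit can be obtained with the lm() function. The following sketch is illustrative rather than the author's own code: it assumes the english dataset distributed with the languageR package and, following Figure 6.3, uses exp(RTlexdec) (reaction time in milliseconds) as the response and WrittenFrequency as the predictor:

    library(languageR)   # assumed source of the english dataset

    ## Fit RT = alpha + beta_F * F + epsilon by maximum likelihood
    ## (equivalently, least squares). RTlexdec is stored in log-space,
    ## so exponentiate to get milliseconds.
    m <- lm(exp(RTlexdec) ~ WrittenFrequency, data = english)

    coef(m)              # estimated intercept alpha-hat and slope beta_F-hat
    head(resid(m))       # residuals y_j - y-hat_j for the first few observations
    sum(resid(m)^2)      # the sum of squared residuals minimized by the fit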
The result of the linear regression is an intercept α = 843.58 and a slope β_F = -29.76. The WrittenFrequency variable is in natural log-space, so the slope can be interpreted as saying that if two words differ in frequency by a factor of e ≈ 2.718, then on average the more frequent word will be recognized as a word of English 29.76 milliseconds faster than the less frequent word. The intercept, 843.58, is the predicted reaction time for a word whose log-frequency is 0, that is, a word occurring only once in the corpus.

[Figure 6.3: Lexical decision reaction times (exp(RTlexdec), in milliseconds) as a function of word frequency (WrittenFrequency).]

6.2.3 Conceptual underpinnings of best linear fit

Let us now break down how the model goes about fitting data in a simple example. Suppose we have only three observations of log-frequency/RT pairs:

    ⟨4, 800⟩
    ⟨6, 775⟩
    ⟨8, 700⟩

Let us consider four possible parameter estimates for these data points. Three estimates will draw a line through two of the points and miss the third; the last estimate will draw a line that misses but is reasonably close to all the points.

First consider the solid black line, which has intercept 910 and slope -25. It predicts the following values, missing all three points:

    x    ŷ      Residual (y - ŷ)
    4    810    -10
    6    760     15
    8    710    -10

and the sum of its squared residuals is 425. Each of the other three lines has only one non-zero residual, but that residual is much larger, and in all three cases the sum of squared residuals is larger than for the solid black line. This means that the likelihood of the parameter values α = 910, β_F = -25 is higher than the likelihood of the parameters corresponding to any of the other lines. What is the MLE for ⟨α, β_F⟩ with respect to these three data points, and what are the residuals for the MLE? Results of a linear fit (almost every statistical software package supports linear regression) indicate that the MLE is α = 908⅓, β_F = -25.
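The following short R sketch (again illustrative, not from the original text) verifies the arithmetic: it computes the residuals and sum of squared residuals for the line with intercept 910 and slope -25, and then obtains the maximum-likelihood fit for the three points with lm():

    x <- c(4, 6, 8)          # log-frequencies
    y <- c(800, 775, 700)    # reaction times

    ## Candidate line: intercept 910, slope -25.
    y_hat <- 910 - 25 * x
    y - y_hat                # residuals: -10, 15, -10
    sum((y - y_hat)^2)       # sum of squared residuals: 425

    ## Maximum-likelihood (least-squares) estimate for these three points.
    m <- lm(y ~ x)
    coef(m)                  # intercept 908.33 (= 908 1/3), slope -25
    resid(m)                 # residuals under the MLE
    sum(resid(m)^2)          # about 416.7, smaller than 425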
