Bayesian Modeling Strategies for Generalized Linear Models, Part 1


Reading: Hoff Chapter 9; Albert and Chib (1993) Sections 13.2, 4.1; Polson (2012) Sections 13, Appendix S6.2; Pillow and Scott (2012) Sections 13.1, 4; Dadaneh et al. (2018); Neelon (2018) Sections 1 and 2

Fall 2018

Linear Regression Model

Consider the following simple linear regression model:

Y_i = x_i^T β + e_i,  i = 1, …, n,

where
• x_i is a p × 1 vector of covariates (including an intercept)
• β is a p × 1 vector of regression coefficients
• e_i ~iid N(0, τ_e^{-1}), where τ_e = 1/σ_e^2 is a precision term

Or, combining all n observations:

Y = Xβ + e,

where Y and e are n × 1 vectors and X is an n × p design matrix.

Maximum Likelihood Inference for Linear Model

It is straightforward to show that the MLEs are

β̂ = (X^T X)^{-1} X^T Y
σ̂_e^2 = (1/n) (Y − Xβ̂)^T (Y − Xβ̂)          (MLE)
σ̃_e^2 = (1/(n − p)) (Y − Xβ̂)^T (Y − Xβ̂)    (REML)

Gauss–Markov Theorem: under the standard linear regression assumptions, β̂ is the best linear unbiased estimator (BLUE) of β.

Prior Specification for Linear Model

In the Bayesian framework, we place priors on β and σ_e^2 (or, equivalently, τ_e). Common choices are the so-called semi-conjugate or conditionally conjugate priors:

β ~ N_p(β_0, T_0^{-1}), where T_0 is the prior precision
τ_e ~ Ga(a, b), where b is a rate parameter

Equivalently, we can say that σ_e^2 is inverse-gamma (IG) with shape a and scale b. These choices lead to conjugate full conditional distributions.

Choice of Prior Parameters

Common choices for the prior parameters include:
• β_0 = 0
• T_0 = 0.01 I_p, so that Σ_0 = T_0^{-1} = 100 I_p
• a = b = 0.001

See Hoff Section 9.2.2 for alternative default priors, including Zellner's g-prior for β.

Full Conditional Distributions*

One can show that the full conditional for β is

β | Y = y, τ_e ~ N(m, V),

where

V = (T_0 + τ_e X^T X)^{-1}
m = V (T_0 β_0 + τ_e X^T y)

To derive this, you must complete the square in p dimensions:
Proposition. For vectors β and η and a symmetric, invertible matrix V,

β^T V^{-1} β − 2 η^T β = (β − Vη)^T V^{-1} (β − Vη) − η^T V η

Full Conditional Distributions (Cont'd)

Similarly, the full conditional distribution of τ_e is

τ_e | y, β ~ Ga(a*, b*),

where

a* = a + n/2
b* = b + (y − Xβ)^T (y − Xβ)/2

Homework for Friday: please derive the full conditionals for β and τ_e.

We can use these full conditionals to develop a straightforward Gibbs sampler for posterior inference. See Linear Regression.r for an illustration.

R Code for Linear Regression Gibbs Sampler

library(mvtnorm)

# Priors
beta0<-rep(0,p)           # Prior mean of beta, where p = # of parameters
T0<-diag(0.01,p)          # Prior precision of beta
a<-b<-0.001               # Gamma hyperparms for taue

# Inits
taue<-1                   # Error precision

# MCMC Info
nsim<-1000                # Number of MCMC iterations
thin<-1                   # Thinning interval
burn<-nsim/2              # Burn-in
lastit<-(nsim-burn)/thin  # Last stored value

# Store
Beta<-matrix(0,lastit,p)  # Matrices to store results
Sigma2<-rep(0,lastit)
Resid<-matrix(0,lastit,n) # Store resids
Dy<-matrix(0,lastit,512)  # Store density values for residual density plot
Qy<-matrix(0,lastit,100)  # Store quantiles for QQ plot

#########
# Gibbs #
#########
tmp<-proc.time()          # Store current time

for (i in 1:nsim){

  # Update beta
  v<-solve(T0+taue*crossprod(X))
  m<-v%*%(T0%*%beta0+taue*crossprod(X,y))
  beta<-c(rmvnorm(1,m,v))

  # Update taue
  taue<-rgamma(1,a+n/2,b+crossprod(y-X%*%beta)/2)

  # Store samples
  if (i>burn & i%%thin==0){
    j<-(i-burn)/thin
    Beta[j,]<-beta
    Sigma2[j]<-1/taue
    Resid[j,]<-resid<-y-X%*%beta                     # Raw resids
    Dy[j,]<-density(resid/sd(resid),from=-5,to=5)$y  # Density of standardized resids
    Qy[j,]<-quantile(resid/sd(resid),probs=seq(.001,.999,length=100))  # Quantiles for QQ plot
  }
  if (i%%100==0) print(i)
}

run.time<-proc.time()-tmp  # MCMC run time
# Took 1 second to run 1000 iterations with n=1000 subjects

Example

The program Linear Regression.r simulates 1000 observations from the
following linear regression model:

Y_i = β_0 + β_1 x_i + e_i,  i = 1, …, 1000,
e_i ~iid N(0, σ_e^2)

The results, based on 1000 iterations with a burn-in of 500, are:

Table 1: Results for Linear Regression Model
Parameter   True Value   MLE (SE)       Posterior Mean (SD)
β_0         −1           −1.01 (0.06)   −1.01 (0.06)
β_1         1            0.99 (0.06)    0.99 (0.06)
σ_e^2       4            3.83 (0.17)    3.84 (0.16)

Plots of Standardized Residuals

[Figure: density plot and normal QQ plot of the standardized residuals, overlaying the MLE and Bayes fits; both track the standard normal reference closely]

Skew-Normal Data

Suppose we generate data from a skew-normal distribution¹ SN(µ, ω^2, α), where µ ∈ ℝ, ω > 0, and α ∈ ℝ are location, scale, and skewness parameters. α > 0 implies positive skewness, α < 0 implies negative skewness, and when α = 0 the density reduces to a symmetric normal distribution.

For details on Bayesian approaches to fitting SN models, see Frühwirth-Schnatter and Pyne (2010) and Neelon et al.
(2015). For now, suppose we ignore skewness and fit an ordinary linear regression to the data. See Residual Diagnostics with SN Data.r for details.

¹ O'Hagan and Leonard (1976); Azzalini (1985)

Plots of Standardized Residuals

[Figure: density of the true errors e (generated with α = −5, ω = 4), together with density and QQ plots of the standardized residuals under the MLE and Bayes fits; the residuals are clearly left-skewed and depart from the normal reference line]

Probit and Logit Models for Binary Data*

Consider the following probit model for a dichotomous outcome Y_i:

Φ^{-1}[Pr(Y_i = 1)] = x_i^T β,  i = 1, …, n,

where Φ(·) denotes the CDF of a standard normal random variable.

We can represent the model via a latent variable Z_i such that

Z_i ~ N(x_i^T β, 1)  and  Y_i = 1 ⟺ Z_i > 0,

implying that

Pr(Y_i = 1) = Pr(Z_i > 0) = Φ(x_i^T β)

Latent Variable Interpretation of Probit Model

[Figure: standard normal density of Z_i centered at x_i^T β; the shaded area to the right of 0 equals Pr(Z_i > 0) = Pr(Y_i = 1)]

Albert and Chib (1993) Data-Augmented Sampler

Albert and Chib (1993) take advantage of this latent variable structure to develop an efficient data-augmented Gibbs sampler for probit regression.

Data augmentation is a method by which we introduce additional (or augmented) variables, Z = (Z_1, …, Z_n)^T, as part of the Gibbs sampler to facilitate sampling. Data augmentation is useful when the conditional density π(β|y) is intractable, but the joint posterior π(β, z|y) is easy to sample from via Gibbs, where z is an n × 1 vector of realizations of Z.

Data Augmentation Sampler

In particular, suppose it's straightforward to sample from the full conditionals π(β|y, z) and π(z|y, β). Then we can apply Gibbs sampling to obtain the joint posterior π(β, z|y). After convergence, the samples of β, {β^(1), …, β^(T)}, will constitute a Monte
Carlo sample from π(β|y).

Note that if β and Y are conditionally independent given Z, so that π(β|y, z) = π(β|z), then the sampler proceeds in two stages:
1. Draw z from π(z|y, β)
2. Draw β from π(β|z)

Gibbs Sampler for Probit Model*

The data-augmented sampler proposed by Albert and Chib proceeds by assigning a N_p(β_0, T_0^{-1}) prior to β and defining the posterior variance of β as V = (T_0 + X^T X)^{-1}. Note that because Var(Z_i) = 1, we can define V outside the Gibbs loop.

Next, we iterate through the following Gibbs steps:
1. For i = 1, …, n, sample z_i from a N(x_i^T β, 1) distribution truncated below (above) by 0 for y_i = 1 (y_i = 0)
2. Sample β from N_p(m, V), where m = V(T_0 β_0 + X^T z) and V is defined above

Note: conditional on Z, β is independent of Y, so we can work solely with the augmented likelihood when updating β. See Probit.r for details.

R Code for Probit Gibbs Sampler

# Priors
beta0<-rep(0,p)  # Prior mean of beta (of dimension p)
T0<-diag(.01,p)  # Prior precision of beta

# Inits
beta<-rep(0,p)
z<-rep(0,n)      # Latent normal variables
ns<-table(y)     # Category sample sizes

# MCMC info analogous to linear reg. code
# Posterior var of beta -- Note: can calculate outside of loop
vbeta<-solve(T0+crossprod(X,X))

#########
# Gibbs #
#########
tmp<-proc.time()  # Store current time

for (i in 1:nsim){

  # Update latent normals, z, from truncated normal using inverse-CDF method
  muz<-X%*%beta
  z[y==0]<-qnorm(runif(ns[1],0,pnorm(0,muz[y==0],1)),muz[y==0],1)
  z[y==1]<-qnorm(runif(ns[2],pnorm(0,muz[y==1],1),1),muz[y==1],1)

  # Alternatively, can use rtnorm function from msm package -- this is slower
  # z[y==0]<-rtnorm(n0,muz[y==0],1,-Inf,0)
  # z[y==1]<-rtnorm(n1,muz[y==1],1,0,Inf)

  # Update beta
  mbeta<-vbeta%*%(T0%*%beta0+crossprod(X,z))  # Posterior mean of beta
  beta<-c(rmvnorm(1,mbeta,vbeta))

  #################
  # Store Results #
  #################
  if (i>burn & i%%thin==0){
    j<-(i-burn)/thin
    Beta[j,]<-beta
  }
  if (i%%100==0) print(i)
}

proc.time()-tmp  # MCMC run time -- 0.64 seconds to run 1000 iterations with n=1000

Example

The program Probit.r fits the following probit model:

Y_i ~ Bern(π_i)
Φ^{-1}(π_i) = β_0 + β_1 x_i,  i = 1, …, 1000.

The results, based on 1000 iterations with a burn-in of 500, are:

Table 2: Results for Probit Model.
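The two Gibbs steps above are compact enough to sketch in any language. Below is a minimal Python translation of the Albert–Chib sampler, not part of the course code (which is in R): the simulated design, `beta_true`, and the tuning constants are all illustrative assumptions. It draws the latent normals by the same inverse-CDF method and then updates β from its multivariate normal full conditional.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Simulate probit data: y_i = 1{z_i > 0}, z_i ~ N(x_i' beta, 1)
n, p = 1000, 2
beta_true = np.array([-1.0, 1.0])          # illustrative true values
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)

# Prior beta ~ N(beta0, T0^{-1}); since Var(z_i) = 1, the posterior
# variance V = (T0 + X'X)^{-1} is fixed and computed once, outside the loop
beta0 = np.zeros(p)
T0 = 0.01 * np.eye(p)
V = np.linalg.inv(T0 + X.T @ X)

beta = np.zeros(p)
nsim, burn = 1000, 500
draws = np.zeros((nsim - burn, p))

for it in range(nsim):
    # Step 1: z_i ~ N(x_i' beta, 1) truncated below (above) 0 for y_i = 1 (0),
    # via the inverse-CDF method
    mu = X @ beta
    cut = norm.cdf(0.0, mu, 1.0)           # P(z_i < 0 | beta)
    lo = np.where(y == 1, cut, 0.0)
    hi = np.where(y == 1, 1.0, cut)
    u = rng.uniform(size=n)
    z = norm.ppf(lo + u * (hi - lo), mu, 1.0)

    # Step 2: beta ~ N(m, V) with m = V (T0 beta0 + X'z)
    m = V @ (T0 @ beta0 + X.T @ z)
    beta = rng.multivariate_normal(m, V)

    if it >= burn:
        draws[it - burn] = beta

post_mean = draws.mean(axis=0)             # should be close to beta_true
```

Because the latent-variable variance is fixed at 1, only the posterior mean changes across iterations, which is what makes this sampler so cheap per sweep.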