Bayesian Modeling Strategies for Generalized Linear Models, Part 1

Reading: Hoff Chapter 9; Albert and Chib (1993) Sections 13.2, 4.1; Polson (2012) Sections 13, Appendix S6.2; Pillow and Scott (2012) Sections 13.1, 4; Dadaneh et al. (2018); Neelon (2018) Sections 1 and 2
Fall 2018
Linear Regression Model
Consider the following simple linear regression model:

Y_i = x_i^T β + e_i,  i = 1, ..., n,

where
• x_i is a p × 1 vector of covariates (including an intercept)
• β is a p × 1 vector of regression coefficients
• e_i ~iid N(0, τ_e^{-1}), where τ_e = 1/σ_e^2 is a precision term
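As a concrete sketch of the model above, the following simulates data from it and recovers β by ordinary least squares, which coincides with the MLE under Gaussian errors. The dimensions, true coefficients, and precision value are illustrative choices, not from the source.

```python
import numpy as np

rng = np.random.default_rng(0)  # arbitrary seed for reproducibility

# Hypothetical dimensions and parameters (illustrative only)
n, p = 200, 3
beta_true = np.array([1.0, -2.0, 0.5])
tau_e = 4.0                              # precision; sigma_e^2 = 1 / tau_e

# Design matrix: intercept column plus p - 1 covariates
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])

# Simulate Y_i = x_i^T beta + e_i with e_i ~ N(0, 1/tau_e)
e = rng.normal(0.0, np.sqrt(1.0 / tau_e), size=n)
Y = X @ beta_true + e

# Least-squares fit (equals the MLE under the Gaussian error model)
beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(beta_hat)
```

With n = 200 observations, the estimate should land close to the true coefficients.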
Or, combining all n observations:

Y = Xβ + e,

where Y and e are n × 1 vectors and X is an n × p design matrix.

Maximum Likelihood Inference for Linear Model
It is straightforward to show that the MLEs are