Gibbs Sampling in Linear Models #1


Econ 690, Purdue University
Justin L. Tobias

Outline

1. Conditional posterior distributions for regression parameters in the linear model [Lindley and Smith (1972, JRSSB)]
2. Gibbs sampling in the classical linear regression model, with an application to our wage data
3. Gibbs sampling in a linear regression model with groupwise heteroscedasticity

Before discussing applications of Gibbs sampling in several different linear models, we must first prove an important result that will assist us in deriving the needed conditional posterior distributions. This result is relevant for the derivation of posterior conditional distributions for regression parameters, and makes use of our completion-of-the-square formulae presented earlier. We will use this result all the time in this and subsequent lectures. I'm not kidding. ALL THE TIME! (So try to get familiar with it.)

Deriving $\beta | \Sigma, y$ in the LRM

Consider the regression model (where $y$ is $n \times 1$)

$$y \,|\, X, \beta, \Sigma \sim N(X\beta, \Sigma)$$

under the proper prior

$$\beta \sim N(\mu_\beta, V_\beta)$$

(and some independent prior on $\Sigma$, $p(\Sigma)$, which we will not explicitly model).

We will show that

$$\beta \,|\, \Sigma, y \sim N(D_\beta d_\beta, D_\beta),$$

where

$$D_\beta = (X'\Sigma^{-1}X + V_\beta^{-1})^{-1}$$

and

$$d_\beta = X'\Sigma^{-1}y + V_\beta^{-1}\mu_\beta.$$

First, note that (under prior independence)

$$p(\beta, \Sigma \,|\, y) \propto p(\beta)\, p(\Sigma)\, p(y \,|\, \beta, \Sigma),$$

where $p(\Sigma)$ denotes the prior for $\Sigma$.
Since the conditional posterior distribution $p(\beta \,|\, \Sigma, y)$ is proportional to the joint posterior above (why?), we can ignore all terms that enter multiplicatively and involve only $\Sigma$ or $y$ (or both). It follows that

$$p(\beta \,|\, \Sigma, y) \propto \exp\left( -\frac{1}{2}\left[ (y - X\beta)'\Sigma^{-1}(y - X\beta) + (\beta - \mu_\beta)'V_\beta^{-1}(\beta - \mu_\beta) \right] \right).$$

This is almost in the form where we can apply our previous completion-of-the-square result. However, before doing this, we must obtain a quadratic form in $\beta$ from the "likelihood function part" of this expression.

To this end, let us define

$$\beta^* = (X'\Sigma^{-1}X)^{-1}X'\Sigma^{-1}y.$$

Note that $\beta^*$ is just the familiar GLS estimator. Also note that, given $\Sigma$ and $y$, $\beta^*$ is effectively "known" for purposes of deriving $\beta \,|\, \Sigma, y$ (i.e., we can and will condition on it).

For the sake of space, let

$$\hat{u} = y - X\beta^*,$$

and note

$$(y - X\beta)'\Sigma^{-1}(y - X\beta) = \left[\hat{u} - X(\beta - \beta^*)\right]'\Sigma^{-1}\left[\hat{u} - X(\beta - \beta^*)\right] = \hat{u}'\Sigma^{-1}\hat{u} + (\beta - \beta^*)'X'\Sigma^{-1}X(\beta - \beta^*),$$

since the middle (cross-product) term vanishes. The reason this middle term is zero follows since

$$X'\Sigma^{-1}\hat{u} = X'\Sigma^{-1}y - X'\Sigma^{-1}X(X'\Sigma^{-1}X)^{-1}X'\Sigma^{-1}y = 0.$$

Using this last result, we can replace the quadratic form $(y - X\beta)'\Sigma^{-1}(y - X\beta)$ with $\hat{u}'\Sigma^{-1}\hat{u} + (\beta - \beta^*)'X'\Sigma^{-1}X(\beta - \beta^*)$ to yield

$$p(\beta \,|\, \Sigma, y) \propto \exp\left( -\frac{1}{2}\left[ (\beta - \beta^*)'X'\Sigma^{-1}X(\beta - \beta^*) + (\beta - \mu_\beta)'V_\beta^{-1}(\beta - \mu_\beta) \right] \right).$$
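The orthogonality step above, $X'\Sigma^{-1}\hat{u} = 0$ for the GLS residuals, is easy to verify numerically. A minimal sketch (the data and $\Sigma$ below are synthetic, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 3
X = rng.normal(size=(n, k))
y = rng.normal(size=n)

# A generic positive-definite Sigma (purely illustrative)
A = rng.normal(size=(n, n))
Sigma = A @ A.T + n * np.eye(n)
Sig_inv = np.linalg.inv(Sigma)

# GLS estimator beta* = (X' Sigma^{-1} X)^{-1} X' Sigma^{-1} y
beta_star = np.linalg.solve(X.T @ Sig_inv @ X, X.T @ Sig_inv @ y)
u_hat = y - X @ beta_star

# X' Sigma^{-1} u_hat should be zero up to floating-point error
print(np.max(np.abs(X.T @ Sig_inv @ u_hat)))
```

The cross-product term in the expansion therefore drops out for any $\Sigma$, not just special cases.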
We can apply our completion-of-the-square formula to the above to get

$$p(\beta \,|\, \Sigma, y) \propto \exp\left( -\frac{1}{2} (\beta - \bar{\beta})' \bar{V}_\beta^{-1} (\beta - \bar{\beta}) \right),$$

where

$$\bar{V}_\beta = (X'\Sigma^{-1}X + V_\beta^{-1})^{-1}$$

and

$$\bar{\beta} = \bar{V}_\beta (X'\Sigma^{-1}X\beta^* + V_\beta^{-1}\mu_\beta) = \bar{V}_\beta (X'\Sigma^{-1}y + V_\beta^{-1}\mu_\beta).$$

Defining

$$D_\beta = \bar{V}_\beta \quad \text{and} \quad d_\beta = X'\Sigma^{-1}y + V_\beta^{-1}\mu_\beta,$$

it follows that

$$\beta \,|\, \Sigma, y \sim N(D_\beta d_\beta, D_\beta),$$

as claimed.

Gibbs Sampling in the Classical Linear Model

Consider the regression model

$$y \sim N(X\beta, \sigma^2 I_n)$$

under the priors

$$\beta \sim N(\mu_\beta, V_\beta), \qquad \sigma^2 \sim IG(a, b).$$

We seek to show how the Gibbs sampler can be employed to fit this model (noting, of course, that we do not need to resort to this algorithm in such a simplified model, but we introduce it here as a starting point).

To implement the sampler, we need to derive two things:

1. the posterior conditional $p(\beta \,|\, \sigma^2, y)$, and
2. the posterior conditional $p(\sigma^2 \,|\, \beta, y)$.

The first of these can be obtained by applying our previous theorem (with $\Sigma = \sigma^2 I_n$). Specifically, we obtain

$$\beta \,|\, \sigma^2, y \sim N(D_\beta d_\beta, D_\beta),$$

where

$$D_\beta = (X'X/\sigma^2 + V_\beta^{-1})^{-1}, \qquad d_\beta = X'y/\sigma^2 + V_\beta^{-1}\mu_\beta.$$

As for the posterior conditional for $\sigma^2$, note

$$p(\beta, \sigma^2 \,|\, y) \propto p(\beta)\, p(\sigma^2)\, p(y \,|\, \beta, \sigma^2).$$

Since $p(\sigma^2 \,|\, \beta, y)$ is proportional to the joint posterior above, it follows that

$$p(\sigma^2 \,|\, \beta, y) \propto (\sigma^2)^{-\left(\frac{n}{2} + a + 1\right)} \exp\left( -\frac{1}{\sigma^2}\left[ b^{-1} + \frac{1}{2}(y - X\beta)'(y - X\beta) \right] \right).$$

This density is easily recognized as the kernel of an

$$IG\left( \frac{n}{2} + a,\ \left[ b^{-1} + \frac{1}{2}(y - X\beta)'(y - X\beta) \right]^{-1} \right)$$

density.
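The $\beta$ conditional above is straightforward to compute and sample from. A minimal sketch with simulated data (all values below, including the conditioning value of $\sigma^2$ and the prior hyperparameters, are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 100, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.5]) + rng.normal(scale=0.5, size=n)

sigma2 = 0.25                              # conditioning value of sigma^2
mu_b = np.zeros(k)                         # prior mean mu_beta
V_b_inv = np.linalg.inv(4.0 * np.eye(k))   # prior precision V_beta^{-1}

# D_beta = (X'X / sigma^2 + V_beta^{-1})^{-1},  d_beta = X'y / sigma^2 + V_beta^{-1} mu_beta
D = np.linalg.inv(X.T @ X / sigma2 + V_b_inv)
d = X.T @ y / sigma2 + V_b_inv @ mu_b

beta_draw = rng.multivariate_normal(D @ d, D)  # one draw from beta | sigma^2, y
print(D @ d)  # posterior mean: close to OLS under this weak prior
```

With a diffuse prior ($V_\beta^{-1} \to 0$), the posterior mean $D_\beta d_\beta$ collapses to the OLS/GLS estimator, as the formula suggests.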
Thus, to implement the Gibbs sampler in the linear regression model, we can proceed as follows.

1. Given a current value of $\sigma^2$:
2. Calculate $D_\beta = D_\beta(\sigma^2)$ and $d_\beta = d_\beta(\sigma^2)$.
3. Draw $\tilde{\beta}$ from a $N[D_\beta(\sigma^2) d_\beta(\sigma^2),\ D_\beta(\sigma^2)]$ distribution.
4. Draw $\tilde{\sigma}^2$ from an
$$IG\left( \frac{n}{2} + a,\ \left[ b^{-1} + \frac{1}{2}(y - X\tilde{\beta})'(y - X\tilde{\beta}) \right]^{-1} \right)$$
distribution.
5. Repeat this process many times, updating the posterior conditionals at each iteration to condition on the most recent simulations produced in the chain.
6. Discard an early set of parameter simulations as the burn-in period.
7. Use the subsequent draws to compute posterior features of interest.

Application to Wage Data

We apply this Gibbs sampler to our wage-education data set. We seek to compare posterior features estimated via the Gibbs output to those we previously derived analytically in our linear regression model lecture notes. The following results are obtained under the prior

$$\beta \sim N(0, 4I_2), \qquad \sigma^2 \sim IG\left[3,\ \frac{1}{2(.2)}\right].$$

We obtain 5,000 simulations and discard the first 100 as the burn-in period.

Posterior Calculations Using the Gibbs Sampler

                 $\beta_0$    $\beta_1$         $\sigma^2$
E(· | y)         1.18         .091  [.091]      .267
Std(· | y)       .087         .0063 [.0066]     .0011

In addition, we calculate $\Pr(\beta_1 < .10 \,|\, y) = .911$. The numbers in square brackets were our analytical results based on a diffuse prior. Thus, we are able to match these analytical results almost exactly with 4,900 post-convergence simulations.
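The steps above can be sketched in code. Since we do not have the wage data here, the sketch uses simulated data (the design, true parameters, and hyperparameters are assumptions); the $IG$ draws use the fact that if $\sigma^2 \sim IG(\alpha, \beta)$ in the parameterization above, then $1/\sigma^2 \sim \text{Gamma}(\alpha, \text{scale} = \beta)$:

```python
import numpy as np

rng = np.random.default_rng(42)

# --- simulated data (an illustrative stand-in for the wage data) ---
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.2, 0.09])
y = X @ beta_true + rng.normal(scale=0.5, size=n)   # true sigma^2 = 0.25

# --- priors: beta ~ N(mu_b, V_b), sigma^2 ~ IG(a, b) (hyperparameters assumed) ---
k = X.shape[1]
mu_b, V_b_inv = np.zeros(k), np.linalg.inv(4.0 * np.eye(k))
a, b = 3.0, 1.0 / (2 * 0.2)

n_iter, burn = 5000, 100
draws_beta = np.empty((n_iter, k))
draws_sig2 = np.empty(n_iter)
sig2 = 1.0  # initial value of sigma^2

for it in range(n_iter):
    # Steps 2-3: beta | sigma^2, y ~ N(D d, D)
    D = np.linalg.inv(X.T @ X / sig2 + V_b_inv)
    d = X.T @ y / sig2 + V_b_inv @ mu_b
    beta = rng.multivariate_normal(D @ d, D)
    # Step 4: sigma^2 | beta, y ~ IG(n/2 + a, [1/b + SSR/2]^{-1})
    resid = y - X @ beta
    sig2 = 1.0 / rng.gamma(n / 2 + a, 1.0 / (1.0 / b + 0.5 * resid @ resid))
    draws_beta[it], draws_sig2[it] = beta, sig2

# Steps 6-7: discard burn-in, summarize the remaining draws
post_beta = draws_beta[burn:].mean(axis=0)
post_sig2 = draws_sig2[burn:].mean()
print(post_beta, post_sig2)
```

Posterior quantities such as $\Pr(\beta_1 < .10 \,|\, y)$ are then simple Monte Carlo averages over the retained draws, e.g. `(draws_beta[burn:, 1] < 0.10).mean()`.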
Gibbs Sampling with Groupwise Heteroscedasticity

Suppose we generalize the regression model in the previous exercise so that $y = X\beta + \epsilon$, where

$$E(\epsilon \epsilon' \,|\, X) \equiv \Sigma = \begin{bmatrix} \sigma_1^2 I_{n_1} & 0 & \cdots & 0 \\ 0 & \sigma_2^2 I_{n_2} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_J^2 I_{n_J} \end{bmatrix}.$$

That is, we relax the homoscedasticity assumption in favor of groupwise heteroscedasticity: we presume there are $J$ different groups with identical variance parameters within each group, but different parameters across groups. Let $n_j$ represent the number of observations belonging to group $j$. Given priors of the form

$$\beta \sim N(\mu_\beta, V_\beta), \qquad \sigma_j^2 \stackrel{ind}{\sim} IG(a_j, b_j), \quad j = 1, 2, \ldots, J,$$

show how the Gibbs sampler can be used to fit this model.

It is important to recognize that even in this "simple" model, proceeding analytically, as we did in the homoscedastic regression model, would prove very difficult. As will soon become clear, however, estimation of this model via the Gibbs sampler involves only a simple extension of our earlier algorithm for the "classical" linear regression model.

Since we have pre-sorted the data into blocks by type of variance parameter, define $y_j$ as the $n_j \times 1$ outcome vector and $X_j$ as the $n_j \times k$ covariate matrix for group $j$, $j = 1, 2, \ldots, J$. Thus,

$$\text{Var}(y_j \,|\, X_j) = \sigma_j^2 I_{n_j}, \qquad y = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_J \end{bmatrix}, \qquad X = \begin{bmatrix} X_1 \\ X_2 \\ \vdots \\ X_J \end{bmatrix}.$$
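Because $\Sigma$ is block-diagonal with a constant variance inside each block, it (and its inverse) never needs to be formed by hand; its diagonal is just each group's variance repeated $n_j$ times. A small sketch with hypothetical group sizes and variances:

```python
import numpy as np

# Hypothetical setup: J = 3 groups with n_j observations each
n_j = np.array([4, 3, 5])
sig2_j = np.array([0.5, 1.0, 2.0])   # group variances sigma_j^2

# Sigma = blockdiag(sigma_1^2 I_{n_1}, ..., sigma_J^2 I_{n_J}) is diagonal here,
# so its diagonal is each variance repeated n_j times
omega = np.repeat(sig2_j, n_j)
Sigma = np.diag(omega)
Sigma_inv = np.diag(1.0 / omega)      # the inverse is also diagonal

print(Sigma.shape)  # (12, 12), since sum(n_j) = 12
```

In practice one works with the weights $1/\sigma_j^2$ directly rather than storing the full $n \times n$ matrix.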
The likelihood function for this sample of data can be written as

$$L(\beta, \sigma_1^2, \ldots, \sigma_J^2) = \prod_{j=1}^{J} (2\pi\sigma_j^2)^{-n_j/2} \exp\left( -\frac{1}{2\sigma_j^2}(y_j - X_j\beta)'(y_j - X_j\beta) \right).$$
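The slides emphasize that the sampler is a simple extension of the homoscedastic algorithm. A hedged sketch of one full Gibbs pass, assuming (by analogy with the earlier derivation) that each $\sigma_j^2 \,|\, \beta, y \sim IG\!\left(n_j/2 + a_j,\ [b_j^{-1} + \tfrac{1}{2}(y_j - X_j\beta)'(y_j - X_j\beta)]^{-1}\right)$ and that $\beta \,|\, \{\sigma_j^2\}, y$ follows from the theorem with block-diagonal $\Sigma$; the function name and demo data are illustrative, not from the source:

```python
import numpy as np

rng = np.random.default_rng(7)

def gibbs_step_groupwise(y_list, X_list, beta, a_j, b_j, mu_b, V_b_inv):
    """One Gibbs pass: draw each sigma_j^2, then beta, following the same
    conditional-posterior pattern as the homoscedastic model (a sketch;
    priors sigma_j^2 ~ IG(a_j, b_j) are assumed)."""
    J = len(y_list)
    sig2 = np.empty(J)
    for j in range(J):
        resid = y_list[j] - X_list[j] @ beta
        sig2[j] = 1.0 / rng.gamma(len(y_list[j]) / 2 + a_j[j],
                                  1.0 / (1.0 / b_j[j] + 0.5 * resid @ resid))
    # beta | {sigma_j^2}, y: apply the theorem with block-diagonal Sigma,
    # i.e., weight each group's cross-products by 1 / sigma_j^2
    XtX = sum(X_list[j].T @ X_list[j] / sig2[j] for j in range(J))
    Xty = sum(X_list[j].T @ y_list[j] / sig2[j] for j in range(J))
    D = np.linalg.inv(XtX + V_b_inv)
    d = Xty + V_b_inv @ mu_b
    beta = rng.multivariate_normal(D @ d, D)
    return beta, sig2

# Tiny demo with J = 2 hypothetical groups of different variance
X_list = [rng.normal(size=(30, 2)), rng.normal(size=(40, 2))]
y_list = [Xj @ np.array([1.0, -0.5]) + rng.normal(scale=s, size=len(Xj))
          for Xj, s in zip(X_list, [0.5, 1.5])]
beta, sig2 = gibbs_step_groupwise(y_list, X_list, np.zeros(2),
                                  a_j=[3.0, 3.0], b_j=[2.5, 2.5],
                                  mu_b=np.zeros(2), V_b_inv=np.eye(2) / 4)
print(beta.shape, sig2.shape)
```

Iterating this step, discarding a burn-in period, and averaging the retained draws proceeds exactly as in the homoscedastic case.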
