Week 5: Simple Linear Regression

Brandon Stewart
Princeton
September 28-October 2, 2020

These slides are heavily influenced by Matt Blackwell, Adam Glynn, Erin Hartman, and Jens Hainmueller. Illustrations by Shay O'Brien.

Where We've Been and Where We're Going...

Last Week
- hypothesis testing
- what is regression

This Week
- mechanics and properties of simple linear regression
- inference and measures of model fit
- confidence intervals for regression
- goodness of fit

Next Week
- mechanics with two regressors
- omitted variables, multicollinearity

Long Run
- probability → inference → regression → causal inference

Macrostructure: This Semester

The next few weeks, and the rest of the semester:
- Linear Regression with Two Regressors
- Break Week and Multiple Linear Regression
- Rethinking Regression
- Regression in the Social Sciences
- Causality with Measured Confounding
- Unmeasured Confounding and Instrumental Variables
- Repeated Observations and Panel Data
- Review and Final Discussion

Outline

1. Mechanics of OLS
2. Classical Perspective (Part 1, Unbiasedness)
   - Sampling Distributions
   - Classical Assumptions 1-4
3. Classical Perspective: Variance
   - Sampling Variance
   - Gauss-Markov
   - Large Samples
   - Small Samples
   - Agnostic Perspective
4. Inference
   - Hypothesis Tests
   - Confidence Intervals
   - Goodness of Fit
   - Interpretation
5. Non-linearities
   - Log Transformations
   - Fun With Logs
   - LOESS

Narrow Goal: Understand lm() Output

    Call:
    lm(formula = sr ~ pop15, data = LifeCycleSavings)

    Residuals:
       Min     1Q Median     3Q    Max
    -8.637 -2.374  0.349  2.022 11.155

    Coefficients:
                Estimate Std. Error t value Pr(>|t|)
    (Intercept) 17.49660    2.27972   7.675 6.85e-10 ***
    pop15       -0.22302    0.06291  -3.545 0.000887 ***
    ---
    Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

    Residual standard error: 4.03 on 48 degrees of freedom
    Multiple R-squared: 0.2075,    Adjusted R-squared: 0.191
    F-statistic: 12.57 on 1 and 48 DF,  p-value: 0.0008866

Reminder

How do we fit the regression line \hat{Y} = \hat{\beta}_0 + \hat{\beta}_1 X to the data?

Answer: we minimize the sum of squared residuals. The residual \hat{u}_i is the "part" of Y_i not predicted by the line,

    \hat{u}_i = Y_i - \hat{Y}_i,

and the fitted line solves

    \min_{\hat{\beta}_0, \hat{\beta}_1} \sum_{i=1}^{n} \hat{u}_i^2.

The Population Quantity

Broadly speaking, we are interested in the conditional expectation function (CEF), in part because it minimizes the mean squared error. The CEF can have an arbitrary shape, but there is always a best linear predictor (BLP), or linear projection, which is the line given by

    g(X) = \beta_0 + \beta_1 X

where

    \beta_0 = E[Y] - \frac{Cov[X, Y]}{V[X]} E[X]   and   \beta_1 = \frac{Cov[X, Y]}{V[X]}.

This may not be a good approximation, depending on how non-linear the true CEF is. However, it provides us with a reasonable target that always exists. Define deviations from the BLP as u = Y - g(X). Then the following properties hold:

1. E[u] = 0
2. E[Xu] = 0
3. Cov[X, u] = 0

What is OLS?

The best linear predictor is the line that minimizes

    (\beta_0, \beta_1) = \arg\min_{b_0, b_1} E[(Y - b_0 - b_1 X)^2].

Ordinary Least Squares (OLS) is a method for minimizing the sample analog of this quantity. It solves the optimization problem

    (\hat{\beta}_0, \hat{\beta}_1) = \arg\min_{b_0, b_1} \sum_{i=1}^{n} (Y_i - b_0 - b_1 X_i)^2.

In words, the OLS estimates are the intercept and slope that minimize the sum of the squared residuals. There are many loss functions, but OLS uses squared error loss, which is connected to the conditional expectation function. If we chose a different loss, we would target a different feature of the conditional distribution.
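To connect the lm() output above to this optimization problem, here is a minimal R sketch (an addition to the slides, not from them) that minimizes the sum of squared residuals numerically on the LifeCycleSavings data and compares the result to lm(). The helper function ssr(), the starting values, and the choice of optimizer are ad hoc.

```r
# Sum of squared residuals for a candidate intercept b[1] and slope b[2]
ssr <- function(b, x, y) {
  sum((y - b[1] - b[2] * x)^2)
}

x <- LifeCycleSavings$pop15
y <- LifeCycleSavings$sr

# Minimize the SSR numerically (default Nelder-Mead, arbitrary starting values)
fit_optim <- optim(par = c(0, 0), fn = ssr, x = x, y = y)

# Compare to the closed-form OLS solution computed by lm()
fit_lm <- lm(sr ~ pop15, data = LifeCycleSavings)

fit_optim$par   # approximately 17.497 and -0.223
coef(fit_lm)    # (Intercept) 17.49660, pop15 -0.22302
```

Up to the optimizer's convergence tolerance, the numerical minimizer lands on the same intercept and slope that lm() reports, which is exactly what the arg min definition says it should do.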
Deriving the OLS Estimator

Let's think about n pairs of sample observations:

    (Y_1, X_1), (Y_2, X_2), \ldots, (Y_n, X_n).

Let b_0 and b_1 be possible values for \beta_0 and \beta_1, and define the least squares objective function

    S(b_0, b_1) = \sum_{i=1}^{n} (Y_i - b_0 - b_1 X_i)^2.

How do we derive the least squares estimators of \beta_0 and \beta_1? We want to minimize this function, which is a well-defined calculus problem:

1. Take the partial derivatives of S with respect to b_0 and b_1.
2. Set each partial derivative to 0.
3. Solve for b_0 and b_1, and replace them with the solutions.

We are going to step through this process together.

Step 1: Take Partial Derivatives

    S(b_0, b_1) = \sum_{i=1}^{n} (Y_i - b_0 - b_1 X_i)^2
                = \sum_{i=1}^{n} (Y_i^2 - 2 Y_i b_0 - 2 Y_i b_1 X_i + b_0^2 + 2 b_0 b_1 X_i + b_1^2 X_i^2)

    \frac{\partial S(b_0, b_1)}{\partial b_0} = \sum_{i=1}^{n} (-2 Y_i + 2 b_0 + 2 b_1 X_i) = -2 \sum_{i=1}^{n} (Y_i - b_0 - b_1 X_i)

    \frac{\partial S(b_0, b_1)}{\partial b_1} = \sum_{i=1}^{n} (-2 Y_i X_i + 2 b_0 X_i + 2 b_1 X_i^2) = -2 \sum_{i=1}^{n} X_i (Y_i - b_0 - b_1 X_i)

Solving for the Intercept

Set the partial derivative with respect to b_0 to zero and solve:

    0 = -2 \sum_{i=1}^{n} (Y_i - b_0 - b_1 X_i)
    0 = \sum_{i=1}^{n} (Y_i - b_0 - b_1 X_i)
    0 = \sum_{i=1}^{n} Y_i - n b_0 - b_1 \sum_{i=1}^{n} X_i
    b_0 n = \sum_{i=1}^{n} Y_i - b_1 \sum_{i=1}^{n} X_i
    b_0 = \bar{Y} - b_1 \bar{X}

A Helpful Lemma on Deviations from Means

Lemmas are helper results that are often invoked repeatedly.

Lemma (deviations from the mean sum to zero):

    \sum_{i=1}^{n} (X_i - \bar{X}) = \sum_{i=1}^{n} X_i - n \bar{X}
                                   = \sum_{i=1}^{n} X_i - n \left( \sum_{i=1}^{n} X_i \right) / n
                                   = \sum_{i=1}^{n} X_i - \sum_{i=1}^{n} X_i
                                   = 0

Solving for the Slope

Set the partial derivative with respect to b_1 to zero, substitute in the solution for b_0, and use the lemma twice to add zero-valued terms:

    0 = -2 \sum_{i=1}^{n} X_i (Y_i - b_0 - b_1 X_i)
    0 = \sum_{i=1}^{n} X_i (Y_i - b_0 - b_1 X_i)
    0 = \sum_{i=1}^{n} X_i (Y_i - (\bar{Y} - b_1 \bar{X}) - b_1 X_i)                           (substitute in b_0)
    0 = \sum_{i=1}^{n} X_i (Y_i - \bar{Y} - b_1 (X_i - \bar{X}))
    0 = \sum_{i=1}^{n} X_i (Y_i - \bar{Y}) - b_1 \sum_{i=1}^{n} X_i (X_i - \bar{X})

    b_1 \sum_{i=1}^{n} X_i (X_i - \bar{X}) = \sum_{i=1}^{n} X_i (Y_i - \bar{Y}) - \bar{X} \sum_{i=1}^{n} (Y_i - \bar{Y})          (add 0: the last term vanishes by the lemma)

    b_1 \left( \sum_{i=1}^{n} X_i (X_i - \bar{X}) - \bar{X} \sum_{i=1}^{n} (X_i - \bar{X}) \right) = \sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y})          (add 0 again)

    b_1 \sum_{i=1}^{n} (X_i - \bar{X})(X_i - \bar{X}) = \sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y})

    b_1 = \frac{\sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y})}{\sum_{i=1}^{n} (X_i - \bar{X})^2}

The OLS Estimator

Now we're done! Here are the OLS estimators:

    \hat{\beta}_0 = \bar{Y} - \hat{\beta}_1 \bar{X}

    \hat{\beta}_1 = \frac{\sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y})}{\sum_{i=1}^{n} (X_i - \bar{X})^2}
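As a quick numerical check on the derivation, here is a minimal R sketch (an addition to the slides) that computes the closed-form estimates directly on the LifeCycleSavings data and compares them with lm(). The object names are ad hoc.

```r
x <- LifeCycleSavings$pop15
y <- LifeCycleSavings$sr

# Closed-form OLS estimates from the derivation above
beta1_hat <- sum((x - mean(x)) * (y - mean(y))) / sum((x - mean(x))^2)
beta0_hat <- mean(y) - beta1_hat * mean(x)

c(beta0_hat, beta1_hat)                        # 17.49660 -0.22302
coef(lm(sr ~ pop15, data = LifeCycleSavings))  # the same values

# The slope is also the ratio of the sample covariance to the sample variance
cov(x, y) / var(x)                             # equals beta1_hat
```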
Intuition of the OLS Estimator

The intercept equation tells us that the regression line goes through the point (\bar{X}, \bar{Y}):

    \bar{Y} = \hat{\beta}_0 + \hat{\beta}_1 \bar{X}

The slope of the regression line can be written as

    \hat{\beta}_1 = \frac{\sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y})}{\sum_{i=1}^{n} (X_i - \bar{X})^2} = \frac{\text{sample covariance between } X \text{ and } Y}{\text{sample variance of } X}

The higher the covariance between X and Y, the higher the slope will be. Negative covariances give negative slopes; positive covariances give positive slopes. If X_i doesn't vary, the denominator is zero and the slope is undefined. If Y_i doesn't vary, you get a flat line.

Mechanical Properties of OLS

Later we'll see that under certain assumptions OLS will have nice statistical properties. But some properties are purely mechanical, because they follow from the first order conditions of OLS.

1. The sample mean of the residuals is zero:

    \frac{1}{n} \sum_{i=1}^{n} \hat{u}_i = 0

2. The residuals are uncorrelated with the predictor (\widehat{Cov} denotes the sample covariance):

    \sum_{i=1}^{n} X_i \hat{u}_i = 0 \implies \widehat{Cov}(X_i, \hat{u}_i) = 0

3. The residuals are uncorrelated with the fitted values:

    \sum_{i=1}^{n} \hat{Y}_i \hat{u}_i = 0 \implies \widehat{Cov}(\hat{Y}_i, \hat{u}_i) = 0

OLS Slope as a Weighted Sum of the Outcomes

One useful derivation is to write the OLS estimator for the slope as a weighted sum of the outcomes,

    \hat{\beta}_1 = \sum_{i=1}^{n} W_i Y_i,

where the weights W_i are

    W_i = \frac{X_i - \bar{X}}{\sum_{i=1}^{n} (X_i - \bar{X})^2}.

This is important for two reasons.
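To make the mechanical properties and the weighted-sum representation concrete, here is a minimal R sketch (an addition to the slides, reusing the LifeCycleSavings example) that checks them numerically; the object names are ad hoc.

```r
fit   <- lm(sr ~ pop15, data = LifeCycleSavings)
x     <- LifeCycleSavings$pop15
y     <- LifeCycleSavings$sr
u_hat <- resid(fit)    # residuals
y_hat <- fitted(fit)   # fitted values

# Mechanical properties: each quantity is zero up to floating-point error
mean(u_hat)            # residuals have sample mean zero
cov(x, u_hat)          # residuals uncorrelated with the predictor
cov(y_hat, u_hat)      # residuals uncorrelated with the fitted values

# Weighted-sum representation of the slope
W <- (x - mean(x)) / sum((x - mean(x))^2)
sum(W * y)             # equals coef(fit)["pop15"], i.e. -0.22302
```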
