2.1.7 Polynomial Regression

In many practical situations, the simple straight line $y = \beta_0 + \beta_1 x$ is not appropriate. Instead, a model including powers of $x$, namely $x^2, x^3, \ldots, x^k$, should be considered. For example,

$$y = \beta_0 + \sum_{j=1}^{k} \beta_j x^j = \beta_0 + \beta_1 x + \cdots + \beta_k x^k.$$

The polynomial regression model

$$Y = \beta_0 + \beta_1 x + \cdots + \beta_k x^k + \varepsilon,$$

where $\varepsilon$ is a random error term as before, can be used to model data. Two immediate problems arise:

1. How to choose $k$.
2. How to carry out inference:
   • estimation
   • testing
   • prediction

We begin by addressing 2. The estimation of the parameters can again be carried out using least squares, provided that the model assumptions listed before are valid. Consider $k = 2$. We choose $\beta = (\beta_0, \beta_1, \beta_2)^T$ to minimize the sum of squared errors

$$\mathrm{SSE}(\beta) = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 = \sum_{i=1}^{n} (y_i - \beta_0 - \beta_1 x_i - \beta_2 x_i^2)^2,$$

that is, the fitted values for parameters $\beta$ are

$$\hat{y}_i = \beta_0 + \beta_1 x_i + \beta_2 x_i^2.$$

The estimate $\hat{\beta}$ that minimizes SSE can be found using calculus techniques (differentiating with respect to the elements of $\beta$), giving the minimum

$$\mathrm{SSE}(\hat{\beta}) = \sum_{i=1}^{n} (y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i - \hat{\beta}_2 x_i^2)^2.$$

We can also compute the estimated standard errors $s_{\hat{\beta}_0}, s_{\hat{\beta}_1}, s_{\hat{\beta}_2}$, which allow tests of the parameters to be carried out and confidence intervals to be calculated. We can also compute prediction intervals. The best estimate of the residual error variance $\sigma^2$ is

$$\hat{\sigma}^2 = \frac{1}{n - 3}\,\mathrm{SSE}(\hat{\beta}).$$

The number of parameters estimated, $p$, is equal to three, so we divide by $n - 3$.

We can also compute:

  • Residuals
    – can be used to assess the fit of the model;
    – should be patternless if the model fit is good.
  • $R^2$ and adjusted $R^2$ statistics
    – used to assess the global fit of the model;
    – used to compare the quality of fit with other models.

Example: Hooker pressure data. For the Hooker pressure data, a quadratic polynomial ($k = 2$) might be suitable:

$$Y = \beta_0 + \beta_1 x + \beta_2 x^2.$$

We need to estimate $\beta_0$, $\beta_1$ and $\beta_2$ for these data to see whether this model fits better than the straight line model we fitted previously. This can be achieved using SPSS. It transpires that the quadratic model produces a set of residuals that is patternless, whereas the fitted straight line model does not. See the Handout for full details.

Note: it is common to use the standardized residuals

$$\hat{z}_i = \frac{\hat{e}_i}{\hat{\sigma}} = \frac{y_i - \hat{y}_i}{\hat{\sigma}},$$

where $\hat{\sigma}^2$ is the estimate of $\sigma^2$ defined previously, since $\mathrm{Var}[\hat{z}_i] \approx 1$ if the model fit is good, whereas $\mathrm{Var}[\hat{e}_i] \approx \sigma^2$, which clearly depends on $\sigma$. This makes it hard to compare the $\hat{e}_i$ across different models when inspecting residuals.
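The least squares calculation above can be sketched in code. The following is a minimal illustration, assuming NumPy and a small made-up data set (the arrays x and y are placeholders, not the Hooker pressure data): it builds the design matrix with columns $1, x, x^2$, solves for $\hat{\beta}$, and computes $\mathrm{SSE}(\hat{\beta})$, $\hat{\sigma}^2 = \mathrm{SSE}(\hat{\beta})/(n-3)$, the estimated standard errors $s_{\hat{\beta}_j}$, and the standardized residuals.

```python
# Minimal sketch of least squares for the quadratic model
#   Y = beta0 + beta1*x + beta2*x^2 + eps
# The data below are made-up placeholders, not the Hooker pressure data.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
y = np.array([2.3, 4.1, 8.6, 14.2, 21.5, 31.9, 44.0, 58.4])

n = len(y)
X = np.column_stack([np.ones(n), x, x**2])      # design matrix: 1, x, x^2

# beta_hat minimizes SSE(beta) = sum_i (y_i - beta0 - beta1*x_i - beta2*x_i^2)^2
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

y_hat = X @ beta_hat                            # fitted values y_hat_i
sse = np.sum((y - y_hat) ** 2)                  # SSE(beta_hat)

p = X.shape[1]                                  # p = 3 parameters estimated
sigma2_hat = sse / (n - p)                      # sigma_hat^2 = SSE(beta_hat)/(n - 3)

# Estimated standard errors s_{beta_hat_j}: square roots of the diagonal of
# sigma_hat^2 * (X^T X)^{-1}
cov_beta = sigma2_hat * np.linalg.inv(X.T @ X)
se_beta = np.sqrt(np.diag(cov_beta))

# Standardized residuals z_hat_i = (y_i - y_hat_i) / sigma_hat, Var[z_hat_i] ~ 1
z = (y - y_hat) / np.sqrt(sigma2_hat)

print("beta_hat:", beta_hat)
print("standard errors:", se_beta)
print("sigma2_hat:", sigma2_hat)
```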
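Comparing the straight line ($k = 1$) and quadratic ($k = 2$) fits, as in the Hooker pressure example, comes down to looking at adjusted $R^2$ and at whether the standardized residuals are patternless. The sketch below uses the same placeholder data and a hypothetical helper fit_poly; the slides carry out the equivalent comparison in SPSS.

```python
# Sketch of comparing the straight-line (k = 1) and quadratic (k = 2) fits
# via adjusted R^2 and standardized residuals. fit_poly is a hypothetical helper.
import numpy as np

def fit_poly(x, y, k):
    """Least squares fit of y = beta0 + beta1*x + ... + betak*x^k."""
    n = len(y)
    X = np.column_stack([x**j for j in range(k + 1)])    # columns 1, x, ..., x^k
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    y_hat = X @ beta_hat
    sse = np.sum((y - y_hat) ** 2)                       # SSE(beta_hat)
    sst = np.sum((y - y.mean()) ** 2)                    # total sum of squares
    p = k + 1                                            # parameters estimated
    sigma2_hat = sse / (n - p)
    r2 = 1 - sse / sst
    adj_r2 = 1 - (sse / (n - p)) / (sst / (n - 1))
    z = (y - y_hat) / np.sqrt(sigma2_hat)                # standardized residuals
    return beta_hat, r2, adj_r2, z

# Placeholder data again, not the Hooker pressure data.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
y = np.array([2.3, 4.1, 8.6, 14.2, 21.5, 31.9, 44.0, 58.4])

for k in (1, 2):
    beta_hat, r2, adj_r2, z = fit_poly(x, y, k)
    print(f"k={k}: R^2={r2:.4f}, adjusted R^2={adj_r2:.4f}")
    # A plot of z against x (or against the fitted values) should be
    # patternless if the degree-k polynomial fits well.
```

Because the standardized residuals have variance roughly 1 whatever the model, they can be compared directly across the two fits, which is exactly the point made in the note above.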
