
Local regression I
Patrick Breheny
STA 621: Nonparametric Statistics
November 1

Simple local models

In the previous lecture, we examined a very simple local model: k-nearest neighbors. Another simple local regression model is the local average:
\[
\hat{f}(x_0) = \frac{\sum_i y_i \, I(|x_i - x_0| < h)}{\sum_i I(|x_i - x_0| < h)}
\]
However, both of these approaches have the disadvantage that they lead to discontinuous estimates: as an observation enters or leaves the neighborhood, the estimate changes abruptly.

The Nadaraya-Watson kernel estimator

As with kernel density estimators, we can eliminate this problem by introducing a continuous kernel, which allows observations to enter and exit the model smoothly. Generalizing the local average, we obtain the following estimator, known as the Nadaraya-Watson kernel estimator:
\[
\hat{f}(x_0) = \frac{\sum_i y_i \, K_h(x_i, x_0)}{\sum_i K_h(x_i, x_0)},
\]
where $K_h(x_i, x_0) = K\!\left(\frac{x_i - x_0}{h}\right)$ is the kernel; if $K$ is continuous, then so is $\hat{f}$. As with kernel density estimates, we need to choose a bandwidth $h$, which controls the degree of smoothing.

Expected loss and prediction error for regression

Because it is customary to treat $x$ as fixed in regression, instead of integrating over $x$ to obtain the expected loss, we average over the observed values of $x$:
\[
E\, L(f, \hat{f}) = \frac{1}{n} \sum_i E\, L\big(f(x_i), \hat{f}(x_i)\big)
\]
The expected squared error loss is particularly convenient in regression, as it is directly related to the expected prediction error:
\[
\mathrm{EPE} = E\left\{ \frac{1}{n} \sum_i \big(Y_i - \hat{f}(x_i)\big)^2 \right\},
\]
where $Y_i$ and $\hat{f}$ are independent (the $Y_i$ are new observations at the design points, not the responses used to fit $\hat{f}$).

Bias-variance decomposition of EPE

Theorem: At a given point $x_0$,
\[
\mathrm{EPE} = \sigma^2 + \mathrm{Bias}^2(\hat{f}) + \mathrm{Var}(\hat{f}),
\]
where $\sigma^2$ is the variance of $Y \mid x_0$. Thus, expected prediction error consists of three parts:
- Irreducible error: this is beyond our control and would remain even if we were able to estimate $f$ perfectly
- Bias (squared): the difference between $E\{\hat{f}(x_0)\}$ and the true value $f(x_0)$
- Variance: the variance of the estimate $\hat{f}(x_0)$

Relationship between expected loss and EPE

Furthermore,
\[
\mathrm{EPE} = E\, L(f, \hat{f}) + \sigma^2.
\]
Thus, the expected prediction error and the expected loss are equal up to a constant. This is attractive because prediction error is easy to evaluate via cross-validation.

Cross-validation

Specifically, we can estimate the expected prediction error with
\[
\mathrm{CV} = \frac{1}{n} \sum_i \big\{ y_i - \hat{f}_{(-i)}(x_i) \big\}^2,
\]
where $\hat{f}_{(-i)}$ is the estimate of $f$ obtained by omitting the pair $\{x_i, y_i\}$. Furthermore, as we will see, one can obtain a closed-form expression for the leave-one-out cross-validation score above for any "linear smoother", without actually refitting the model $n$ times.
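To make these formulas concrete, here is a minimal Python/NumPy sketch (not part of the original slides; the function names and the choice of a Gaussian kernel are illustrative assumptions). It computes the Nadaraya-Watson estimate and the leave-one-out CV score by brute-force refitting, i.e., without the closed-form shortcut alluded to above:

```python
import numpy as np

def nadaraya_watson(x0, x, y, h):
    """Kernel-weighted average of the y_i at the target point x0."""
    # Gaussian kernel K((x_i - x0)/h); the normalizing constant cancels in the ratio
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

def loo_cv(x, y, h):
    """Leave-one-out CV score, computed naively by refitting n times."""
    n = len(x)
    sse = 0.0
    for i in range(n):
        keep = np.arange(n) != i  # drop the pair (x_i, y_i) before predicting at x_i
        sse += (y[i] - nadaraya_watson(x[i], x[keep], y[keep], h)) ** 2
    return sse / n
```

Choosing the bandwidth then amounts to evaluating loo_cv over a grid of candidate values of h and picking the minimizer, as in the example that follows.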
Bone mineral density data

As an example of a real data set with an interesting change in $E(y \mid x)$ as a function of $x$, we will look at a study of changes in bone mineral density in adolescents. The outcome is the difference in spinal bone mineral density, taken on two consecutive visits, divided by the average of the two measurements. Age is the average age over the two visits. A person's bone mineral density generally increases until the individual is done growing, then remains relatively constant until old age.

Cross-validation to choose bandwidth

[Figure: leave-one-out CV score (vertical axis, roughly 1.74 to 1.82) plotted against bandwidth (horizontal axis, 1 to 6).]

Estimates of the regression function

[Figure: three panels showing the relative change in spinal BMD versus age (10 to 25), with Nadaraya-Watson fits for h = 1, h = 2.75, and h = 10.]

The problem with kernel weighted averages

Unfortunately, the Nadaraya-Watson kernel estimator suffers from bias, both at the boundaries and in the interior when the $x_i$'s are not uniformly distributed:

[Figure: scatterplot of simulated data, y versus x on the range 0 to 100, illustrating this bias.]
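The boundary bias is easy to reproduce in a small simulation (again a sketch, not from the slides; the data-generating model is an illustrative assumption, and the Nadaraya-Watson function is repeated from the earlier sketch). When the true regression function is increasing, points near the left boundary have neighbors only on their right, so the kernel-weighted average there is pulled upward, and symmetrically downward at the right boundary:

```python
import numpy as np

def nadaraya_watson(x0, x, y, h):
    # as in the earlier sketch: Gaussian-kernel-weighted average
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 100, 100))
y = x + rng.normal(0, 5, size=100)   # true regression function: f(x) = x

h = 10.0
for x0 in (0.0, 50.0, 100.0):
    print(x0, nadaraya_watson(x0, x, y, h))
# Near x = 0 the estimate sits well above f(0) = 0, and near x = 100 well
# below f(100) = 100, because the kernel neighborhood is one-sided there;
# in the interior (x = 50), the estimate is approximately unbiased.
```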
Loess

This bias arises from the asymmetry of the kernel in these regions. However, we can (up to first order) eliminate this problem by fitting straight lines locally, instead of constants. In locally weighted regression, also known as lowess or loess, we solve a separate weighted least squares problem at each target point $x_0$:
\[
(\hat{\alpha}, \hat{\beta}) = \operatorname*{arg\,min}_{\alpha, \beta} \sum_i K_h(x_0, x_i) (y_i - \alpha - x_i \beta)^2
\]
The estimate is then $\hat{f}(x_0) = \hat{\alpha} + x_0 \hat{\beta}$.

Loess is a linear smoother

Let $X$ denote the $n \times 2$ matrix with $i$th row $(1, x_i - x_0)$, and $W$ denote the $n \times n$ diagonal matrix with $i$th diagonal element $w_i(x_0) = K_h(x_0, x_i)$. Then
\[
\hat{f}(x_0) = e_1' (X'WX)^{-1} X'W y = \sum_i l_i(x_0) y_i,
\]
where $e_1' = (1, 0) = (1, x_0 - x_0)$, and it is important to keep in mind that both $X$ and $W$ depend implicitly on $x_0$. Thus, our estimate is a linear combination of the $y_i$'s; such estimates of $f$ are called linear smoothers.

Some facts about the linear weights

Homework: Show that the linear weights $\{l_i(x_0)\}$ defined on the previous slide satisfy
(a) $\sum_i l_i(x_0) = 1$ for all $x_0$
(b) $\sum_i l_i(x_0)(x_i - x_0) = 0$ for all $x_0$
(c) If $K(x_i, x_0) = 0$, then $l_i(x_0) = 0$
(Note that property (a) ensures that the estimate preserves constant curves.) A numerical check of (a) and (b) appears in the sketch after the next slide.

Effective kernels

The loess approach is similar to the Nadaraya-Watson approach in that both take linear combinations of the responses $\{y_i\}$. In loess, however, the weights $\{l_i(x_0)\}$ are constructed by combining both kernel weighting and least squares operations, forming what is sometimes called an effective kernel or equivalent kernel. Before the development of loess, a fair amount of research focused on deriving adaptive modifications to kernels in order to alleviate the bias that we previously discussed. However, local linear regression automatically modifies the kernel in such a way that this bias is largely eliminated, a phenomenon known as automatic kernel carpentry.
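As a companion to the linear-smoother formulation, here is a sketch (Python/NumPy, with illustrative names and a Gaussian kernel assumed, as before) that computes the weights $l_i(x_0)$ directly from $e_1'(X'WX)^{-1}X'W$ and checks properties (a) and (b) numerically:

```python
import numpy as np

def local_linear_weights(x0, x, h):
    """Equivalent-kernel weights l_i(x0), so that f_hat(x0) = sum_i l_i(x0) * y_i."""
    K = np.exp(-0.5 * ((x - x0) / h) ** 2)          # w_i(x0) = K_h(x0, x_i)
    X = np.column_stack([np.ones_like(x), x - x0])  # ith row: (1, x_i - x0)
    XtW = X.T * K                                   # X'W, without forming the diagonal matrix
    return np.linalg.solve(XtW @ X, XtW)[0]         # first row of (X'WX)^{-1} X'W, i.e. e1'(X'WX)^{-1}X'W

x = np.sort(np.random.default_rng(0).uniform(0, 10, 50))
l = local_linear_weights(2.0, x, h=1.0)
print(np.sum(l))              # property (a): prints 1.0 (up to rounding)
print(np.sum(l * (x - 2.0)))  # property (b): prints ~0
```

Both properties follow from the identity $l'X = e_1'(X'WX)^{-1}(X'WX) = e_1'$, whose two components are exactly (a) and (b).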