Heteroskedasticity in Regression

Paul E. Johnson
Department of Political Science and Center for Research Methods and Data Analysis, University of Kansas
2020

Outline

1. Introduction
2. Fix #1: Robust Standard Errors
3. Weighted Least Squares (Combine Subsets of a Sample; Random coefficient model; Aggregate Data)
4. Testing for heteroskedasticity (Categorical Heteroskedasticity; Checking for Continuous Heteroskedasticity; Toward a General Test for Heteroskedasticity)
5. Appendix: Robust Variance Estimator Derivation
6. Practice Problems

Introduction

Remember the basic OLS theory behind the linear model:

    y_i = \beta_0 + \beta_1 x_{1i} + e_i

For the error term we assumed, for all i:

    E(e_i) = 0    (errors are "symmetric" above and below)
    Var(e_i) = E[(e_i - E(e_i))^2] = \sigma^2    (homoskedasticity: same variance)

Heteroskedasticity means the assumption of homogeneous variance is violated.

Homoskedasticity means

    Var(e) = \begin{bmatrix} \sigma_e^2 & 0 & \cdots & 0 \\ 0 & \sigma_e^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_e^2 \end{bmatrix}

Heteroskedasticity is depicted in one of these ways:

    Var(e) = \begin{bmatrix} \sigma_e^2 w_1 & & & 0 \\ & \sigma_e^2 w_2 & & \\ & & \ddots & \\ 0 & & & \sigma_e^2 w_N \end{bmatrix}
    or
    Var(e) = \begin{bmatrix} \frac{\sigma_e^2}{w_1} & & & 0 \\ & \frac{\sigma_e^2}{w_2} & & \\ & & \ddots & \\ 0 & & & \frac{\sigma_e^2}{w_N} \end{bmatrix}

I get confused a lot when comparing textbooks because of this problem!

Consequence 1 of heteroskedasticity: \hat\beta^{OLS} is still unbiased and consistent

The OLS estimates of \beta_0 and \beta_1 are still unbiased and consistent:
- Unbiased: E[\hat\beta^{OLS}] = \beta
- Consistent: as N \to \infty, \hat\beta^{OLS} tends to \beta in probability.

If the predictive line was "right" before, it is still right now. However, these are incorrect:
- the standard error of \hat\beta
- the RMSE
- confidence / prediction intervals

Proof: \hat\beta^{OLS} still unbiased

Begin with "mean-centered" data.
With one variable, OLS gives

    \hat\beta_1 = \frac{\sum x_i y_i}{\sum x_i^2}
                = \frac{\sum x_i (\beta_1 x_i + e_i)}{\sum x_i^2}
                = \frac{\beta_1 \sum x_i^2 + \sum x_i e_i}{\sum x_i^2}
                = \beta_1 + \frac{\sum x_i e_i}{\sum x_i^2}

Apply the expected value operator to both sides:

    E[\hat\beta_1] = \beta_1 + E\left[\frac{\sum x_i e_i}{\sum x_i^2}\right]
                   = \beta_1 + \frac{\sum E[x_i e_i]}{\sum x_i^2}

Assume x_i is uncorrelated with e_i, so that E[x_i e_i] = 0, and the work is done:

    E(\hat\beta_1) = \beta_1

Consequence 2: the OLS formula for \widehat{Var}(\hat\beta) is wrong

1. The usual formula used to estimate Var(\hat\beta), namely \widehat{Var}(\hat\beta), is wrong, and so is its square root, std.err.(\hat\beta).
2. Thus t-tests are wrong (too big).

Proof: OLS \widehat{Var}(\hat\beta) is wrong

The variance of the OLS slope estimator, Var(\hat\beta_1), in mean-centered (deviations) form:

    Var(\hat\beta_1) = Var\left[\frac{\sum x_i e_i}{\sum x_i^2}\right]
                     = \frac{Var[\sum x_i e_i]}{(\sum x_i^2)^2}
                     = \frac{\sum Var(x_i e_i)}{(\sum x_i^2)^2}
                     = \frac{\sum x_i^2 Var(e_i)}{(\sum x_i^2)^2}    (1)

If we assume all the Var(e_i) are equal, we can put in the MSE as an estimate:

    \widehat{Var}(\hat\beta_1) = \frac{MSE}{\sum x_i^2}    (2)

Instead, suppose the "true" variance is

    Var(e_i) = s^2 + s_i^2    (3)

(a common minimum variance s^2 plus an additional individualized variance s_i^2). Plug this into (1):

    \frac{\sum x_i^2 (s^2 + s_i^2)}{(\sum x_i^2)^2} = \frac{s^2}{\sum x_i^2} + \frac{\sum x_i^2 s_i^2}{(\sum x_i^2)^2}    (4)

The first term is roughly what OLS would calculate for the variance of \hat\beta_1. The second term is the additional "true variance" in \hat\beta_1 that the OLS formula \widehat{Var}(\hat\beta_1) does not include.

Consequence 3: \hat\beta^{OLS} is inefficient

1. \hat\beta^{OLS} is inefficient: it has higher variance than the "weighted" estimator.
2. Note that to prove an estimator is inefficient, it is necessary to provide an alternative estimator that has lower variance.
3. That alternative is the Weighted Least Squares estimator, \hat\beta_1^{WLS}.
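The unbiasedness claim is easy to check numerically. A minimal Monte Carlo sketch (not from the slides; the data-generating process and all numbers are made up for illustration): even when Var(e_i) differs across observations, the mean-centered OLS slope averages out to the true beta_1.

```python
import numpy as np

rng = np.random.default_rng(0)
beta0, beta1 = 3.0, 0.25
n, reps = 200, 2000
x = rng.uniform(20, 80, size=n)
xc = x - x.mean()                      # mean-centered x, as in the proof

slopes = np.empty(reps)
for r in range(reps):
    # heteroskedastic errors: the s.d. grows with x, so Var(e_i) differs by i
    e = rng.normal(0.0, 0.05 * (x - x.min()) * 10.0, size=n)
    y = beta0 + beta1 * x + e
    yc = y - y.mean()
    # mean-centered OLS slope: sum(x_i y_i) / sum(x_i^2)
    slopes[r] = (xc @ yc) / (xc @ xc)

print(round(slopes.mean(), 3))  # close to the true beta1 = 0.25
```

Individual estimates bounce around, but their average sits at the true slope, which is what unbiasedness says.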
4. The sum of squares to be minimized now includes a weight for each case:

    SS(\hat\beta_0, \hat\beta_1) = \sum_{i=1}^{N} W_i (y_i - \hat y_i)^2    (5)

5. The weights are chosen to "undo" the heteroskedasticity:

    W_i = 1 / Var(e_i)    (6)

Covariance matrix of error terms

This thing is a weighting matrix:

    \begin{bmatrix} \frac{1}{Var(e_1)} & & & 0 \\ & \frac{1}{Var(e_2)} & & \\ & & \ddots & \\ 0 & & & \frac{1}{Var(e_N)} \end{bmatrix}

It is usually simplified in various ways. Factor out a common parameter, so that each individual's error variance is proportional to it:

    \frac{1}{\sigma_e^2} \begin{bmatrix} \frac{1}{w_1} & & & 0 \\ & \frac{1}{w_2} & & \\ & & \ddots & \\ 0 & & & \frac{1}{w_N} \end{bmatrix}

Example: suppose the variance is proportional to x_i

The "truth" is

    y_i = 3 + 0.25 x_i + e_i    (7)

Homoskedastic version:

    Std.Dev.(e_i) = \sigma_e = 10    (8)

Heteroskedastic version:

    Std.Dev.(e_i) = 0.05 (x_i - \min(x_i)) \sigma_e    (9)

(Figures: scatterplots comparing the fitted lines from 1000 fits under heteroskedastic vs. homoskedastic errors, and histograms of the slope estimates with kernel density lines for the two cases.)

So, the big message is: heteroskedasticity inflates the amount of uncertainty in the estimates and distorts t-ratios.
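The comparison shown in the figures can be reproduced in a few lines. The sketch below (my own setup mirroring equations (7)-(9), not the author's code) draws repeated samples under both error processes and compares the spread of the slope estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, sigma_e = 100, 2000, 10.0
x = rng.uniform(20, 80, size=n)
xc = x - x.mean()

def sim_slopes(sd):
    """Fit the mean-centered OLS slope on `reps` fresh samples y = 3 + 0.25x + e."""
    out = np.empty(reps)
    for r in range(reps):
        y = 3.0 + 0.25 * x + rng.normal(0.0, sd, size=n)
        out[r] = (xc @ (y - y.mean())) / (xc @ xc)
    return out

homo = sim_slopes(sigma_e)                           # eq. (8): constant s.d.
hetero = sim_slopes(0.05 * (x - x.min()) * sigma_e)  # eq. (9): s.d. grows with x

# The heteroskedastic slope estimates are more dispersed
print(homo.std(), hetero.std())
```

The wider spread of the heteroskedastic histogram is exactly the extra "true variance" that the usual OLS formula ignores.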
Fix #1: Robust Standard Errors

Robust estimate of the variance of \hat\beta

Replace the OLS formula for \widehat{Var}(\hat\beta) with a more "robust" version:
- Robust "heteroskedasticity-consistent" variance estimators rest on weaker assumptions.
- They have no known small-sample properties, but they are consistent / asymptotically valid.
- Note: this does not "fix" \hat\beta^{OLS}. It just gives us more accurate t-ratios by correcting std.err.(\hat\beta).

Robust std.err. in a nutshell

Recall the variance-covariance matrix of the errors assumed by OLS:

    Var(e) = E(e e' | X) = \begin{bmatrix} \sigma_e^2 & & & 0 \\ & \sigma_e^2 & & \\ & & \ddots & \\ 0 & & & \sigma_e^2 \end{bmatrix}    (10)

If there is heteroskedasticity, we have to allow possibilities like this:

    Var(e) = E[e e' | X] = \begin{bmatrix} \sigma_1^2 & & & 0 \\ & \sigma_2^2 & & \\ & & \ddots & \\ 0 & & & \sigma_N^2 \end{bmatrix}

Robust std.err.: use variance estimates

Fill in estimates for the case-specific error variances:

    \widehat{Var}(e) = \begin{bmatrix} \hat\sigma_1^2 & & & 0 \\ & \hat\sigma_2^2 & & \\ & & \ddots & \\ 0 & & & \hat\sigma_N^2 \end{bmatrix}

Embed those estimates into the larger formula that is used to calculate the robust standard errors.

The famous paper: White, Halbert (1980). "A Heteroskedasticity-Consistent Covariance Matrix Estimator and a Direct Test for Heteroskedasticity." Econometrica, 48(4), 817-838.
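In matrix form, White's estimator is the "sandwich" (X'X)^{-1} X' diag(e_hat_i^2) X (X'X)^{-1}, of which the one-variable formula below (equation (13)) is a special case. A minimal sketch on simulated data (the data-generating process is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = rng.uniform(20, 80, size=n)
X = np.column_stack([np.ones(n), x])               # intercept + one regressor
y = 3.0 + 0.25 * x + rng.normal(0.0, 0.05 * (x - x.min()) * 10.0, size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat

# Classical OLS covariance: MSE * (X'X)^{-1}
mse = resid @ resid / (n - 2)
vcov_ols = mse * XtX_inv

# White / HC0 sandwich: squared residuals stand in for the unknown variances
meat = X.T @ (resid[:, None] ** 2 * X)             # X' diag(e_hat^2) X
vcov_hc0 = XtX_inv @ meat @ XtX_inv

# Two competing estimates of std.err.(beta1)
print(np.sqrt(vcov_ols[1, 1]), np.sqrt(vcov_hc0[1, 1]))
```

With variance rising in x, the robust standard error for the slope is typically larger than the classical one, reflecting the extra uncertainty the OLS formula misses.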
The robust estimator was originally proposed by Huber (1967), but was largely forgotten.

Derivation: calculating the robust estimator of Var(\hat\beta)

The true variance of the OLS estimator is

    Var(\hat\beta_1) = \frac{\sum x_i^2 Var(e_i)}{(\sum x_i^2)^2}    (11)

Assuming homoskedasticity, estimate \sigma_e^2 with the MSE:

    \widehat{Var}(\hat\beta_1) = \frac{MSE}{\sum x_i^2},  and the square root of that is std.err.(\hat\beta_1)    (12)

The robust versions replace Var(e_i) with other estimates. White's suggestion was

    Robust \widehat{Var}(\hat\beta_1) = \frac{\sum x_i^2 \hat e_i^2}{(\sum x_i^2)^2}    (13)

where \hat e_i^2, the squared residual, is used in place of the unknown error variance.

Weighted Least Squares

WLS is efficient

Change the OLS criterion

    SS(\hat\beta) = \sum_{i=1}^{N} (y_i - \hat y_i)^2

to the WLS criterion

    minimize SS(\hat\beta) = \sum_{i=1}^{N} W_i (y_i - \hat y_i)^2

In practice, the weights are guesses about the standard deviation of the error term:

    W_i = \frac{1}{\sigma_i}    (14)

Feasible Weighted Least Squares

The analysis proceeds in two steps:
1. A regression is estimated to gather information about the error variance.
2. That information is used to fill in the weight matrix, and WLS is fit.

One may revise the weights and re-fit the WLS repeatedly until convergence.
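The two-step feasible WLS recipe can be sketched as follows. The variance model in step 1 (regressing log squared residuals on x) is one common choice and an assumption on my part; the slides do not prescribe a particular variance model.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
x = rng.uniform(20, 80, size=n)
X = np.column_stack([np.ones(n), x])
y = 3.0 + 0.25 * x + rng.normal(0.0, 0.5 * (x - 19.0), size=n)

# Step 1: OLS, then model the error variance from the residuals.
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
e2 = (y - X @ b_ols) ** 2
g = np.linalg.lstsq(X, np.log(e2), rcond=None)[0]  # log(e_i^2) ~ a + b*x_i
sigma2_hat = np.exp(X @ g)                         # fitted Var(e_i)

# Step 2: WLS, implemented by rescaling each row by 1/sigma_hat_i.
w = 1.0 / np.sqrt(sigma2_hat)
b_wls = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)[0]

print(b_ols[1], b_wls[1])  # both estimate the true slope 0.25
```

Iterating the two steps (re-estimating the variance model from the WLS residuals) gives the "repeat until convergence" variant mentioned above.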
Combine Subsets of a Sample

Data from categorical groups. Suppose you separately investigate the data for men and women:

    men:    y_i = \beta_0 + \beta_1 x_i + e_i    (15)
    women:  y_i = c_0 + c_1 x_i + u_i    (16)

Then you wonder, "can I combine the data for men and women to estimate one model?"

    humans: y_i = \beta_0 + \beta_1 x_i + \beta_2 sex_i + \beta_3 sex_i x_i + e_i    (17)

This "manages" the differences of intercept and slope between men and women by adding the coefficients \beta_2 and \beta_3.
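Equation (17) amounts to a single OLS fit with a group dummy (which shifts the intercept) and a dummy-by-x interaction (which shifts the slope). A sketch with made-up group-specific parameters, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300
sex = rng.integers(0, 2, size=n)        # 0 = men, 1 = women (hypothetical coding)
x = rng.uniform(0, 10, size=n)
# men:   y = 1.0 + 2.0 x + e ;  women: y = 3.0 + 0.5 x + u  (invented values)
y = np.where(sex == 0, 1.0 + 2.0 * x, 3.0 + 0.5 * x) + rng.normal(0.0, 1.0, n)

# Columns: intercept, x, sex dummy, sex*x interaction -- as in eq. (17)
X = np.column_stack([np.ones(n), x, sex, sex * x])
b = np.linalg.lstsq(X, y, rcond=None)[0]

# b[2] estimates the intercept difference (c0 - beta0),
# b[3] estimates the slope difference (c1 - beta1)
print(np.round(b, 2))
```

Note that pooling this way forces a single error variance on both groups; if men and women have different error variances, that is exactly the "categorical heteroskedasticity" the later section tests for.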