
Robust goodness-of-fit tests of the classical linear regression model∗

Oleksandr Movshuk
Assistant Research Professor, ICSEAD, and Visiting Assistant Professor, Graduate School of Economics, Kyushu University
International Centre for the Study of East Asian Development
11-4 Otemachi, Kokurakita, Kitakyushu, 803-0814, Japan (email: [email protected])

Working Paper Series Vol. 2002-01, March 2002

The views expressed in this publication are those of the author(s) and do not necessarily reflect those of the Institute. No part of this book may be used or reproduced in any manner whatsoever without written permission except in the case of brief quotations embodied in articles and reviews. For information, please write to the Centre: The International Centre for the Study of East Asian Development, Kitakyushu.

ABSTRACT

This paper develops two goodness-of-fit tests that verify the joint null hypothesis that regression disturbances have zero mean and constant variance, and are generated from the normal (Gaussian) distribution. The tests first use a high-breakdown regression estimator to identify a subset of regular observations, and then calculate standardized and studentized prediction residuals, from which the final test statistics are derived. A Monte Carlo study demonstrates that the first test is particularly sensitive to a small number of regression outliers with non-zero mean or unusually large variance, and in general to regression misspecifications that produce regression disturbances with longer tails than could be justified by the normality assumption. In contrast, the second test detects a substantial number of regression outliers, specifications with incorrect functional forms, omissions of relevant variables, and short tails in the distribution of the error term.
While most specification tests are designed against a particular alternative, the joint application of the proposed tests has high power to detect a wide range of breakdowns of the linear regression model. The omnibus property of the suggested tests makes redundant the current practice of running a battery of separate specification tests.

∗ The financial support from ICSEAD and Kyushu University is gratefully acknowledged.

1. INTRODUCTION

Although the classical regression model Y = Xβ + ε postulates that the error term ε has zero mean, constant variance σ², and a normal distribution for all its elements, conventional econometric measures of goodness of fit do not directly examine these distributional assumptions. For example, the traditional measure of goodness of fit, the R² statistic, compares the variation of estimated OLS residuals with the variation of the dependent variable Y. However, this ratio does not verify any distributional properties of ε. As a result, researchers who examine the goodness of fit of their models by R² may fail to notice substantial deviations from the assumed distribution of ε.

Distributional assumptions about ε can also be verified by univariate tests for normality, such as the Jarque-Bera, Shapiro-Wilk, or Shapiro-Francia tests. Since the vector of regression disturbances ε is not observable, the univariate tests for normality are usually applied to available estimates of ε, most often to the OLS residuals ε̂ = (I − V)ε, where V is the projection matrix X(X′X)⁻¹X′. The distribution of ε̂ does converge to that of ε asymptotically (Theil, 1971, pp. 378-379), but in finite samples their correspondence may be poor.¹ In particular, if some rows of X are unusually large, then the corresponding diagonal elements of the matrix V (denoted hereafter by vii) can also become large, producing so-called 'high-leverage points'.
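To make the leverage values vii concrete, the following sketch (my own illustration, not from the paper; names are arbitrary) computes the diagonal of the projection matrix V for a design with one extreme row, assuming NumPy is available. A leverage value near one leaves almost no variance for the corresponding OLS residual.

```python
import numpy as np

# Illustrative sketch: one extreme design point produces a large
# leverage value v_ii, shrinking the variance of its OLS residual,
# which equals sigma^2 * (1 - v_ii).
rng = np.random.default_rng(0)
n = 30
x = rng.normal(size=n)
x[0] = 15.0                              # one high-leverage point
X = np.column_stack([np.ones(n), x])

# Diagonal of the projection ("hat") matrix V = X (X'X)^{-1} X'
V = X @ np.linalg.inv(X.T @ X) @ X.T
v = np.diag(V)

print(f"leverage of the outlying point: {v[0]:.3f}")
print(f"median leverage elsewhere:      {np.median(v):.3f}")
print(f"residual variance factor (1 - v_11): {1 - v[0]:.3f}")
```

Note that the leverage values always sum to k (here 2), so a single value near one necessarily dominates the rest.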
Since the variance of the iᵗʰ element of the OLS residual vector ε̂ is σ²(1 − vii), high-leverage points may substantially reduce the variance of OLS residuals.² In general, the projection matrix V can modify OLS residuals so much that the distribution of ε̂ᵢ may have little in common with the distribution of εᵢ. This may greatly diminish the power of univariate tests of normality. For example, if deviations from the assumed normality of ε are located at high-leverage points, univariate tests for normality may fail to spot these deviations in the estimated ε̂ᵢ.³

¹ This is because the distribution of ε̂ depends on the configuration of the matrix V.
² The drop in variance is especially conspicuous when a dummy variable for the iᵗʰ observation is used. In this case vii = 1, so that the variance, as well as the value of ε̂ᵢ, becomes zero even though the corresponding regression disturbance εᵢ may have unusually large variance, or non-zero mean.

Another drawback of univariate tests for normality is that they usually examine the null hypothesis of a normal distribution without reference to any specific parametric values for the mean and variance of ε. In contrast, the full ideal conditions of the linear regression model explicitly postulate that each element of the vector ε has zero mean. Thus, most of the reported normality tests are in fact examining a broader null hypothesis than the one postulated by the linear regression model.

In this paper I introduce two goodness-of-fit tests of the linear regression model that avoid these pitfalls of the mechanical application of univariate goodness-of-fit tests to OLS residuals. A Monte Carlo study demonstrates a high degree of complementarity in the power of these tests. In particular, the first test is sensitive to a small number of regression outliers, defined as observations with non-zero mean or unusually large variance of ε.
More generally, the first test is sensitive to regression misspecifications that result in ε with longer tails than could be justified by the normality assumption. In contrast, the second test has good power to detect a large number of regression outliers, specifications with incorrect functional forms, omissions of relevant variables, and short tails in the distribution of ε. While the vast majority of specification tests in econometrics are designed against a particular alternative,⁴ the joint application of the proposed tests has high power to detect most major breakdowns of the classical linear regression model.⁵ The omnibus property of the suggested tests makes redundant the current practice of running a battery of separate specification tests.

³ This was demonstrated by Weisberg (1980), who compared the power of the univariate Shapiro-Wilk test with respect to the unobservable ε and to the corresponding regression residuals ε̂ resulting from different configurations of the matrix V. When, for example, ε had a log-normal distribution, the Shapiro-Wilk test easily detected the non-normality of ε (the power was 99 per cent), but depending on V, the test's power with OLS residuals varied from 41 to 91 per cent.
⁴ Even including a few 'general' tests that are ostensibly designed to guard against unspecified violations of the full ideal conditions, as demonstrated by Thursby (1989).
⁵ With the exception of the independence of ε and the full rank of X.

The paper is organized as follows. Section 2 describes the test statistics. Their non-standard distribution under the null is discussed in section 3, followed by an illustrative example in section 4. Section 5 reports the power of the suggested and conventional tests in detecting major violations of the linear regression model, and section 6 offers conclusions.
2. TEST ALGORITHM

Consider the standard regression model y = Xβ + ε, where y is an (n × 1) vector of observations on a dependent variable, X is an (n × k) matrix of n observations on k independent variables (including the intercept), β is a (k × 1) vector of unknown regression coefficients, and ε is an (n × 1) vector of unobservable regression disturbances, assumed to be i.i.d. N(0, σ²Iₙ). The matrix X is assumed to be fixed and of full rank.

To validate the distributional properties of ε, the test algorithm uses the sequence of recursive residuals wᵢ (i = k+1, ..., n):

    wᵢ = (yᵢ − x′ᵢ β̂ᵢ₋₁) / √(1 + x′ᵢ (X′ᵢ₋₁ Xᵢ₋₁)⁻¹ xᵢ)                (1)

where β̂ᵢ₋₁ = (X′ᵢ₋₁ Xᵢ₋₁)⁻¹ X′ᵢ₋₁ Yᵢ₋₁ is the OLS estimate of β, calculated from the preceding i − 1 observations. As shown by Brown et al. (1975), if the null hypothesis ε ∼ N(0, σ²) holds, then w ∼ N(0, σ²). In other words, if the regression disturbances ε have zero mean and constant variance σ² and are normally distributed, then the estimated recursive residuals w have the same properties.

The values of recursive residuals depend on the data ordering, so that even a minor data permutation can produce a completely different set of recursive residuals. In particular, the exact normality of recursive residuals holds only if they are calculated on randomly-ordered observations, or on observations ordered by any variable that is statistically independent of wᵢ (such as any independent variable in X, or the predicted values Xβ̂). If, in contrast, the data ordering is determined by the analyzed data (i.e., determined endogenously), the normality of w is only approximate.

A. ENDOGENOUS DATA ORDERING

Starting with Brown et al. (1975), most applications of recursive residuals have assumed that the ordering of observations is either fixed or known a priori (with frequent references to a 'natural' order, such as time or some independent variable).
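The recursive residuals of equation (1), and their dependence on the data ordering, can be sketched as follows. This is an illustrative implementation of the Brown et al. (1975) formula, assuming NumPy; the function and variable names are my own, not the paper's.

```python
import numpy as np

def recursive_residuals(X, y):
    """Recursive residuals w_i, equation (1):
    w_i = (y_i - x_i' b_{i-1}) / sqrt(1 + x_i' (X_{i-1}' X_{i-1})^{-1} x_i),
    where b_{i-1} is the OLS estimate from the preceding observations."""
    n, k = X.shape
    w = []
    for i in range(k, n):                 # 0-based: rows 0..i-1 estimate, row i predicts
        Xi, yi = X[:i], y[:i]
        XtX_inv = np.linalg.inv(Xi.T @ Xi)
        b = XtX_inv @ Xi.T @ yi           # OLS from the preceding observations
        xi = X[i]
        w.append((y[i] - xi @ b) / np.sqrt(1.0 + xi @ XtX_inv @ xi))
    return np.array(w)

# Simulated data satisfying the null hypothesis
rng = np.random.default_rng(1)
n, k = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta = np.array([1.0, 2.0, -1.0])
y = X @ beta + rng.normal(size=n)

w = recursive_residuals(X, y)             # n - k residuals, ~ N(0, sigma^2) under the null

# A permutation of the observations generally yields a different set
# of recursive residuals, illustrating their ordering dependence.
perm = rng.permutation(n)
w2 = recursive_residuals(X[perm], y[perm])
```

Under the null the wᵢ are exactly i.i.d. N(0, σ²) for a fixed ordering, which is what the permutation comparison at the end does not preserve observation by observation.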
In contrast, the objective of this paper is to select an endogenous order of observations in a way that maximizes the power of recursive residuals to detect departures from the null hypothesis. The endogenous ordering makes the distribution of w non-standard, but this problem can be solved by Barnard's (1963) suggestion to approximate the distribution of any non-standard statistic with simulated random data generated under the null hypothesis. The task of endogenous ordering is to ensure that the estimation subset of preceding observations contains only regular observations, while discordant observations enter the estimation subset as late as possible.
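Barnard's simulation device can be sketched in a few lines. The statistic below (the maximum absolute OLS residual scaled by the residual standard error) is a stand-in chosen for brevity, not one of the paper's test statistics; the point is only the mechanics of simulating a null distribution for a statistic with no standard tabulation, assuming NumPy.

```python
import numpy as np

rng = np.random.default_rng(2)

def statistic(X, y):
    # Stand-in non-standard statistic (NOT the paper's test):
    # maximum absolute OLS residual scaled by the residual std. error.
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ b
    s = np.sqrt(e @ e / (len(y) - X.shape[1]))
    return np.max(np.abs(e)) / s

n = 40
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([0.5, 1.0]) + rng.normal(size=n)

# Barnard's approximation: recompute the statistic on B artificial
# samples generated under the null (same X, i.i.d. normal errors).
# This statistic is invariant to beta and sigma, so any values serve.
B = 2000
null_draws = np.array([statistic(X, rng.normal(size=n)) for _ in range(B)])
crit = np.quantile(null_draws, 0.95)       # simulated 5% critical value
t_obs = statistic(X, y)
p_value = np.mean(null_draws >= t_obs)
print(f"observed = {t_obs:.2f}, 5% critical value = {crit:.2f}, p = {p_value:.3f}")
```

The same recipe applies once the recursive residuals are computed on an endogenously ordered sample: the simulated draws simply repeat the whole ordering-and-testing procedure on artificial null data.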