HAC Corrections for Strongly Autocorrelated Time Series

Ulrich K. MÜLLER
Department of Economics, Princeton University, Princeton, NJ 08544 ([email protected])

Journal of Business & Economic Statistics, July 2014, Vol. 32, No. 3. DOI: 10.1080/07350015.2014.931238. © 2014 American Statistical Association.

Applied work routinely relies on heteroscedasticity and autocorrelation consistent (HAC) standard errors when conducting inference in a time series setting. As is well known, however, these corrections perform poorly in small samples under pronounced autocorrelations. In this article, I first provide a review of popular methods to clarify the reasons for this failure. I then derive inference that remains valid under a specific form of strong dependence. In particular, I assume that the long-run properties can be approximated by a stationary Gaussian AR(1) model, with coefficient arbitrarily close to one. In this setting, I derive tests that come close to maximizing a weighted average power criterion. Small sample simulations show these tests to perform well, also in a regression context.

KEYWORDS: AR(1); Local-to-unity; Long-run variance.

1. INTRODUCTION

A standard problem in time series econometrics is the derivation of appropriate corrections to standard errors when conducting inference with autocorrelated data. Classical references include Berk (1974), Newey and West (1987), and Andrews (1991), among many others. These articles show how one may estimate "heteroscedasticity and autocorrelation consistent" (HAC) standard errors, or "long-run variances" (LRV) in econometric jargon, in a large variety of circumstances.

Unfortunately, small sample simulations show that these corrections do not perform particularly well as soon as the underlying series displays pronounced autocorrelations. One potential reason is that these classical approaches ignore the sampling uncertainty of the LRV estimator (i.e., t- and F-tests based on these corrections employ the usual critical values derived from the normal and chi-squared distributions, as if the true LRV were plugged in). While this is justified asymptotically under the assumptions of these articles, it might not yield accurate approximations in small samples.

A more recent literature, initiated by Kiefer, Vogelsang, and Bunzel (2000) and Kiefer and Vogelsang (2002a, 2005), seeks to improve the performance of these procedures by explicitly taking the sampling variability of the LRV estimator into account. (Also see Jansson 2004; Müller 2004, 2007; Phillips 2005; Phillips, Sun, and Jin 2006, 2007; Sun, Phillips, and Jin 2008; Gonçalves and Vogelsang 2011; Atchadé and Cattaneo 2012; Sun and Kaplan 2012.) This is accomplished by considering the limiting behavior of LRV estimators under the assumption that the bandwidth is a fixed fraction of the sample size. Under such "fixed-b" asymptotics, the LRV estimator is no longer consistent, but instead converges to a random matrix. The resulting t- and F-statistics have limiting distributions that are nonstandard, with randomness stemming from both the parameter estimator and the estimator of its variance. In these limiting distributions, the true LRV acts as a common scale factor that cancels, so that appropriate nonstandard critical values can be tabulated.

While this approach leads to relatively better size control in small sample simulations, it still remains the case that strong underlying autocorrelations lead to severely oversized tests. This might be expected, as the derivation of the limiting distributions under fixed-b asymptotics assumes the underlying process to display no more than weak dependence.

This article has two goals. First, I provide a review of consistent and inconsistent approaches to LRV estimation, with an emphasis on the spectral perspective. This clarifies why common approaches to LRV estimation break down under strong autocorrelations.

Second, I derive inference methods for a scalar parameter that remain valid even under a specific form of strong dependence. In particular, I assume that the long-run properties are well approximated by a stationary Gaussian AR(1) model. The AR(1) coefficient is allowed to take on values arbitrarily close to one, so that, potentially, the process is very persistent. In this manner, the problem of "correcting" for serial correlation remains a first-order problem also in large samples. I then numerically determine tests about the mean of the process that (approximately) maximize weighted average power, using insights of Elliott, Müller, and Watson (2012).

By construction, these tests control size in the AR(1) model in large samples, and this turns out to be very nearly true also in small samples. In contrast, all standard HAC corrections have arbitrarily large size distortions for values of the autoregressive root sufficiently close to unity. In more complicated settings, such as inference about a linear regression coefficient, the AR(1) approach still comes fairly close to controlling size. Interestingly, this includes Granger and Newbold's (1974) classical spurious regression case, where two independent random walks are regressed on each other.

The remainder of the article is organized as follows. The next two sections provide a brief overview of consistent and inconsistent LRV estimation, centered around the problem of inference about a population mean. Section 4 contains the derivation of the new test in the AR(1) model. Section 5 relates these results to more general regression and generalized method of moments (GMM) problems. Section 6 contains some small sample results, and Section 7 concludes.

2. CONSISTENT LRV ESTIMATORS

For a second-order stationary time series y_t with population mean E[y_t] = μ, sample mean μ̂ = T^{-1} Σ_{t=1}^T y_t, and absolutely summable autocovariances γ(j) = E[(y_t − μ)(y_{t−j} − μ)], the LRV ω² is defined as

    \omega^2 = \lim_{T \to \infty} \operatorname{var}[T^{1/2}\hat{\mu}] = \sum_{j=-\infty}^{\infty} \gamma(j).   (1)

With a central limit theorem (CLT) for μ̂, T^{1/2}(μ̂ − μ) ⇒ N(0, ω²), a consistent estimator of ω² allows the straightforward construction of tests and confidence sets about μ.

It is useful to take a spectral perspective on the problem of estimating ω². The spectral density of y_t is the even function f : [−π, π] → [0, ∞) defined via f(λ) = (2π)^{-1} Σ_{j=−∞}^{∞} cos(jλ)γ(j), so that ω² = 2πf(0). Assume T odd for notational convenience. The discrete Fourier transform is a one-to-one mapping from the T values {y_t}_{t=1}^T into μ̂ and the trigonometrically weighted averages {Z_l^{cos}}_{l=1}^{(T−1)/2} and {Z_l^{sin}}_{l=1}^{(T−1)/2}, where

    Z_l^{\cos} = T^{-1/2}\sqrt{2}\sum_{t=1}^{T} \cos(2\pi l(t-1)/T)\, y_t,
    Z_l^{\sin} = T^{-1/2}\sqrt{2}\sum_{t=1}^{T} \sin(2\pi l(t-1)/T)\, y_t.   (2)

The truly remarkable property of this transformation (see Proposition 4.5.2 in Brockwell and Davis 1991) is that all pairwise correlations between the T random variables T^{1/2}μ̂, {Z_l^{cos}}_{l=1}^{(T−1)/2}, and {Z_l^{sin}}_{l=1}^{(T−1)/2} converge to zero as T → ∞, and

    \sup_{l \le (T-1)/2} \left| E[(Z_l^{\cos})^2] - 2\pi f(2\pi l/T) \right| \to 0,
    \sup_{l \le (T-1)/2} \left| E[(Z_l^{\sin})^2] - 2\pi f(2\pi l/T) \right| \to 0.   (3)

Thus, the discrete Fourier transform converts autocorrelation in y_t into heteroscedasticity of (Z_l^{sin}, Z_l^{cos}), with the shape of the heteroscedasticity determined by the spectral density f.

Classical kernel estimators are weighted sample analogues of definition (1),

    \hat{\omega}^2_{k,S_T} = \sum_{j=-T+1}^{T-1} k(j/S_T)\,\hat{\gamma}(j),   (4)

where γ̂(j) = T^{-1} Σ_{t=|j|+1}^{T} (y_t − μ̂)(y_{t−|j|} − μ̂), k is an even weight function with k(0) = 1 and k(x) → 0 as |x| → ∞, and the bandwidth parameter S_T satisfies S_T → ∞ and S_T/T → 0 (the subscript k of ω̂²_{k,S_T} stands for "kernel"). The Newey and West (1987) estimator, for instance, has this form with k equal to the Bartlett kernel k(x) = max(1 − |x|, 0). Up to some approximation, ω̂²_{k,S_T} can be written as a weighted average of periodogram ordinates,

    \hat{\omega}^2_{k,S_T} \approx \sum_{l=1}^{(T-1)/2} K_{T,l}\, p_l, \qquad K_{T,l} = \frac{2}{T}\sum_{j=-T+1}^{T-1} \cos(2\pi j l/T)\, k(j/S_T),   (5)

where the weights K_{T,l} approximately sum to one, Σ_{l=1}^{(T−1)/2} K_{T,l} → 1; see the Appendix for details. Since (T/S_T)K_{T,l_T} = O(1) for l_T = O(T/S_T), these estimators are conceptually close to ω̂²_{p,n_T} with n_T ≈ T/S_T → ∞.

As an illustration, consider the problem of constructing a confidence interval for the population mean of the U.S. unemployment rate. The data consist of T = 777 monthly observations and are plotted in the left panel of Figure 1. The right panel shows the first 24 log-periodogram ordinates log(p_j), along with the corresponding part of the log-spectrum (scaled by 2π) of a fitted AR(1) process.

Now consider a Newey–West estimator with bandwidths chosen as S_T = 0.75T^{1/3} ≈ 6.9 (a default value suggested in Stock and Watson's (2011) textbook for weakly autocorrelated data) and S_T = 115.9 (the value derived in Andrews (1991) based on the AR(1) coefficient 0.973). The normalized weights K_{T,l}/K_{T,1} of (5) for the first 24 periodogram ordinates, along with the AR(1) spectrum normalized by 2π/ω², are plotted in Figure 2. Assuming the AR(1) model to be true, it is immediately apparent that neither estimator is usefully thought of as approximately consistent: the estimator with S_T = 6.9 is severely downward biased, as it puts most of its weight on periodogram ordinates with expectation much below ω².

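The transformation (2) and the weights K_{T,l} in (5) can likewise be sketched in a few lines (numpy assumed; names are mine). The periodogram ordinate p_l itself is defined earlier in the article and does not appear in this excerpt, but two properties stated in the text are directly checkable: the transform preserves the sum of squares, and the weights K_{T,l} approximately sum to one.

```python
import numpy as np

def dft_coefficients(y):
    # Trigonometric averages Z_l^cos, Z_l^sin of equation (2),
    # for l = 1, ..., (T-1)/2, with T odd as assumed in the text.
    y = np.asarray(y, dtype=float)
    T = len(y)
    assert T % 2 == 1, "T assumed odd, as in the text"
    t = np.arange(T)  # corresponds to t - 1 in the article's 1-based sums
    ls = np.arange(1, (T - 1) // 2 + 1)
    ang = 2.0 * np.pi * np.outer(ls, t) / T
    Zcos = np.sqrt(2.0 / T) * (np.cos(ang) @ y)
    Zsin = np.sqrt(2.0 / T) * (np.sin(ang) @ y)
    return Zcos, Zsin

def periodogram_weights(T, S_T, k):
    # Weights of equation (5):
    #   K_{T,l} = (2/T) * sum_{j=-T+1}^{T-1} cos(2*pi*j*l/T) * k(j/S_T)
    j = np.arange(-(T - 1), T)
    ls = np.arange(1, (T - 1) // 2 + 1)
    return (2.0 / T) * (np.cos(2.0 * np.pi * np.outer(ls, j) / T) @ k(j / S_T))
```

Since the map from {y_t} to (T^{1/2}μ̂, Z^cos, Z^sin) is orthonormal, Σ_t y_t² = Tμ̂² + Σ_l ((Z_l^cos)² + (Z_l^sin)²) holds exactly; and for, say, T = 101 with a Bartlett kernel and S_T = 5, the K_{T,l} sum to roughly 0.95, illustrating the Σ_l K_{T,l} → 1 property stated after (5).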
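The downward bias discussed around Figure 2 can be illustrated without simulation: combining (3) and (5), the approximate expectation of the kernel estimator under an AR(1) is Σ_l K_{T,l}·2πf(2πl/T). A sketch under that approximation (numpy assumed; T = 777, ρ = 0.973, and S_T = 0.75T^{1/3} are taken from the text, the code itself is mine):

```python
import numpy as np

def bartlett(x):
    # Bartlett kernel k(x) = max(1 - |x|, 0)
    return np.maximum(1.0 - np.abs(x), 0.0)

def ar1_spectrum_2pi(lam, rho, sigma2=1.0):
    # 2*pi*f(lambda) for a stationary AR(1) with innovation variance sigma2:
    # sigma2 / (1 - 2*rho*cos(lambda) + rho^2)
    return sigma2 / (1.0 - 2.0 * rho * np.cos(lam) + rho ** 2)

T, rho = 777, 0.973
S_T = 0.75 * T ** (1.0 / 3.0)  # ~ 6.9, the Stock-Watson default from the text

# Weights K_{T,l} of equation (5) for the Bartlett kernel
ls = np.arange(1, (T - 1) // 2 + 1)
j = np.arange(-(T - 1), T)
K = (2.0 / T) * (np.cos(2.0 * np.pi * np.outer(ls, j) / T) @ bartlett(j / S_T))

omega2 = ar1_spectrum_2pi(0.0, rho)  # omega^2 = 2*pi*f(0) = 1/(1 - rho)^2
approx_mean = K @ ar1_spectrum_2pi(2.0 * np.pi * ls / T, rho)
bias_ratio = approx_mean / omega2    # approximate E[omega_hat^2] / omega^2
```

For these values the ratio comes out well below one, consistent with the text's observation that the S_T ≈ 6.9 estimator puts most of its weight on periodogram ordinates whose expectation is much below ω².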