4 Unit Root Tests
4.1 Introduction

Many economic and financial time series exhibit trending behavior or nonstationarity in the mean. Leading examples are asset prices, exchange rates and the levels of macroeconomic aggregates like real GDP. An important econometric task is determining the most appropriate form of the trend in the data. For example, in ARMA modeling the data must be transformed to stationary form prior to analysis. If the data are trending, then some form of trend removal is required.

Two common trend removal or de-trending procedures are first differencing and time-trend regression. First differencing is appropriate for I(1) time series and time-trend regression is appropriate for trend stationary I(0) time series. Unit root tests can be used to determine if trending data should be first differenced or regressed on deterministic functions of time to render the data stationary. Moreover, economic and finance theory often suggests the existence of long-run equilibrium relationships among nonstationary time series variables. If these variables are I(1), then cointegration techniques can be used to model these long-run relations. Hence, pre-testing for unit roots is often a first step in the cointegration modeling discussed in Chapter 12. Finally, a common trading strategy in finance involves exploiting mean-reverting behavior among the prices of pairs of assets. Unit root tests can be used to determine which pairs of assets appear to exhibit mean-reverting behavior.

This chapter is organized as follows. Section 4.2 reviews I(1) and trend stationary I(0) time series and motivates the unit root and stationarity tests described in the chapter. Section 4.3 describes the class of autoregressive unit root tests made popular by David Dickey, Wayne Fuller, Pierre Perron and Peter Phillips. Section 4.4 describes the stationarity tests of Kwiatkowski, Phillips, Schmidt and Shin (1992). Section 4.5 discusses some problems associated with traditional unit root and stationarity tests, and Section 4.6 presents some recently developed so-called “efficient unit root tests” that overcome some of the deficiencies of traditional unit root tests. In this chapter, the technical details of unit root and stationarity tests are kept to a minimum. Excellent technical treatments of nonstationary time series may be found in Hamilton (1994), Hatanaka (1995), Fuller (1996) and the many papers by Peter Phillips. Useful surveys on issues associated with unit root testing are given in Stock (1994), Maddala and Kim (1998) and Phillips and Xiao (1998).

4.2 Testing for Nonstationarity and Stationarity

To understand the econometric issues associated with unit root and stationarity tests, consider the stylized trend-cycle decomposition of a time series y_t:

    y_t  = TD_t + z_t
    TD_t = κ + δt
    z_t  = φ z_{t-1} + ε_t,   ε_t ~ WN(0, σ²)

where TD_t is a deterministic linear trend and z_t is an AR(1) process. If |φ| < 1 then y_t is I(0) about the deterministic trend TD_t. If φ = 1, then z_t = z_{t-1} + ε_t = z_0 + Σ_{j=1}^{t} ε_j, a stochastic trend, and y_t is I(1) with drift. Simulated I(1) and I(0) data with κ = 5 and δ = 0.1 are illustrated in Figure 4.1. The I(0) data with trend follows the trend TD_t = 5 + 0.1t very closely and exhibits trend reversion. In contrast, the I(1) data follows an upward drift but does not necessarily revert to TD_t.
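Data like those in Figure 4.1 can be generated directly from the trend-cycle decomposition. The following S-PLUS sketch is illustrative only: the seed, the sample size of 250 and the AR coefficient 0.75 for the stationary cycle are assumptions, not the values used to produce Figure 4.1.

> set.seed(123)                       # seed chosen arbitrarily for reproducibility
> nobs = 250
> e = rnorm(nobs)                     # WN(0,1) innovations
> TD = 5 + 0.1*(1:nobs)               # deterministic trend with kappa = 5, delta = 0.1
> z = rep(0, nobs)
> for (i in 2:nobs) z[i] = 0.75*z[i-1] + e[i]   # stationary AR(1) cycle, phi = 0.75 (assumed)
> y.I0 = TD + z                       # trend stationary I(0) series
> y.I1 = TD + cumsum(e)               # phi = 1 gives a stochastic trend: I(1) with drift
> ts.plot(y.I1, y.I0)                 # plot both simulated paths together

Differencing y.I1 once removes the stochastic trend, while regressing y.I0 on a constant and time removes the deterministic trend; choosing between these two forms of de-trending is precisely what the tests below are designed to inform.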
Autoregressive unit root tests are based on testing the null hypothesis that φ = 1 (difference stationary) against the alternative hypothesis that φ < 1 (trend stationary). They are called unit root tests because under the null hypothesis the autoregressive polynomial of z_t, φ(z) = (1 − φz) = 0, has a root equal to unity.

Stationarity tests take the null hypothesis that y_t is trend stationary. If y_t is then first differenced it becomes

    Δy_t = δ + Δz_t
    Δz_t = φ Δz_{t-1} + ε_t − ε_{t-1}

FIGURE 4.1. Simulated trend stationary (I(0)) and difference stationary (I(1)) processes.

Notice that first differencing y_t, when it is trend stationary, produces a unit moving average root in the ARMA representation of Δz_t. That is, the ARMA representation for Δz_t is the non-invertible ARMA(1,1) model

    Δz_t = φ Δz_{t-1} + ε_t + θ ε_{t-1}

with θ = −1. This result is known as overdifferencing. Formally, stationarity tests are based on testing for a unit moving average root in Δz_t.

Unit root and stationarity test statistics have nonstandard and nonnormal asymptotic distributions under their respective null hypotheses. To complicate matters further, the limiting distributions of the test statistics are affected by the inclusion of deterministic terms in the test regressions. These distributions are functions of standard Brownian motion (Wiener process), and critical values must be tabulated by simulation techniques. MacKinnon (1996) provides response surface algorithms for determining these critical values, and various S+FinMetrics functions use these algorithms for computing critical values and p-values.

4.3 Autoregressive Unit Root Tests

To illustrate the important statistical issues associated with autoregressive unit root tests, consider the simple AR(1) model

    y_t = φ y_{t-1} + ε_t,   where ε_t ~ WN(0, σ²)

The hypotheses of interest are

    H0: φ = 1 (unit root in φ(z) = 0)  ⇒  y_t ~ I(1)
    H1: |φ| < 1                        ⇒  y_t ~ I(0)

The test statistic is

    t_{φ=1} = (φ̂ − 1) / SE(φ̂)

where φ̂ is the least squares estimate and SE(φ̂) is the usual standard error estimate.¹ The test is a one-sided left tail test. If y_t is stationary (i.e., |φ| < 1) then it can be shown (cf. Hamilton (1994), pg. 216) that

    √T (φ̂ − φ)  →_d  N(0, (1 − φ²))

or

    φ̂  ~^A  N(φ, (1/T)(1 − φ²))

and it follows that t_{φ=1} ~^A N(0, 1). However, under the null hypothesis of nonstationarity the above result gives

    φ̂  ~^A  N(1, 0)

which clearly does not make any sense. The problem is that under the unit root null, {y_t} is not stationary and ergodic, and the usual sample moments do not converge to fixed constants. Instead, Phillips (1987) showed that the sample moments of {y_t} converge to random functions of Brownian motion²:

    T^{-3/2} Σ_{t=1}^{T} y_{t-1}      →_d  σ ∫_0^1 W(r) dr
    T^{-2}   Σ_{t=1}^{T} y_{t-1}²     →_d  σ² ∫_0^1 W(r)² dr
    T^{-1}   Σ_{t=1}^{T} y_{t-1} ε_t  →_d  σ² ∫_0^1 W(r) dW(r)

where W(r) denotes a standard Brownian motion (Wiener process) defined on the unit interval.

¹The AR(1) model may be re-written as Δy_t = π y_{t-1} + u_t, where π = φ − 1. Testing φ = 1 is then equivalent to testing π = 0. Unit root tests are often computed using this alternative regression, and the S+FinMetrics function unitroot follows this convention.
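To make the mechanics concrete before turning to the asymptotics, the following sketch estimates the regression Δy_t = π y_{t-1} + u_t from footnote 1 on a simulated random walk and forms t_{φ=1} and the normalized bias by ordinary least squares. It is only an illustration under assumed values (the seed and sample size of 250 are arbitrary); in practice the S+FinMetrics function unitroot, which also handles deterministic terms and p-values, should be used.

> set.seed(456)                        # assumed seed, for reproducibility
> y = cumsum(rnorm(250))               # simulated random walk: phi = 1, no drift
> dy = diff(y)                         # dependent variable Delta y_t
> ylag = y[-length(y)]                 # regressor y_{t-1}, aligned with dy
> df.fit = lm(dy ~ ylag - 1)           # OLS regression with no constant
> df.sum = summary(df.fit)$coef        # estimate, std. error, t value for pi
> t.stat = df.sum[1,1]/df.sum[1,2]     # t-statistic for pi = 0, i.e. t_{phi=1}
> nb.stat = length(dy)*df.sum[1,1]     # normalized bias T*(phi.hat - 1)

Since π̂ = φ̂ − 1, the t-statistic for π = 0 in this regression is identical to t_{φ=1} = (φ̂ − 1)/SE(φ̂).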
Using the convergence results above, Phillips showed that under the unit root null H0: φ = 1

    T(φ̂ − 1)  →_d  [∫_0^1 W(r) dW(r)] / [∫_0^1 W(r)² dr]            (4.1)

    t_{φ=1}   →_d  [∫_0^1 W(r) dW(r)] / [∫_0^1 W(r)² dr]^{1/2}       (4.2)

The above yield some surprising results:

• φ̂ is super-consistent; that is, φ̂ →_p φ at rate T instead of the usual rate T^{1/2}.

• φ̂ is not asymptotically normally distributed and t_{φ=1} is not asymptotically standard normal.

• The limiting distribution of t_{φ=1} is called the Dickey-Fuller (DF) distribution and does not have a closed form representation. Consequently, quantiles of the distribution must be computed by numerical approximation or by simulation.³

• Since the normalized bias T(φ̂ − 1) has a well defined limiting distribution that does not depend on nuisance parameters, it can also be used as a test statistic for the null hypothesis H0: φ = 1.

²A Wiener process W(·) is a continuous-time stochastic process, associating with each date r ∈ [0, 1] a scalar random variable W(r) that satisfies: (1) W(0) = 0; (2) for any dates 0 ≤ t_1 ≤ ··· ≤ t_k ≤ 1 the changes W(t_2) − W(t_1), W(t_3) − W(t_2), ..., W(t_k) − W(t_{k-1}) are independent normal with W(s) − W(t) ~ N(0, s − t); (3) W(s) is continuous in s.

³Dickey and Fuller (1979) first considered the unit root tests and derived the asymptotic distribution of t_{φ=1}. However, their representation did not utilize functions of Wiener processes.

4.3.1 Simulating the DF and Normalized Bias Distributions

As mentioned above, the DF and normalized bias distributions must be obtained by simulation methods. To illustrate, the following S-PLUS function wiener produces one random draw from the functions of Brownian motion that appear in the limiting distributions of t_{φ=1} and T(φ̂ − 1):

wiener = function(nobs) {
  e = rnorm(nobs)                          # iid N(0,1) innovations
  y = cumsum(e)                            # random walk approximating W(r)
  ym1 = y[1:(nobs-1)]                      # lagged values y_{t-1}
  intW2 = nobs^(-2) * sum(ym1^2)           # approximates integral of W(r)^2 dr
  intWdW = nobs^(-1) * sum(ym1*e[2:nobs])  # approximates integral of W(r) dW(r)
  ans = list(intW2=intW2, intWdW=intWdW)
  ans
}

A simple loop then produces the simulated distributions:

> nobs = 1000
> nsim = 1000
> NB = rep(0,nsim)
> DF = rep(0,nsim)
> for (i in 1:nsim) {
+   BN.moments = wiener(nobs)
+   NB[i] = BN.moments$intWdW/BN.moments$intW2
+   DF[i] = BN.moments$intWdW/sqrt(BN.moments$intW2)
+ }

Figure 4.2 shows the histograms and density estimates of the simulated distributions. The DF density is slightly left-skewed relative to the standard normal, and the normalized bias density is highly left-skewed and non-normal.
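The simulated draws can also be used to approximate critical values for the two statistics. A minimal sketch, assuming the vectors DF and NB from the loop above are still in memory:

> probs = c(0.01, 0.05, 0.10)          # left-tail probabilities of interest
> DF.cv = quantile(DF, probs)          # approximate DF critical values
> NB.cv = quantile(NB, probs)          # approximate normalized bias critical values
> rbind(DF=DF.cv, NB=NB.cv)            # display both sets of critical values
> qnorm(probs)                         # standard normal quantiles, for comparison

Because these are Monte Carlo approximations based on only 1,000 replications, the quantiles vary across runs and will differ somewhat from published Dickey-Fuller tables and from the MacKinnon (1996) response surface values used by S+FinMetrics.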