Advanced Quantitative Methods: Autocorrelation


Jos Elkink, University College Dublin, February 23, 2011

Outline
1 Consequences
2 Typical processes
3 Stationarity
4 Diagnostics (plots; tests for autocorrelation; tests for stationarity)
5 Spatial autocorrelation

Notation: lagged variables

Instead of y_i to indicate each of n observations, we will use y_t to refer to each of T observations on a time series. y_{t-1} refers to the lagged value, i.e. the value of variable y at time t − 1, the observation just one time period before time t. A lag can have any length k (k > 0): y_{t-k}.

Notation: first differences

The difference between y_t and y_{t-1}, or the change in variable y at time t, is called the first difference:

    ∆y_t = y_t − y_{t-1}.

Again, differences can have different lag lengths:

    ∆y_{t-k} = y_{t-k} − y_{t-k-1}.
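To make the notation concrete, here is a minimal base-R sketch (the series values and variable names are invented for illustration) constructing a one-period lag and the first difference:

```r
y <- c(2.1, 2.5, 2.4, 3.0, 3.2)  # illustrative time series

y_lag1 <- c(NA, y[-length(y)])   # y_{t-1}: the series shifted back one period
dy     <- y - y_lag1             # first difference: dy_t = y_t - y_{t-1}
# diff(y) returns the same differences, without the leading NA
```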
The problem

A key assumption of (linear) regression is that observations are independent. Generally, in time series or observations in space, the observations depend on each other. If GDP is high in 1999, it is likely to be high in 2000. If GDP is high in Germany, it is likely to be high in the Netherlands. Treating them as independent observations suggests that you have far more information than you actually do.

Ignoring this autocorrelation leads to:
• β̂_OLS unbiased but inefficient (as long as E(ε|X) = 0);
• V(β̂_OLS) may be an under- or overestimate, so the F- and t-tests cannot be trusted; if the autocorrelation is positive, V(β̂_OLS) will be an underestimate;
• the residual variance is likely to be underestimated and R² overestimated;
• a risk of spurious regressions.

Spurious regressions

When two variables are uncorrelated but nonstationary, they often lead to highly significant estimates of their correlation in "naive" linear regression. Assume:

    y_t = y_{t-1} + ε_{1,t}
    x_t = x_{t-1} + ε_{2,t}.

Then OLS estimation of y_t = α + β x_t + ε_t will lead to a significant t-test on β.

[Figure: sample data, two such simulated random walks over 100 periods, with values ranging from about −10 to 5.]
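The sample-data plot and the regression output that follow come from a simulation of exactly this setup. A minimal R sketch of such a simulation (the seed and sample size are our illustrative choices, not taken from the slides):

```r
set.seed(123)          # illustrative seed
n <- 100
y <- cumsum(rnorm(n))  # random walk: y_t = y_{t-1} + e_{1,t}
x <- cumsum(rnorm(n))  # an independent random walk
summary(lm(y ~ x))     # the slope is frequently "significant" despite independence
```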
Spurious regression, R output:

    lm(formula = y ~ x)

                 Estimate Std. Error t value Pr(>|t|)
    (Intercept)   -0.9646     0.3626  -2.660  0.00911 **
    x             -0.9207     0.1002  -9.185 6.54e-15 ***

    Residual standard error: 3.021 on 99 degrees of freedom
    Multiple R-squared: 0.4601, Adjusted R-squared: 0.4547
    F-statistic: 84.37 on 1 and 99 DF, p-value: 6.544e-15

Time-series processes

A time series can have been generated by various different types of processes. Which process generated the data of course affects which econometric model is more appropriate to estimate its parameters.

Linear model

The linear regression model looks like y = µ + ε, where µ = Xβ or, if we have no explanatory variables, µ is a constant. For now, we will look at the latter case, µ_t = µ. In the linear model, we assume ε to be an IID variable, ε ~ N(0, σ²).

Moving average process

In the moving average model, we replace the assumption of entirely independent residuals by assuming that the residual at time t is a weighted average of that residual and the one at t − 1:

    y_t = µ + (ε_t + φ ε_{t-1}),   −1 < φ < 1.

The above is a so-called MA(1) process, a moving average process with one lag. This model can be generalised to more lags; with two lags,

    y_t = µ + (ε_t + φ_1 ε_{t-1} + φ_2 ε_{t-2}),

and in general the MA(q) process is

    y_t = µ + (ε_t + Σ_{l=1}^{q} φ_l ε_{t-l}).

Theoretically this model can be generalised to infinitely many lags:

    y_t = µ + (ε_t + Σ_{l=1}^{∞} φ_l ε_{t-l}).
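As a hedged illustration of the MA(1) process above (the mu and phi values are ours; arima.sim is part of R's stats package):

```r
set.seed(1)
mu  <- 5
phi <- 0.6                                     # MA coefficient, -1 < phi < 1
y   <- mu + arima.sim(model = list(ma = phi), n = 200)
acf(y)  # for an MA(1), autocorrelation should be near zero beyond lag 1
```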
Recommended publications
  • Lecture 6A: Unit Root and ARIMA Models
Lecture 6a: Unit Root and ARIMA Models 1 Big Picture • A time series is non-stationary if it contains a unit root: unit root ⇒ nonstationary. The reverse is not true. • Many results of traditional statistical theory, such as the law of large numbers and the central limit theorem, do not apply to unit root processes. • We will learn a formal test for the unit root. • For a unit root process, we need to apply the ARIMA model; that is, we take differences (maybe several times) before applying the ARMA model. 2 Review: Deterministic Difference Equation • Consider the first order equation (without stochastic shock): y_t = ϕ_0 + ϕ_1 y_{t-1}. • We can use the method of iteration to show that when ϕ_1 = 1 the series is y_t = ϕ_0 t + y_0. • So there is no steady state; the series will be trending if ϕ_0 ≠ 0, and the initial value has a permanent effect. 3 Unit Root Process • Consider the AR(1) process y_t = ϕ_0 + ϕ_1 y_{t-1} + u_t, where u_t may or may not be white noise. We assume u_t is a zero-mean stationary ARMA process. • This process has a unit root if ϕ_1 = 1. In that case the series becomes y_t = ϕ_0 t + y_0 + (u_1 + u_2 + ··· + u_t). (1) 4 Remarks • The ϕ_0 t term implies that the series will be trending if ϕ_0 ≠ 0. • The series is not mean-reverting. Actually, the mean changes over time (assuming y_0 = 0): E(y_t) = ϕ_0 t. • The series has non-constant variance, var(y_t) = var(u_1 + u_2 + ··· + u_t), which is a function of t. • In short, the unit root process is not stationary.
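The trending mean and growing variance noted in these remarks are easy to see in a short R simulation (drift, horizon, and number of replications are illustrative choices):

```r
set.seed(2)
phi0 <- 0.1                          # drift
T    <- 500
y    <- cumsum(phi0 + rnorm(T))      # y_t = phi0 + y_{t-1} + u_t, with y_0 = 0
plot(y, type = "l")                  # trends upward around E(y_t) = phi0 * t

paths <- replicate(1000, cumsum(phi0 + rnorm(T)))
var(paths[100, ])                    # roughly 100: var(y_t) grows with t
var(paths[500, ])                    # roughly 500
```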
  • Econometrics Basics: Avoiding Spurious Regression
Econometrics Basics: Avoiding Spurious Regression John E. Floyd University of Toronto July 24, 2013 We deal here with the problem of spurious regression and the techniques for recognizing and avoiding it. The nature of this problem can be best understood by constructing a few purely random-walk variables and then regressing one of them on the others. The figure below plots a random walk or unit root variable that can be represented by the equation

    y_t = ρ y_{t-1} + ϵ_t   (1)

which can be written alternatively in Dickey-Fuller form as

    ∆y_t = −(1 − ρ) y_{t-1} + ϵ_t   (2)

where y_t is the level of the series at time t, ϵ_t is a series of drawings of a zero-mean, constant-variance normal random variable, and (1 − ρ) can be viewed as the mean-reversion parameter. If ρ = 1, there is no mean-reversion and y_t is a random walk. Notice that, apart from the short-term variations, the series trends upward for the first quarter of its length, then downward for a bit more than the next quarter, and upward for the remainder of its length. This series will tend to be correlated with other series that move in either the same or the opposite directions during similar parts of their length. And if our series above is regressed on several other random-walk regressors, one can imagine that some or even all of those regressors will turn out to be statistically significant even though by construction there is no causal relationship between them: those parts of the dependent variable that are not correlated directly with a particular independent variable may well be correlated with it when the correlation with other independent variables is simultaneously taken into account.
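A small R sketch of equation (2)'s logic (rho and the sample size are illustrative): regressing ∆y_t on y_{t-1} estimates −(1 − ρ), the mean-reversion parameter, which is near zero when the series is close to a random walk.

```r
set.seed(3)
rho  <- 0.8                                           # mean-reverting case
y    <- as.numeric(stats::filter(rnorm(300), rho, method = "recursive"))
dy   <- diff(y)                                       # delta y_t
ylag <- y[-length(y)]                                 # y_{t-1}
coef(lm(dy ~ ylag))["ylag"]                           # approx -(1 - rho) = -0.2
```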
  • Commodity Prices and Unit Root Tests
    Commodity Prices and Unit Root Tests Dabin Wang and William G. Tomek Paper presented at the NCR-134 Conference on Applied Commodity Price Analysis, Forecasting, and Market Risk Management St. Louis, Missouri, April 19-20, 2004 Copyright 2004 by Dabin Wang and William G. Tomek. All rights reserved. Readers may make verbatim copies of this document for non-commercial purposes by any means, provided that this copyright notice appears on all such copies. Graduate student and Professor Emeritus in the Department of Applied Economics and Management at Cornell University. Warren Hall, Ithaca NY 14853-7801 e-mails: [email protected] and [email protected] Commodity Prices and Unit Root Tests Abstract Endogenous variables in structural models of agricultural commodity markets are typically treated as stationary. Yet, tests for unit roots have rather frequently implied that commodity prices are not stationary. This seeming inconsistency is investigated by focusing on alternative specifications of unit root tests. We apply various specifications to Illinois farm prices of corn, soybeans, barrows and gilts, and milk for the 1960 through 2002 time span. The preponderance of the evidence suggests that nominal prices do not have unit roots, but under certain specifications, the null hypothesis of a unit root cannot be rejected, particularly when the logarithms of prices are used. If the test specification does not account for a structural change that shifts the mean of the variable, the results are biased toward concluding that a unit root exists. In general, the evidence does not favor the existence of unit roots. Keywords: commodity price, unit root tests.
  • Unit Roots and Cointegration in Panels
    Unit roots and cointegration in panels Jörg Breitung (University of Bonn and Deutsche Bundesbank) M. Hashem Pesaran (Cambridge University) Discussion Paper Series 1: Economic Studies No 42/2005 Discussion Papers represent the authors' personal opinions and do not necessarily reflect the views of the Deutsche Bundesbank or its staff. Editorial Board: Heinz Herrmann Thilo Liebig Karl-Heinz Tödter Deutsche Bundesbank, Wilhelm-Epstein-Strasse 14, 60431 Frankfurt am Main, Postfach 10 06 02, 60006 Frankfurt am Main Tel +49 69 9566-1 Telex within Germany 41227, telex from abroad 414431, fax +49 69 5601071 Please address all orders in writing to: Deutsche Bundesbank, Press and Public Relations Division, at the above address or via fax +49 69 9566-3077 Reproduction permitted only if source is stated. ISBN 3–86558–105–6 Abstract: This paper provides a review of the literature on unit roots and cointegration in panels where the time dimension (T), and the cross section dimension (N) are relatively large. It distinguishes between the first generation tests developed on the assumption of the cross section independence, and the second generation tests that allow, in a variety of forms and degrees, the dependence that might prevail across the different units in the panel. In the analysis of cointegration the hypothesis testing and estimation problems are further complicated by the possibility of cross section cointegration which could arise if the unit roots in the different cross section units are due to common random walk components. JEL Classification: C12, C15, C22, C23. Keywords: Panel Unit Roots, Panel Cointegration, Cross Section Dependence, Common Effects Nontechnical Summary This paper provides a review of the theoretical literature on testing for unit roots and cointegration in panels where the time dimension (T), and the cross section dimension (N) are relatively large.
  • Testing for Unit Root in Macroeconomic Time Series of China
    Testing for Unit Root in Macroeconomic Time Series of China Xiankun Gai Shan Dong Province Statistical Bureau Quan Cheng Road 221 Ji Nan, China [email protected];[email protected] 1. Introduction and procedures The immense literature and diversity of unit root tests can at times be confusing even to the specialist and presents a truly daunting prospect to the uninitiated. In order to test for unit roots in macroeconomic time series of China, we have examined unit root theory with an emphasis on testing principles and recent developments. Unit root tests are important in examining the stationarity of a time series. Stationarity is a matter of concern in three important areas. First, a crucial question in the ARIMA modelling of a single time series is the number of times the series needs to be first differenced before an ARMA model is fit. Each unit root requires a differencing operation. Second, stationarity of regressors is assumed in the derivation of standard inference procedures for regression models. Nonstationary regressors invalidate many standard results and require special treatment. Third, in cointegration analysis, an important question is whether the disturbance term in the cointegrating vector has a unit root. Consider time series data as a data generating process (DGP) incorporating trend, cycle, and seasonality. By removing these deterministic patterns, the remaining DGP must be stationary. "Spurious" regressions with a high R-squared but near-zero Durbin-Watson statistic, often found in the time series literature, are mainly due to the use of nonstationary data series. Given a time series DGP, testing for a random walk is a test for stationarity.
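The high-R-squared, near-zero-Durbin-Watson symptom can be checked directly in R; a sketch assuming the lmtest package is available (our choice of tool, not the paper's):

```r
library(lmtest)           # assumed installed; provides dwtest()
set.seed(4)
y <- cumsum(rnorm(200))   # two independent random walks
x <- cumsum(rnorm(200))
fit <- lm(y ~ x)
summary(fit)$r.squared    # often misleadingly high
dwtest(fit)               # a DW statistic near zero flags the spurious fit
```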
  • Notes on Time Series
    Graduate Macro Theory II: Notes on Time Series Eric Sims University of Notre Dame Spring 2013 1 What is a Time Series? A time series is a realization of a sequence of a variable indexed by time. The notation we will use to denote this is x_t, t = 1, 2, …, T. A variable is said to be "random" if its realizations are stochastic. Unlike cross-sectional data, time series data can typically not be modeled as independent across observations (i.e. independent across time). Rather, time series are persistent in the sense that observations of random variables are typically positively correlated across time. Most of what we do in macro involves variables with such dependence across time. Before discussing the unique issues presented by time series data, we quickly review expectations and first and second moments. 2 Expectations The expected value of x_t is denoted by E(x_t) and is the weighted average of possible realizations of x_t. Denote the set of possible realizations by Ω_x, and the probability density function of x as p(x). Essentially the expected value is just possible realizations times the probability of each realization:

    E(x_t) = Σ_{x∈Ω_x} x p(x)   (1)

This is a linear operator. As such, it has the following properties, where a is a constant:

    E(a) = a   (2)
    E(a x_t) = a E(x_t)   (3)
    E(x_t + y_t) = E(x_t) + E(y_t)   (4)

Non-linear operators cannot "go through" an expectation operator:

    E(x_t y_t) ≠ E(x_t) E(y_t)   (5)
    E(g(x_t)) ≠ g(E(x_t))   (6)

We are often interested in conditional expectations, which are expectations taken conditional on some information.
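Equation (1) in two lines of R (the support and probabilities are invented for the example):

```r
x <- c(0, 1, 2)        # possible realizations, Omega_x
p <- c(0.5, 0.3, 0.2)  # probabilities p(x); must sum to one
sum(x * p)             # E(x_t) = 0*0.5 + 1*0.3 + 2*0.2 = 0.7
```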
  • Unit Roots and Cointegration in Panels
    UNIT ROOTS AND COINTEGRATION IN PANELS JOERG BREITUNG M. HASHEM PESARAN CESIFO WORKING PAPER NO. 1565 CATEGORY 10: EMPIRICAL AND THEORETICAL METHODS OCTOBER 2005 An electronic version of the paper may be downloaded • from the SSRN website: www.SSRN.com • from the CESifo website: www.CESifo-group.de CESifo Working Paper No. 1565 UNIT ROOTS AND COINTEGRATION IN PANELS Abstract This paper provides a review of the literature on unit roots and cointegration in panels where the time dimension (T) and the cross section dimension (N) are relatively large. It distinguishes between the first generation tests developed on the assumption of the cross section independence, and the second generation tests that allow, in a variety of forms and degrees, the dependence that might prevail across the different units in the panel. In the analysis of cointegration the hypothesis testing and estimation problems are further complicated by the possibility of cross section cointegration which could arise if the unit roots in the different cross section units are due to common random walk components. JEL Code: C12, C15, C22, C23. Keywords: panel unit roots, panel cointegration, cross section dependence, common effects. Joerg Breitung M. Hashem Pesaran University of Bonn Cambridge University Department of Economics Sidgwick Avenue Institute of Econometrics Cambridge, CB3 9DD Adenauerallee 24 – 42 United Kingdom 53113 Bonn [email protected] Germany [email protected] We are grateful to Jushan Bai, Badi Baltagi, George Kapetanios, Uwe Hassler, Serena Ng, Elisa Tosetti, Ron Smith, and Joakim Westerlund for comments on a preliminary version of this paper. 1 Introduction Recent advances in time series econometrics and panel data analysis have focussed attention on unit root and cointegration properties of variables observed over a relatively long span of time across a large number of cross section units, such as countries, regions, companies or even households.
  • Cointegration and Unit Roots
    COINTEGRATION AND UNIT ROOTS Juan J. Dolado Bank of Spain Tim Jenkinson Keble College, University of Oxford and CEPR Simon Sosvilla-Rivero University of Birmingham Abstract. This paper provides an updated survey of a burgeoning literature on testing, estimation and model specification in the presence of integrated variables. Integrated variables are a specific class of non-stationary variables which seem to characterise faithfully the properties of many macroeconomic time series. The analysis of cointegration develops out of the existence of unit roots and offers a generic route to test the validity of the equilibrium predictions of economic theories. Special emphasis is put on the empirical researcher's point of view. Keywords. Unit root, cointegration, trends, error correction mechanisms. 1. Introduction Economic theory generally deals with equilibrium relationships. Most empirical econometric studies are an attempt to evaluate such relationships by summarising economic time series using statistical analysis. To apply standard inference procedures in a dynamic time series model we need the various variables to be stationary, since the majority of econometric theory is built upon the assumption of stationarity. Until recently, this assumption was rarely questioned, and econometric analysis proceeded as if all the economic time series were stationary, at least around some deterministic trend function which could be appropriately removed. However, stationary series should at least have constant unconditional mean and variance over time, a condition which appears rarely to be satisfied in economics. The importance of the stationarity assumption had been recognised for many years, but the important papers by Granger and Newbold (1974), Nelson and Kang (1981) and Nelson and Plosser (1982) alerted many to the econometric implications of non-stationarity.
  • Lecture 8: Nonstationarity, Unit Roots and Cointegration
    Lecture 8: Nonstationarity, Unit Roots and Cointegration R.G. Pierse 1 Introduction Definition 1.1. Weak stationarity A variable Y_t is weakly stationary if its mean and its variance are constant over time, and its autocovariances cov(Y_t, Y_{t-s}) are a function solely of s and not of t. The assumption of stationarity is necessary for econometric estimators and tests to have the standard distributions. Most economic variables do not satisfy the conditions of weak stationarity. In this case they need to be transformed in order to make them stationary. 1.1 The Autoregressive model Consider the autoregressive model from lecture 6 with the addition of an intercept term:

    Y_t = c + ρ Y_{t-1} + u_t,   t = −∞, …, 0, 1, …, T   (1.1)

where −1 < ρ < 1. Note that instead of defining an initial condition Y_0 as previously, we now assume that the process has been running indefinitely from t = −∞. The error process u_t retains all the usual assumptions of the classical model. The parameter ρ is known as the root of the autoregressive process. Substituting for lagged Y_t, we get

    Y_t = c(1 + ρ + ρ² + ρ³ + ···) + u_t + ρ u_{t-1} + ρ² u_{t-2} + ρ³ u_{t-3} + ···

so that

    E(Y_t) = c(1 + ρ + ρ² + ρ³ + ···) = c / (1 − ρ)

and

    cov(Y_t, Y_{t-s}) = σ² ρ^s (1 + ρ² + ρ⁴ + ρ⁶ + ···) = σ² ρ^s / (1 − ρ²),   s ≥ 0.

This model satisfies the conditions of weak stationarity since both mean and autocovariances are constant over time. 1.2 The Random Walk Model Now consider what happens to the properties of the model if ρ = 1.
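The stationary mean and variance derived above can be confirmed by simulation; a sketch with illustrative c and ρ (stats::filter implements the recursion, and σ² = 1 here):

```r
set.seed(5)
c0 <- 2; rho <- 0.5; n <- 1e5
y  <- as.numeric(stats::filter(c0 + rnorm(n), rho, method = "recursive"))
mean(y)  # approx c/(1 - rho) = 4
var(y)   # approx sigma^2/(1 - rho^2) = 1/0.75 = 1.33
```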
  • Improving Size and Power in Unit Root Testing
    7 Improving Size and Power in Unit Root Testing Niels Haldrup and Michael Jansson Abstract A frequent criticism of unit root tests concerns the poor power and size properties that many such tests exhibit. However, during the past decade or so intensive research has been conducted to alleviate these problems and great advances have been made. The present paper provides a selective survey of recent contributions to improve upon both the size and power of unit root tests and, in so doing, the approach of using rigorous statistical optimality criteria in the development of such tests is stressed. In addition to presenting tests where improved size can be achieved by modifying the standard Dickey–Fuller class of tests, the paper presents the theory of optimal testing and the construction of power envelopes for unit root tests under different conditions allowing for serial correlation, deterministic components, assumptions regarding the initial condition, non-Gaussian errors, and the use of covariates. Contents: 7.1 Introduction; 7.2 Unit root testing (7.2.1 The augmented Dickey–Fuller and Phillips–Perron classes of tests; 7.2.2 Size distortions of unit root tests; 7.2.3 Modified unit root tests with good size; 7.2.4 Deterministics); 7.3 Power envelopes for unit root tests (7.3.1 The leading special case; 7.3.2 Serial correlation; 7.3.3 Deterministics; 7.3.4 The initial condition; 7.3.5 Non-Gaussian errors; 7.3.6 Covariates); 7.4 Conclusion. 7.1 Introduction Since the mid-1980s there has been a veritable explosion of research on the importance of unit roots in the analysis of economic and other time series data.
  • Lecture 16 Unit Root Tests
    RS – EC2 – Lecture 16: Unit Root Tests Autoregressive Unit Root • A shock is usually used to describe an unexpected change in a variable or in the value of the error terms at a particular time period. • When we have a stationary system, the effect of a shock will die out gradually. But when we have a non-stationary system, the effect of a shock is permanent. • We have two types of non-stationarity. In an AR(1) model we have: unit root, |ϕ_1| = 1 (homogeneous non-stationarity); explosive root, |ϕ_1| > 1 (explosive non-stationarity). • In the last case, a shock to the system becomes more influential as time goes on. It can never be seen in real life. We will not consider it. • Consider the AR(p) process:

    Φ(L) y_t = ε_t,   where Φ(L) = 1 − ϕ_1 L − ϕ_2 L² − ··· − ϕ_p L^p.

As we discussed before, if one of the roots r_j equals 1, then Φ(1) = 0, or

    ϕ_1 + ϕ_2 + ··· + ϕ_p = 1.

• We say y_t has a unit root. In this case, y_t is non-stationary. Example: AR(1): y_t = ϕ_1 y_{t-1} + ε_t. Unit root: ϕ_1 = 1. H0 (y_t non-stationary): ϕ_1 = 1 (or ϕ_1 − 1 = 0). H1 (y_t stationary): ϕ_1 < 1 (or ϕ_1 − 1 < 0). • A t-test seems natural to test H0. But the ergodic theorem and MDS CLT do not apply: the t-statistic does not have the usual distributions. • Now, let's reparameterize the AR(1) process. Subtract y_{t-1} from y_t:

    ∆y_t = y_t − y_{t-1} = (ϕ_1 − 1) y_{t-1} + ε_t = α_0 y_{t-1} + ε_t.

• Unit root test: H0: α_0 = ϕ_1 − 1 = 0; H1: α_0 < 0.
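In R, a test of this null with the appropriate Dickey-Fuller critical values is available in, for example, the tseries package (assumed installed; the lag order is left at the function's default):

```r
library(tseries)         # provides adf.test()
set.seed(6)
y <- cumsum(rnorm(250))  # true unit root
adf.test(y)              # large p-value: cannot reject H0 of a unit root
adf.test(diff(y))        # differencing removes the root: small p-value
```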
  • Augmented Dickey–Fuller Unit-Root Test
    Title stata.com dfuller — Augmented Dickey–Fuller unit-root test Syntax Menu Description Options Remarks and examples Stored results Methods and formulas References Also see Syntax dfuller varname [if] [in] [, options] options Description Main noconstant suppress constant term in regression trend include trend term in regression drift include drift term in regression regress display regression table lags(#) include # lagged differences You must tsset your data before using dfuller; see [TS] tsset. varname may contain time-series operators; see [U] 11.4.4 Time-series varlists. Menu Statistics > Time series > Tests > Augmented Dickey-Fuller unit-root test Description dfuller performs the augmented Dickey–Fuller test that a variable follows a unit-root process. The null hypothesis is that the variable contains a unit root, and the alternative is that the variable was generated by a stationary process. You may optionally exclude the constant, include a trend term, and include lagged values of the difference of the variable in the regression. Options Main noconstant suppresses the constant term (intercept) in the model and indicates that the process under the null hypothesis is a random walk without drift. noconstant cannot be used with the trend or drift option. trend specifies that a trend term be included in the associated regression and that the process under the null hypothesis is a random walk, perhaps with drift. This option may not be used with the noconstant or drift option. drift indicates that the process under the null hypothesis is a random walk with nonzero drift. This option may not be used with the noconstant or trend option.