Stationary Vector Autoregressive Representation of Error Correction Models
Theoretical Economics Letters, 2012, 2, 152-156
http://dx.doi.org/10.4236/tel.2012.22027 Published Online May 2012 (http://www.SciRP.org/journal/tel)

Stationary Vector Autoregressive Representation of Error Correction Models

Yun-Yeong Kim
Department of International Trade, Dankook University, Yong-In, Korea
Email: [email protected]

Received December 16, 2011; revised January 20, 2012; accepted January 28, 2012

ABSTRACT

The paper introduces a stationary vector autoregressive (VAR) representation of the error correction model (ECM). This representation explicitly regards the cointegration error as a dependent variable, making it possible to implement standard dynamic analyses with standard VAR models directly, particularly with respect to the cointegration error. Of course, an ECM does not have an explicit VAR form, and thus, it is not convenient for conducting such dynamic analyses. In this regard, we transform the original nonstationary VAR model into a VAR model with the cointegration error and stationary variables. Finally, we employ the model to dynamically analyze the real exchange rate between the US dollar and the Japanese yen.

Keywords: VAR Model; Cointegration; Error Correction Model; Stationary VAR

1. Introduction

The dynamic analysis of the cointegration error and stationary variables in the short run is as important as the long-run equilibrium for practitioners and policy makers. Of course, such work is possible through the classical error correction model (ECM), which was popularized by Engle and Granger (1987). Furthermore, the persistence profiles of Pesaran and Shin (1996) and Hansen (2005) are also useful alternatives for this purpose.

Another possible option is to follow the vector autoregressive (VAR) approach of Sims (1980), which is now a standard method. In particular, such work may be possible if we transform the ECM into a VAR form of the cointegration error and stationary variables, which would allow the more direct exploitation of the rich tools of VAR analysis (i.e., the Granger causality test, impulse response analysis, variance decomposition, and optimal forecasting). Obviously, an ECM does not have an explicit VAR form, and thus, it is not convenient for conducting such dynamic analyses.

The question, then, is whether we can transform the ECM into a finite-order VAR model of the cointegration error and stationary variables. In this regard, this paper derives a finite-order VAR model with the cointegration error that is conformable to the ECM for the short-run adjustment. In this VAR model, the cointegration error is regarded as a dependent variable, and VAR-type dynamic analyses may be conducted directly. Finally, we employ the model to dynamically analyze the real exchange rate between the US dollar and the Japanese yen.

The rest of this paper is organized as follows. Section 2 introduces the model and assumptions. Section 3 discusses the stationary VAR representation of ECMs. Section 4 presents the empirical results, and Section 5 concludes.

2. Model and Assumptions

We consider the $n$-dimensional, integrated of order one VAR($p$) process $z_t$ given by

$$ z_t = \Pi_1 z_{t-1} + \Pi_2 z_{t-2} + \cdots + \Pi_p z_{t-p} + \varepsilon_t \quad (1) $$

or

$$ \Delta z_t = \Pi z_{t-1} + \sum_{i=1}^{p-1} \Phi_i \Delta z_{t-i} + \varepsilon_t \quad (2) $$

where

$$ \Pi = \Pi_1 + \Pi_2 + \cdots + \Pi_p - I_n, \qquad \Phi_i = -\left( \Pi_{i+1} + \Pi_{i+2} + \cdots + \Pi_p \right), $$

and $\varepsilon_t$ is an $n \times 1$ vector of independently and identically distributed disturbances with a finite variance matrix $\Sigma > 0$, where $I_n$ denotes an $n$-dimensional identity matrix and $\Delta z_t = z_t - z_{t-1}$.

Further, we assume the cointegration of Model (1) (e.g., Johansen, 1995) as follows.

2.1. Assumption

We suppose $\Pi = \alpha \beta'$, where $\alpha$ and $\beta$ are $n \times r$ matrices of full column rank $r$.
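The algebra linking (1), (2), and Assumption 2.1 can be illustrated numerically. The following sketch (Python; the matrices alpha, beta, and Phi1 below are illustrative values chosen for exposition, not parameters from the paper) builds a bivariate cointegrated VAR(2) from ECM ingredients and confirms that the cointegration error $u_t = \beta' z_t$ stays bounded even though $z_t$ itself is integrated.

```python
# Minimal sketch with hypothetical parameter values: build a bivariate
# cointegrated VAR(2) from chosen ECM ingredients and check Assumption 2.1.
import numpy as np

n, r = 2, 1                                   # system dimension, cointegration rank
alpha = np.array([[0.1], [-0.3]])             # adjustment vectors (n x r)
beta = np.array([[-1.0], [1.0]])              # cointegration vector, u_t = y_t - x_t
Phi1 = np.array([[0.2, 0.0], [0.1, 0.1]])     # short-run dynamics in (2)

# Invert the algebra of (1)-(2) for p = 2: Pi = Pi_1 + Pi_2 - I_n, Phi_1 = -Pi_2
Pi = alpha @ beta.T
Pi1 = np.eye(n) + Pi + Phi1
Pi2 = -Phi1

# Pi = Pi_1 + Pi_2 - I_n has reduced rank r, as required by Assumption 2.1
assert np.linalg.matrix_rank(Pi1 + Pi2 - np.eye(n)) == r

# Simulate z_t from (1); the cointegration error u_t = beta' z_t stays bounded
rng = np.random.default_rng(0)
nobs = 500
z = np.zeros((nobs, n))
for t in range(2, nobs):
    z[t] = Pi1 @ z[t - 1] + Pi2 @ z[t - 2] + rng.standard_normal(n)
u = z @ beta                                   # cointegration error series
print("sample std of u_t:", u[100:].std())     # stationary despite z_t being I(1)
```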
Note that Model (2) may be written in an ECM as

$$ \Delta z_t = \alpha u_{t-1} + \sum_{i=1}^{p-1} \Phi_i \Delta z_{t-i} + \varepsilon_t \quad (3) $$

under Assumption 2.1, where $u_t = \beta' z_t$.

Model (3) obviously consists of all stationary variables under Assumption 2.1. We are now interested in the dynamic interaction between these variables $\Delta z_t$ and $u_t$. In this regard, we may apply the results of Pesaran and Shin (1996) and Hansen (2005). However, Sims' (1980) VAR approach is sometimes convenient for practical reasons. For instance, the optimal forecast of the k-th-period-ahead cointegration error $u_{t+k}$ is computed mechanically in a VAR system. Thus, we now show that Model (3) may be represented as a stationary VAR model in a subset of the variables $\Delta z_t$ and $u_t$ when there is an ECM.

3. Stationary VAR Representation of ECM

To obtain a stationary VAR representation, assume that a given cointegration vector is normalized as $\beta = (-\gamma', I_r)'$ of rank $r$, where $\gamma$ is $(n-r) \times r$. Then a conformable non-singular square matrix can be defined as

$$ T \equiv \begin{pmatrix} I_{n-r} & 0 \\ -\gamma' & I_r \end{pmatrix}. $$

Noteworthy is that the above lower triangular matrix $T$ transforms the VAR variable $z_t = (x_t', y_t')'$ into a variable $w_t$ composed of $x_t$ and the cointegration error $u_t$:

$$ T z_t = (x_t', u_t')' \equiv w_t . $$

Note that $x_t$ is the explanatory variable of $y_t$ in the cointegration relation $y_t = \gamma' x_t + u_t$. We then transform the above VAR model (1) of $z_t$ into a VAR model of the variable $w_t$. In particular, we multiply Model (1) on the left by $T$ and modify the VAR coefficients to get the following VAR model:

$$ w_t = \Pi_1^* w_{t-1} + \cdots + \Pi_p^* w_{t-p} + e_t \quad (4) $$

where $\Pi_i^* \equiv T \Pi_i T^{-1}$ for $i = 1, 2, \cdots, p$ and $e_t \equiv T \varepsilon_t$. Note that Model (4) is an observationally equivalent form of Model (1) or (3) if $\varepsilon_t$ has a Gaussian distribution; we cannot determine the correct model by simply analyzing the observed data.

Then it is helpful to write Model (4) as

$$ \Delta w_t = \Pi^* w_{t-1} + \sum_{i=1}^{p-1} \Phi_i^* \Delta w_{t-i} + e_t \quad (5) $$

where $\Pi^* \equiv \Pi_1^* + \Pi_2^* + \cdots + \Pi_p^* - I_n$ and $\Phi_i^* \equiv -\left( \Pi_{i+1}^* + \Pi_{i+2}^* + \cdots + \Pi_p^* \right)$. Then we may simply arrange $\Pi^*$ in Equation (5) as follows.

3.1. Proposition

Suppose that Assumption 2.1 holds. Then

$$ \Pi^* = \begin{pmatrix} 0 & \delta_1 \\ 0 & \delta_2 \end{pmatrix}, $$

where $\delta_1 \equiv \alpha_1$ is $(n-r) \times r$, $\delta_2 \equiv \alpha_2 - \gamma' \alpha_1$ is $r \times r$, $\alpha = (\alpha_1', \alpha_2')'$, and $\delta \equiv (\delta_1', \delta_2')' = T\alpha$.

Proof of Proposition 3.1. We first write, from the definition of $\Pi^*$,

$$ \Pi^* + I_n = \begin{pmatrix} I_{n-r} & 0 \\ -\gamma' & I_r \end{pmatrix} \begin{pmatrix} \pi_{11} & \pi_{12} \\ \pi_{21} & \pi_{22} \end{pmatrix} \begin{pmatrix} I_{n-r} & 0 \\ \gamma' & I_r \end{pmatrix} = \begin{pmatrix} \pi_{11} + \pi_{12}\gamma' & \pi_{12} \\ -\gamma'(\pi_{11} + \pi_{12}\gamma') + \pi_{21} + \pi_{22}\gamma' & \pi_{22} - \gamma'\pi_{12} \end{pmatrix} \quad (6) $$

by using

$$ T^{-1} = \begin{pmatrix} I_{n-r} & 0 \\ \gamma' & I_r \end{pmatrix} $$

for the first equality, where

$$ \begin{pmatrix} \pi_{11} & \pi_{12} \\ \pi_{21} & \pi_{22} \end{pmatrix} \equiv \Pi_1 + \Pi_2 + \cdots + \Pi_p = \Pi + I_n . $$

We now represent the submatrices of $\Pi + I_n$ in the last term of (6) by using $\alpha$ and $\gamma$, where $\Pi = \alpha\beta'$ under Assumption 2.1. For this, note that

$$ \begin{pmatrix} \pi_{11} & \pi_{12} \\ \pi_{21} & \pi_{22} \end{pmatrix} = \Pi + I_n = \alpha\beta' + I_n = \begin{pmatrix} I_{n-r} - \alpha_1\gamma' & \alpha_1 \\ -\alpha_2\gamma' & \alpha_2 + I_r \end{pmatrix} \quad (7) $$

because the matrix $\Pi$ can be written as

$$ \Pi = \Pi_1 + \Pi_2 + \cdots + \Pi_p - I_n \quad (8) $$

from the coefficient definition of (2) for the second equality, and as $\alpha\beta'$ with $\beta' = (-\gamma', I_r)$ under Assumption 2.1 for the third equality.

Then the third equality in Equation (7) directly implies that

$$ \pi_{11} = I_{n-r} - \alpha_1\gamma' \quad (9) $$
$$ \pi_{12} = \alpha_1 \quad (10) $$
$$ \pi_{21} = -\alpha_2\gamma' \quad (11) $$
$$ \pi_{22} = \alpha_2 + I_r . \quad (12) $$

Consequently, if we plug the submatrices (9)-(12) into the last term in Equation (6), then we get the following:

1) $\pi_{11} + \pi_{12}\gamma' = I_{n-r} - \alpha_1\gamma' + \alpha_1\gamma' = I_{n-r}$;

2) $\pi_{12} = \alpha_1$;

3) $-\gamma'(\pi_{11} + \pi_{12}\gamma') + \pi_{21} + \pi_{22}\gamma' = -\gamma' - \alpha_2\gamma' + \alpha_2\gamma' + \gamma' = 0$;

4) $\pi_{22} - \gamma'\pi_{12} = \alpha_2 - \gamma'\alpha_1 + I_r$.

The above results 1)-4), together with the equality in (6), prove the claimed result, since

$$ \Pi^* + I_n = \begin{pmatrix} I_{n-r} & \alpha_1 \\ 0 & \alpha_2 - \gamma'\alpha_1 + I_r \end{pmatrix} = \begin{pmatrix} I_{n-r} & \delta_1 \\ 0 & \delta_2 + I_r \end{pmatrix}. \; \blacksquare $$
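The content of Proposition 3.1 can also be verified numerically. The sketch below (again with illustrative parameter values rather than anything estimated in the paper) forms $T$, the transformed coefficients $\Pi_i^* = T \Pi_i T^{-1}$ of Model (4), and checks that $\Pi^* = \Pi_1^* + \Pi_2^* - I_n$ has a zero first block column, with the remaining block column equal to $\delta = T\alpha$.

```python
# Minimal numerical check of Proposition 3.1, continuing the hypothetical
# VAR(2) used above: Pi* = T Pi T^{-1} has a zero first block column, so only
# u_{t-1} enters the transformed ECM (5).
import numpy as np

n, r = 2, 1
alpha = np.array([[0.1], [-0.3]])
gamma = np.array([[1.0]])                      # beta' = (-gamma', I_r)
Phi1 = np.array([[0.2, 0.0], [0.1, 0.1]])
beta = np.vstack([-gamma, np.eye(r)])

Pi = alpha @ beta.T                            # Pi = alpha beta'  (Assumption 2.1)
Pi1 = np.eye(n) + Pi + Phi1                    # level VAR(2) coefficients
Pi2 = -Phi1

# Transformation T of Section 3 and the coefficients of Models (4)-(5)
Tmat = np.block([[np.eye(n - r), np.zeros((n - r, r))], [-gamma.T, np.eye(r)]])
Tinv = np.linalg.inv(Tmat)
Pi1_star, Pi2_star = Tmat @ Pi1 @ Tinv, Tmat @ Pi2 @ Tinv
Pi_star = Pi1_star + Pi2_star - np.eye(n)

delta = Tmat @ alpha                           # (delta_1', delta_2')' = T alpha
claim = np.hstack([np.zeros((n, n - r)), delta])
print(np.allclose(Pi_star, claim))             # True: Pi* = [0, delta]
```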
Note that both the dependent and explanatory variables in (5), $w_t$ and $w_{t-1}$, contain the nonstationary variables $x_t$ and $x_{t-1}$. Thus, we now show how we may transform Model (5) of $w_t$ into a VAR model of the purely stationary variable $\tilde{w}_t \equiv (\Delta x_t', u_t')'$.

3.2. Theorem

Suppose that Assumption 2.1 holds. Then

$$ \tilde{w}_t = \Theta_1 \tilde{w}_{t-1} + \Theta_2 \tilde{w}_{t-2} + \cdots + \Theta_p \tilde{w}_{t-p} + e_t \quad (13) $$

where $\tilde{w}_{t-i} = (\Delta x_{t-i}', u_{t-i}')'$,

$$ \Theta_1 = \Phi_1^* + \begin{pmatrix} 0 & \delta_1 \\ 0 & \delta_2 + I_r \end{pmatrix}, \qquad \Theta_i = \Phi_i^* - \begin{pmatrix} 0 & \Phi_{i-1,12}^* \\ 0 & \Phi_{i-1,22}^* \end{pmatrix} $$

for $i = 2, 3, \cdots, p-1$, and

$$ \Theta_p = -\begin{pmatrix} 0 & \Phi_{p-1,12}^* \\ 0 & \Phi_{p-1,22}^* \end{pmatrix}, $$

where $\Phi_i^* = \begin{pmatrix} \Phi_{i,11}^* & \Phi_{i,12}^* \\ \Phi_{i,21}^* & \Phi_{i,22}^* \end{pmatrix}$ is partitioned conformably with $w_t = (x_t', u_t')'$.

Proof of Theorem 3.2. Using Proposition 3.1, we first rewrite Model (5) as

$$ \Delta w_t = \begin{pmatrix} 0 & \delta_1 \\ 0 & \delta_2 \end{pmatrix} w_{t-1} + \sum_{i=1}^{p-1} \Phi_i^* \Delta w_{t-i} + e_t $$

or

$$ \begin{pmatrix} \Delta x_t \\ \Delta u_t \end{pmatrix} = \begin{pmatrix} \delta_1 \\ \delta_2 \end{pmatrix} u_{t-1} + \sum_{i=1}^{p-1} \Phi_i^* \Delta w_{t-i} + e_t . \quad (14) $$

The right-hand side of (14) may be written as

$$ \begin{pmatrix} \delta_1 \\ \delta_2 \end{pmatrix} u_{t-1} + \sum_{i=1}^{p-1} \left[ \begin{pmatrix} \Phi_{i,11}^* \\ \Phi_{i,21}^* \end{pmatrix} \Delta x_{t-i} + \begin{pmatrix} \Phi_{i,12}^* \\ \Phi_{i,22}^* \end{pmatrix} \left( u_{t-i} - u_{t-i-1} \right) \right] + e_t \quad (15) $$

because $\Delta u_{t-i} = u_{t-i} - u_{t-i-1}$. Finally, the terms in (15) may be collected with respect to $\tilde{w}_{t-1}, \tilde{w}_{t-2}, \cdots, \tilde{w}_{t-p}$; adding $(0', u_{t-1}')'$ to both sides of (14) to replace $\Delta u_t$ with $u_t$ then yields the claimed VAR model of $\tilde{w}_t$:

$$ \tilde{w}_t = \Theta_1 \tilde{w}_{t-1} + \Theta_2 \tilde{w}_{t-2} + \cdots + \Theta_p \tilde{w}_{t-p} + e_t . \quad (16) $$

Note the restrictions

$$ \Theta_p \begin{pmatrix} I_{n-r} \\ 0 \end{pmatrix} = 0 \quad (17) $$

from (16). $\blacksquare$

Note that the restrictions (17) are imposed on $\Theta_p$ by Theorem 3.2, so $\Delta x_{t-p}$ does not appear in Equation (13). If these restrictions (17) are not exploited and the coefficient $\Theta_p$ is estimated with errors, then the efficiency of dynamic analyses using these estimated coefficients may be reduced. For instance, if we produce forecasts using a VAR model, then the forecasting mean squared error may be increased by not using the restrictions (17). Campbell and Shiller (1987, Equation (5)) used the system (13) without referring to the VAR model (1) of the level variable $z_t$, the rank deficiency of the matrix $\Pi$ in Assumption 2.1, or the restriction (17) of Theorem 3.2 on the coefficient $\Theta_p$.

Further, note that the vector moving average model of $\tilde{w}_t$ is defined as $\tilde{w}_t = \Theta(L)^{-1} e_t$ from (13) if $\Theta(L)$ is invertible, where $L$ denotes a time-lag operator and $\Theta(L) \equiv I_n - \Theta_1 L - \cdots - \Theta_p L^p$. This is conformable to the result in Hansen (2005, Corollary 1).

In the following example, we transform an ECM into a VAR model of stationary variables.

3.3. Example

For a VAR(2), an ECM representation can be given as

$$ \Delta z_t = \alpha u_{t-1} + \Phi_1 \Delta z_{t-1} + \varepsilon_t \quad (18) $$

under Assumption 2.1. The second-order VAR representation of $\tilde{w}_t$ is also possible as

$$ \begin{pmatrix} \Delta x_t \\ u_t \end{pmatrix} = \Theta_1 \begin{pmatrix} \Delta x_{t-1} \\ u_{t-1} \end{pmatrix} + \Theta_2 \begin{pmatrix} \Delta x_{t-2} \\ u_{t-2} \end{pmatrix} + e_t \quad (19) $$

defining $\Theta_1$ and $\Theta_2$ from Model (14) by using Proposition 3.1. The right-hand side of (19) may be written as

$$ \begin{pmatrix} \delta_1 \\ \delta_2 + I_r \end{pmatrix} u_{t-1} + \Phi_1^* \begin{pmatrix} \Delta x_{t-1} \\ u_{t-1} - u_{t-2} \end{pmatrix} + e_t \quad (20) $$

defining

$$ \Theta_1 = \Phi_1^* + \begin{pmatrix} 0 & \delta_1 \\ 0 & \delta_2 + I_r \end{pmatrix} \qquad \text{and} \qquad \Theta_2 = -\begin{pmatrix} 0 & \Phi_{1,12}^* \\ 0 & \Phi_{1,22}^* \end{pmatrix} $$

conformably.
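The coefficient construction of Theorem 3.2 in the VAR(2) case can likewise be checked by simulation. The sketch below uses the same hypothetical parameter values as the earlier snippets (nothing here comes from the paper's empirical section): it builds $\Theta_1$ and $\Theta_2$, verifies that $\tilde{w}_t = (\Delta x_t', u_t')'$ follows the stationary VAR (19) exactly with errors $e_t = T\varepsilon_t$, and then iterates (19) to produce the mechanical k-step-ahead point forecast of the cointegration error discussed in Section 3.

```python
# Minimal sketch for Example 3.3 with the same hypothetical VAR(2) as above:
# build Theta_1, Theta_2 of Theorem 3.2, check that w~_t = (dx_t', u_t')'
# satisfies the stationary VAR (19) exactly, then forecast u_{t+k} from (19).
import numpy as np

n, r = 2, 1
alpha = np.array([[0.1], [-0.3]])
gamma = np.array([[1.0]])
Phi1 = np.array([[0.2, 0.0], [0.1, 0.1]])
beta = np.vstack([-gamma, np.eye(r)])

Tmat = np.block([[np.eye(n - r), np.zeros((n - r, r))], [-gamma.T, np.eye(r)]])
Phi1_star = Tmat @ Phi1 @ np.linalg.inv(Tmat)
delta = Tmat @ alpha

# Theorem 3.2 coefficients for p = 2 (Example 3.3)
D = np.hstack([np.zeros((n, n - r)), delta])          # [0, delta]
D[n - r:, n - r:] += np.eye(r)                        # add I_r to the u-block
Theta1 = Phi1_star + D
Theta2 = np.hstack([np.zeros((n, n - r)), -Phi1_star[:, n - r:]])

# Simulate the level VAR and verify (19): residuals reproduce e_t = T eps_t
rng = np.random.default_rng(1)
Tn = 400
Pi1 = np.eye(n) + alpha @ beta.T + Phi1
Pi2 = -Phi1
z = np.zeros((Tn, n))
eps = rng.standard_normal((Tn, n))
for t in range(2, Tn):
    z[t] = Pi1 @ z[t - 1] + Pi2 @ z[t - 2] + eps[t]
dx = np.diff(z[:, :n - r], axis=0)                    # Delta x_t
u = (z @ beta)[1:]                                    # u_t, aligned with dx
w = np.hstack([dx, u])                                # w~_t = (dx_t', u_t')'
resid = w[2:] - w[1:-1] @ Theta1.T - w[:-2] @ Theta2.T
print(np.allclose(resid, eps[3:] @ Tmat.T))           # True

# Mechanical k-step-ahead point forecast of the cointegration error from (19)
w1, w2 = w[-1], w[-2]
for _ in range(8):
    w1, w2 = Theta1 @ w1 + Theta2 @ w2, w1
print("8-step-ahead forecast of u:", w1[-1])
```

The verification relies only on NumPy; in applied work one would instead estimate $\Theta_1$ and $\Theta_2$ from data (for instance with an off-the-shelf VAR routine fitted to $\tilde{w}_t$), which is precisely the convenience the stationary VAR representation is meant to deliver.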