VAR Models & Granger Causality

VECTOR TIME SERIES

• A vector series consists of multiple single series.
• Why do we need multiple series?
  – To understand the relationships between the several components.
  – To obtain better forecasts.
• Price movements in one market can spread easily and instantly to another market. For this reason, financial markets are more dependent on each other than ever before, and we have to consider them jointly to better understand the dynamic structure of the global market. Knowing how markets are interrelated is of great importance in finance.
• For an investor or a financial institution holding multiple assets, the joint behaviour of those assets plays an important role in decision making.

• Consider an m-dimensional time series $Y_t = (Y_{1,t}, Y_{2,t}, \ldots, Y_{m,t})'$. The series $Y_t$ is weakly stationary if its first two moments are time invariant and the cross-covariance between $Y_{i,t}$ and $Y_{j,s}$ is, for all i and j, a function of the time difference (t − s) only. Note, however, that cross-covariances are in general not symmetric, i.e. $\mathrm{cov}(Y_{i,t}, Y_{j,s}) \neq \mathrm{cov}(Y_{i,s}, Y_{j,t})$.

• The mean vector: $\mu = E(Y_t) = (\mu_1, \mu_2, \ldots, \mu_m)'$.
• The cross-covariance matrix function:

$$\Gamma(k) = \mathrm{Cov}(Y_{t+k}, Y_t) = E\left[(Y_{t+k} - \mu)(Y_t - \mu)'\right] = \begin{bmatrix} \gamma_{11}(k) & \gamma_{12}(k) & \cdots & \gamma_{1m}(k) \\ \gamma_{21}(k) & \gamma_{22}(k) & \cdots & \gamma_{2m}(k) \\ \vdots & \vdots & & \vdots \\ \gamma_{m1}(k) & \gamma_{m2}(k) & \cdots & \gamma_{mm}(k) \end{bmatrix}$$

• The cross-correlation matrix function:

$$\rho(k) = D^{-1/2}\,\Gamma(k)\,D^{-1/2} = \left[\rho_{ij}(k)\right],$$

where D is a diagonal matrix whose i-th diagonal element is the variance of the i-th process, i.e. $D = \mathrm{diag}\left(\gamma_{11}(0), \gamma_{22}(0), \ldots, \gamma_{mm}(0)\right)$.

VECTOR WHITE NOISE PROCESS

• $\{a_t\} \sim WN(0, \Sigma)$ if and only if $\{a_t\}$ is stationary with mean vector 0 and

$$\Gamma(k) = \begin{cases} \Sigma, & k = 0, \\ 0, & \text{otherwise.} \end{cases}$$

LINEAR PROCESSES

• $\{Y_t\}$ is a linear process if it can be expressed as

$$Y_t = \sum_{j=0}^{\infty} \Psi_j\, a_{t-j}, \qquad a_t \sim WN(0, \Sigma),$$

where $\{\Psi_j\}$ is a sequence of m×m matrices whose entries are absolutely summable, i.e. $\sum_{j=0}^{\infty} |\psi_{j,il}| < \infty$ for $i, l = 1, 2, \ldots, m$.

• For a linear process, $E(Y_t) = 0$ and

$$\Gamma(k) = \sum_{j=0}^{\infty} \Psi_{j+k}\, \Sigma\, \Psi_j', \qquad k = 0, 1, 2, \ldots
$$
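The cross-covariance matrix function above can be estimated directly from data. A minimal numpy sketch (illustrative only; the helper `cross_cov` and the simulated data are my own, not from the slides). For white noise, the sample estimates should be close to $\Sigma$ at lag 0 and close to zero elsewhere, matching the definition of a vector white noise process.

```python
import numpy as np

def cross_cov(Y, k):
    """Sample Gamma(k) = (1/n) * sum_t (Y_{t+k} - Ybar)(Y_t - Ybar)' for k >= 0."""
    n = Y.shape[0]
    Yc = Y - Y.mean(axis=0)
    return Yc[k:].T @ Yc[:n - k] / n

rng = np.random.default_rng(0)
Y = rng.standard_normal((5000, 2))   # bivariate white noise with Sigma = I
print(np.round(cross_cov(Y, 0), 1))  # close to the 2x2 identity matrix
print(np.round(cross_cov(Y, 1), 1))  # close to the zero matrix
```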
MA (WOLD) REPRESENTATION

$$Y_t = \Psi(B)\, a_t, \qquad \Psi(B) = \sum_{s=0}^{\infty} \Psi_s B^s, \quad \Psi_0 = I.$$

• For the process to be stationary, the $\Psi_s$ should be square summable in the sense that each of the m×m sequences $\psi_{ij,s}$ is square summable.

AR REPRESENTATION

$$\Pi(B)\, Y_t = a_t, \qquad \Pi(B) = I - \sum_{s=1}^{\infty} \Pi_s B^s.$$

• For the process to be invertible, the $\Pi_s$ should be absolutely summable.

THE VECTOR AUTOREGRESSIVE MOVING AVERAGE (VARMA) PROCESSES

• VARMA(p, q) process:

$$\Phi_p(B)\, Y_t = \Theta_q(B)\, a_t,$$

where

$$\Phi_p(B) = \Phi_0 - \Phi_1 B - \cdots - \Phi_p B^p, \qquad \Theta_q(B) = \Theta_0 - \Theta_1 B - \cdots - \Theta_q B^q.$$

• If $\Theta_q(B) = \Theta_0$, then $\Phi_p(B) Y_t = a_t$ is a VAR(p) process; if $\Phi_p(B) = \Phi_0$, then $Y_t = \Theta_q(B) a_t$ is a VMA(q) process.

VARMA PROCESS

• A VARMA process is stationary if the zeros of $|\Phi_p(B)|$ lie outside the unit circle; then

$$Y_t = \Phi_p^{-1}(B)\, \Theta_q(B)\, a_t.$$

• A VARMA process is invertible if the zeros of $|\Theta_q(B)|$ lie outside the unit circle; then

$$\Pi(B)\, Y_t = a_t, \qquad \Pi(B) = \Theta_q^{-1}(B)\, \Phi_p(B).$$

IDENTIFIABILITY PROBLEM

• Multiplying both matrix polynomials by an arbitrary matrix polynomial may yield an identical covariance matrix, so the VARMA(p, q) model is not identifiable: we cannot uniquely determine p and q.

• Example: the VARMA(1, 1) process

$$\begin{bmatrix} Y_{1,t} \\ Y_{2,t} \end{bmatrix} - \begin{bmatrix} 0 & m \\ 0 & 0 \end{bmatrix}\begin{bmatrix} Y_{1,t-1} \\ Y_{2,t-1} \end{bmatrix} = \begin{bmatrix} a_{1,t} \\ a_{2,t} \end{bmatrix} - \begin{bmatrix} 0 & m-1 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} a_{1,t-1} \\ a_{2,t-1} \end{bmatrix}$$

can be written in operator form as

$$\begin{bmatrix} 1 & -mB \\ 0 & 1 \end{bmatrix} Y_t = \begin{bmatrix} 1 & -(m-1)B \\ 0 & 1 \end{bmatrix} a_t,$$

and inverting the AR operator gives

$$Y_t = \begin{bmatrix} 1 & mB \\ 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & -(m-1)B \\ 0 & 1 \end{bmatrix} a_t = \begin{bmatrix} 1 & B \\ 0 & 1 \end{bmatrix} a_t,$$

which is the same VMA(1) process for every value of m.

IDENTIFIABILITY

• To eliminate this problem, three methods were suggested by Hannan (1969, 1970, 1976, 1979):
  – From each class of equivalent models, choose the minimum MA order q and then the minimum AR order p. The resulting representation is unique if $\mathrm{Rank}(\Phi_p(B)) = m$.
  – Represent $\Phi_p(B)$ in lower-triangular form. If the order of $\phi_{ij}(B)$ does not exceed the order of $\phi_{ii}(B)$ for i, j = 1, 2, …, m, then the model is identifiable.
  – Represent $\Phi_p(B)$ in the form $\Phi_p(B) = \phi_p(B)\, I$, where $\phi_p(B)$ is a univariate AR(p) polynomial. The model is identifiable if $\phi_p \neq 0$.

VAR(1) PROCESS

• $Y_{i,t}$ depends not only on the lagged values of $Y_{i,t}$ but also on the lagged values of the other variables:

$$(I - \Phi B)\, Y_t = a_t.$$

• Always invertible.
• Stationary if the zeros of $|I - \Phi B| = 0$ lie outside the unit circle. Letting $\lambda = B^{-1}$,

$$|I - \Phi B| = 0 \iff |\lambda I - \Phi| = 0,$$

so the zeros of $|I - \Phi B|$ are related to the eigenvalues of $\Phi$.
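This eigenvalue condition is easy to check numerically. A short numpy sketch, with an illustrative 2×2 coefficient matrix chosen so that the characteristic polynomial is $\lambda^2 - 1.3\lambda + 0.4$:

```python
# Checking VAR(1) stationarity via the eigenvalues of Phi.
# Phi is an illustrative example matrix, not estimated from data.
import numpy as np

Phi = np.array([[1.1, -0.3],
                [0.6,  0.2]])
eigenvalues = np.linalg.eigvals(Phi)
print(np.round(np.sort(eigenvalues.real), 3))   # -> [0.5 0.8]
stationary = np.all(np.abs(eigenvalues) < 1)
print(stationary)                               # both inside the unit circle
```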
VAR(1) PROCESS

• Hence, a VAR(1) process is stationary if the eigenvalues $\lambda_i$, i = 1, 2, …, m, of $\Phi$ all lie inside the unit circle.
• The autocovariance matrix function (taking mean 0): for $k \geq 1$,

$$\Gamma(k) = E(Y_{t+k} Y_t') = E\left[(\Phi Y_{t+k-1} + a_{t+k})\, Y_t'\right] = \Phi\,\Gamma(k-1) = \Phi^k\,\Gamma(0),$$

since $a_{t+k}$ is uncorrelated with $Y_t$.
• For k = 0,

$$\Gamma(0) = E\left[(\Phi Y_{t-1} + a_t)(\Phi Y_{t-1} + a_t)'\right] = \Phi\,\Gamma(0)\,\Phi' + \Sigma.$$

• Then, applying the vec operator,

$$\mathrm{vec}(\Gamma(0)) = \left(I - \Phi \otimes \Phi\right)^{-1} \mathrm{vec}(\Sigma),$$

using the identity $\mathrm{vec}(ABC) = (C' \otimes A)\,\mathrm{vec}(B)$, where ⊗ denotes the Kronecker product

$$A \otimes B = \begin{bmatrix} a_{11}B & \cdots & a_{1n}B \\ \vdots & & \vdots \\ a_{m1}B & \cdots & a_{mn}B \end{bmatrix}$$

and vec stacks the columns of a matrix, e.g.

$$X = \begin{bmatrix} 3 & 2 \\ 4 & 1 \\ 6 & 7 \end{bmatrix} \;\Rightarrow\; \mathrm{vec}(X) = (3, 4, 6, 2, 1, 7)'.$$

• Example:

$$Y_t = \begin{bmatrix} 1.1 & -0.3 \\ 0.6 & 0.2 \end{bmatrix} Y_{t-1} + a_t$$

$$|\lambda I - \Phi| = \det\begin{bmatrix} \lambda - 1.1 & 0.3 \\ -0.6 & \lambda - 0.2 \end{bmatrix} = (\lambda - 1.1)(\lambda - 0.2) + 0.6 \times 0.3 = \lambda^2 - 1.3\lambda + 0.4 = 0$$

$$\lambda_1 = 0.8, \qquad \lambda_2 = 0.5.$$

Both eigenvalues are inside the unit circle, so the process is stationary.

VMA(1) PROCESS

$$Y_t = a_t - \Theta\, a_{t-1}, \qquad a_t \sim WN(0, \Sigma).$$

• Always stationary.
• The autocovariance matrix function:

$$\Gamma(k) = \begin{cases} \Sigma + \Theta\,\Sigma\,\Theta', & k = 0, \\ -\Theta\,\Sigma, & k = 1, \\ -\Sigma\,\Theta', & k = -1, \\ 0, & \text{otherwise,} \end{cases}$$

i.e. the autocovariance matrix function cuts off after lag 1.
• A VMA(1) process is invertible if the eigenvalues $\lambda_i$, i = 1, 2, …, m, of $\Theta$ all lie inside the unit circle.

IDENTIFICATION OF VARMA PROCESSES

• Same as in the univariate case.
• SAMPLE CORRELATION MATRIX FUNCTION: given a vector series of n observations, the sample correlation matrix function is

$$\hat{\rho}(k) = \left[\hat{\rho}_{ij}(k)\right],$$

where the $\hat{\rho}_{ij}(k)$ are the sample cross-correlations of the i-th and j-th component series.
• It is very useful for identifying a VMA(q).

SAMPLE CORRELATION MATRIX FUNCTION

• Tiao and Box (1981) proposed using +, − and . signs to summarize the significance of the cross-correlations:
  – + : the value is greater than 2 times the estimated standard error;
  – − : the value is less than −2 times the estimated standard error;
  – . : the value is within ±2 times the estimated standard error.

PARTIAL AUTOREGRESSION OR PARTIAL LAG CORRELATION MATRIX FUNCTION

• These are useful for identifying the VAR order. The partial autoregression matrix function was proposed by Tiao and Box (1981), but it is not a proper correlation coefficient. Heyse and Wei (1985) therefore proposed the partial lag correlation matrix function, which is a proper correlation coefficient. Both can be used to identify a VARMA(p, q).
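The lag-1 cutoff that makes the sample correlation matrices useful for identifying a VMA(q) can be seen in a small simulation. A hedged numpy sketch with an arbitrary illustrative $\Theta$ (not taken from the slides): the sample autocovariance at lag 1 should approximate $-\Theta\Sigma$, and higher lags should be near zero.

```python
# Simulate a bivariate VMA(1), Y_t = a_t - Theta a_{t-1}, and check that
# its sample autocovariance matrices cut off after lag 1.
import numpy as np

rng = np.random.default_rng(42)
Theta = np.array([[ 0.6, 0.3],
                  [-0.2, 0.5]])        # arbitrary illustrative MA matrix
n = 20000
a = rng.standard_normal((n + 1, 2))    # white noise with Sigma = I
Y = a[1:] - a[:-1] @ Theta.T           # Y_t = a_t - Theta a_{t-1}

def cross_cov(Y, k):
    """Sample Gamma(k) for k >= 0."""
    Yc = Y - Y.mean(axis=0)
    return Yc[k:].T @ Yc[:len(Y) - k] / len(Y)

print(np.round(cross_cov(Y, 1), 2))    # close to -Theta (since Sigma = I)
print(np.round(cross_cov(Y, 2), 2))    # close to the zero matrix
```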
EXAMPLE OF VAR MODELING IN R

• The "vars" package deals with VAR models.
• Let's consider the Canadian data for an application of the model.
• Canadian time series for labour productivity (prod), employment (e), unemployment rate (U) and real wages (rw) (source: OECD database).
• The series is quarterly; the sample runs from the 1st quarter of 1980 to the 4th quarter of 2000.

Canadian example

> library(vars)
> data(Canada)
> layout(matrix(1:4, nrow = 2, ncol = 2))
> plot.ts(Canada[, "e"], main = "Employment", ylab = "", xlab = "")
> plot.ts(Canada[, "prod"], main = "Productivity", ylab = "", xlab = "")
> plot.ts(Canada[, "rw"], main = "Real Wage", ylab = "", xlab = "")
> plot.ts(Canada[, "U"], main = "Unemployment Rate", ylab = "", xlab = "")

(Canada is a multivariate "ts" object, so its columns are selected with [, "name"] rather than $.)

• An optimal lag order can be determined according to an information criterion or the final prediction error of a VAR(p) with the function VARselect().

> VARselect(Canada, lag.max = 5, type = "const")
$selection
AIC(n)  HQ(n)  SC(n) FPE(n)
     3      2      2      3

• According to the more conservative SC(n) and HQ(n) criteria, the empirical optimal lag order is 2.
• In a next step, the VAR(2) is estimated with the function VAR(), with a constant included as deterministic regressor.

> var.2c <- VAR(Canada, p = 2, type = "const")
> names(var.2c)
 [1] "varresult"    "datamat"      "y"            "type"         "p"
 [6] "K"            "obs"          "totobs"       "restrictions" "call"
> summary(var.2c)
> plot(var.2c)

• The OLS results of the example are shown in separate tables 1–4 below. It turns out that not all lagged endogenous variables enter significantly into the equations of the VAR(2).
• The stability of the system of difference equations has to be checked. If the moduli of the eigenvalues of the companion matrix are all less than one, the system is stable.
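The companion-matrix check that roots() performs can be sketched by hand. A hedged numpy illustration with made-up 2×2 coefficient matrices (not the Canada estimates): a VAR(2) $Y_t = \Phi_1 Y_{t-1} + \Phi_2 Y_{t-2} + a_t$ is rewritten as a VAR(1) in the stacked vector $(Y_t', Y_{t-1}')'$, and stability is read off the eigenvalue moduli of the companion matrix.

```python
# Stability check for a VAR(2): stack Phi1, Phi2 into the companion matrix
#     A = [ Phi1  Phi2 ]
#         [  I     0   ]
# and verify that all eigenvalue moduli are below one.
# Phi1 and Phi2 are made-up examples, not the Canada estimates.
import numpy as np

Phi1 = np.array([[0.5, 0.1],
                 [0.0, 0.4]])
Phi2 = np.array([[0.2, 0.0],
                 [0.1, 0.1]])
m = Phi1.shape[0]
A = np.block([[Phi1, Phi2],
              [np.eye(m), np.zeros((m, m))]])
moduli = np.abs(np.linalg.eigvals(A))
print(np.round(np.sort(moduli)[::-1], 4))  # eigenvalue moduli, largest first
print(np.all(moduli < 1))                  # True: this VAR(2) is stable
```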
> roots(var.2c)
[1] 0.9950338 0.9081062 0.9081062 0.7380565 0.7380565 0.1856381 0.1428889 0.1428889

Although the first eigenvalue is pretty close to unity, for the sake of simplicity we assume a stable VAR(2) process with a constant as deterministic regressor.

Restricted VARs

• From tables 1–4 it is obvious that not all regressors enter significantly.
• With the function restrict() the user has the option to re-estimate the VAR either by significance (argument method = "ser") or by imposing zero restrictions manually (argument method = "manual").
• In the former case, each equation is re-estimated separately as long as there are t-values that are in absolute value below the threshold set by the function's argument thresh.
• In the latter case, a restriction matrix has to be provided that consists of 0/1 values, thereby selecting the coefficients to be retained in the model.
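The equation-by-equation logic behind the manual 0/1 restriction matrix can be mimicked in a few lines. A hedged numpy sketch (the helper, names and data are hypothetical illustrations, not the vars-package implementation or the Canada estimates): coefficients flagged 0 are pinned at zero and only the flagged-1 columns enter the OLS fit.

```python
# Sketch of the idea behind restrict(..., method = "manual"):
# per-equation OLS where a 0/1 vector selects which coefficients to keep.
import numpy as np

def restricted_ols(X, y, keep):
    """OLS of y on the columns of X flagged in `keep`; dropped coefficients are 0."""
    beta = np.zeros(X.shape[1])
    cols = np.flatnonzero(keep)
    beta[cols] = np.linalg.lstsq(X[:, cols], y, rcond=None)[0]
    return beta

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 4))                  # 4 candidate regressors
y = 2.0 * X[:, 0] - 0.5 * X[:, 2] + 0.1 * rng.standard_normal(200)
keep = np.array([1, 0, 1, 0])                      # restriction row: keep 1st and 3rd
beta = restricted_ols(X, y, keep)
print(np.round(beta, 2))                           # 2nd and 4th coefficients exactly 0
```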
