
18.5 Bivariate Time Series

Suppose $x_1, x_2, \ldots, x_n$ and $y_1, y_2, \ldots, y_n$ are sequences of observations made across time, and the problem is to describe their sequential relationship. For example, an increase in $y_t$ may tend to occur following some increase or decrease in a linear combination of some of the preceding $x_t$ values. This is the sort of possibility that bivariate time series analysis aims to describe.

Example. See Fig. 10 of Kaminski et al. (2001, Biol. Cybern.) and Fig. 2 of Brovelli et al. (2004, PNAS). □

The theoretical framework of such efforts begins, again, with stationarity. A joint process $\{(X_t, Y_t),\ t \in \mathbb{Z}\}$ is said to be strictly stationary if the joint distribution of $\{(X_t, Y_t), \ldots, (X_{t+h}, Y_{t+h})\}$ is the same as that of $\{(X_s, Y_s), \ldots, (X_{s+h}, Y_{s+h})\}$ for all integers $s$, $t$, $h$. The process is weakly stationary if each of $X_t$ and $Y_t$ is weakly stationary, with means and autocovariance functions $\mu_X$, $\gamma_X(h)$ and $\mu_Y$, $\gamma_Y(h)$, and, in addition, the cross-covariance function
$$
\gamma_{XY}(s, t) = E\big((X_s - \mu_X)(Y_t - \mu_Y)\big)
$$
depends on $s$ and $t$ only through their difference $h = s - t$, in which case we write it in the form
$$
\gamma_{XY}(h) = E\big((X_{t+h} - \mu_X)(Y_t - \mu_Y)\big).
$$
Note that $\gamma_{XY}(h) = \gamma_{YX}(-h)$. The cross-correlation function of $\{(X_t, Y_t)\}$ is
$$
\rho_{XY}(h) = \frac{\gamma_{XY}(h)}{\sigma_X \sigma_Y}
$$
where $\sigma_X = \sqrt{\gamma_X(0)}$ and similarly for $Y_t$.

An important result is the following. Suppose we have
$$
Y_t = \beta X_{t-\ell} + W_t \tag{18.43}
$$
where $X_t$ is stationary and $W_t$ is stationary with mean zero and variance $\sigma_W^2$, independently of $X_s$ for all $s$. Here, $\ell$ is the lag of $Y_t$ behind $X_t$. Then
$$
\mu_Y = E(\beta X_{t-\ell} + W_t) = \beta \mu_X
$$
and
$$
\begin{aligned}
\gamma_{YX}(h) &= E\big((Y_{t+h} - \mu_Y)(X_t - \mu_X)\big) \\
&= E\big((\beta X_{t+h-\ell} + W_{t+h} - \beta\mu_X)(X_t - \mu_X)\big) \\
&= \beta\, E\big((X_{t+h-\ell} - \mu_X)(X_t - \mu_X)\big) + 0,
\end{aligned}
$$
which gives
$$
\gamma_{YX}(h) = \beta\, \gamma_X(h - \ell). \tag{18.44}
$$
Thus, when $Y_t$ is given by (18.43), the cross-covariance function will look like the autocovariance function of $X_t$, but shifted by lag $\ell$. Similarly, the cross-correlation function will be
$$
\rho_{YX}(h) = \frac{\beta\, \gamma_X(h - \ell)}{\sqrt{\beta^2 \sigma_X^2 + \sigma_W^2}\; \sigma_X} = \frac{\beta \sigma_X}{\sqrt{\beta^2 \sigma_X^2 + \sigma_W^2}}\, \rho_X(h - \ell).
$$
This provides one of the basic intuitions behind much practical use of cross-correlations: one examines them to look for lead/lag relationships.

The cross-covariance and cross-correlation functions are estimated by their sample counterparts:
$$
\hat\gamma_{XY}(h) = \frac{1}{n} \sum_{t=1}^{n-h} (x_{t+h} - \bar{x})(y_t - \bar{y})
$$
with $\hat\gamma_{XY}(-h) = \hat\gamma_{YX}(h)$, and
$$
\hat\rho_{XY}(h) = \frac{\hat\gamma_{XY}(h)}{\hat\sigma_X \hat\sigma_Y}.
$$
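As a concrete illustration, here is a minimal sketch in Python with NumPy (the simulation settings and the helper name `sample_ccf` are illustrative assumptions, not from the text). It simulates the lagged-signal model (18.43) and locates the peak of the sample cross-correlation, which by (18.44) should fall near the true lag $\ell$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate the lagged-signal model (18.43): Y_t = beta * X_{t-ell} + W_t.
n, beta, ell = 500, 2.0, 5
x_full = rng.normal(size=n + ell)
x = x_full[ell:]                                       # X_t for t = 0, ..., n-1
y = beta * x_full[:n] + rng.normal(scale=0.5, size=n)  # Y_t = beta*X_{t-ell} + W_t

def sample_ccf(a, b, max_lag):
    """Sample cross-correlation rho_hat_{ab}(h) for h = -max_lag, ..., max_lag,
    using gamma_hat_{ab}(h) = (1/n) sum_t (a_{t+h} - abar)(b_t - bbar)."""
    n = len(a)
    ac, bc = a - a.mean(), b - b.mean()
    denom = n * a.std() * b.std()
    ccf = {}
    for h in range(-max_lag, max_lag + 1):
        if h >= 0:
            ccf[h] = np.dot(ac[h:], bc[:n - h]) / denom
        else:
            ccf[h] = np.dot(bc[-h:], ac[:n + h]) / denom
    return ccf

# rho_hat_{YX}(h) should peak near h = ell, mirroring (18.44).
ccf = sample_ccf(y, x, max_lag=15)
h_peak = max(ccf, key=lambda h: abs(ccf[h]))
print("peak of rho_hat_YX at lag", h_peak)             # expect ell = 5
```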
18.5.1 The coherence $\rho_{XY}(\omega)$ between two series $X$ and $Y$ may be considered the correlation of their $\omega$-frequency components.

A second use of cross-correlation comes from decomposing the cross-covariance function into its spectral components. The starting point for the intuition behind this class of methods comes from Figure 18.20. There, a pair of phase-shifted cosine functions have a periodic cross-correlation function. Similarly, if $\{X_t\}$ and $\{Y_t\}$ act somewhat like phase-shifted quasi-periodic signals, one might expect to see quasi-periodic behavior in their cross-covariance function. However, the situation is a bit more subtle than this.

[Figure 18.20: A pair of phase-shifted cosine functions and their cross-correlation function.]

Recall that for a stationary process $\{X_t;\ t \in \mathbb{Z}\}$, if
$$
\sum_{h=-\infty}^{\infty} |\gamma_X(h)| < \infty
$$
then there is a spectral density function $f_X(\omega)$ for which
$$
\gamma_X(h) = \int_{-1/2}^{1/2} e^{2\pi i \omega h} f_X(\omega)\, d\omega \tag{18.45}
$$
and
$$
f_X(\omega) = \sum_{h=-\infty}^{\infty} \gamma_X(h)\, e^{-2\pi i \omega h}.
$$
Equation (18.29) showed that the periodogram is the DFT of the sample autocovariance function. (The periodogram is also equal to the squared magnitude of the DFT of the data.) Smoothed versions of the DFT of the sample autocovariance function thus become estimates of the spectral density. The bivariate and, more generally, multivariate versions of these basic results are analogous: if
$$
\sum_{h=-\infty}^{\infty} |\gamma_{XY}(h)| < \infty
$$
then there is a cross-spectral density function $f_{XY}(\omega)$ for which
$$
\gamma_{XY}(h) = \int_{-1/2}^{1/2} e^{2\pi i \omega h} f_{XY}(\omega)\, d\omega \tag{18.46}
$$
and
$$
f_{XY}(\omega) = \sum_{h=-\infty}^{\infty} \gamma_{XY}(h)\, e^{-2\pi i \omega h}.
$$
The cross-spectral density is, in general, complex-valued. Because $\gamma_{YX}(h) = \gamma_{XY}(-h)$ we have
$$
f_{YX}(\omega) = \overline{f_{XY}(\omega)}. \tag{18.47}
$$
An estimate $\hat{f}_{XY}(\omega)$ of $f_{XY}(\omega)$ may be obtained by smoothing the DFT of the sample cross-covariance function $\hat\gamma_{XY}(h)$. (The cross-periodogram is the unsmoothed DFT of $\hat\gamma_{XY}(h)$.)

To assess dependence across a stationary pair of time series there is a remarkably simple analogy with ordinary least squares regression. In regression, where $Y = X\beta + \epsilon$, geometrical arguments (discussed earlier) show that the sum of squares is minimized when $\beta$ makes the residual orthogonal to the fit, i.e., $\beta = \hat\beta$ satisfies $(Y - X\beta)^T X\beta = 0$, and this set of equations (a version of the normal equations) determines the value of $\hat\beta$. The residual sum of squares may be written in terms of $R^2 = \|X\hat\beta\|^2 / \|Y\|^2$ as
$$
SS_{error} = \|Y\|^2 - \|X\hat\beta\|^2 = \|Y\|^2 (1 - R^2). \tag{18.48}
$$
The analogous result is obtained by returning to the linear model (18.43) and generalizing it by writing
$$
Y_t = \sum_{h=-\infty}^{\infty} \beta_h X_{t-h} + W_t, \tag{18.49}
$$
where $W_t$ is a stationary white noise process independent of $\{X_t\}$. As an analogue of $SS_{error}$ we consider minimizing the mean squared error
$$
MSE = E\left(\Big(Y_t - \sum_{h=-\infty}^{\infty} \beta_h X_{t-h}\Big)^2\right). \tag{18.50}
$$
Some manipulations show that the solution satisfies
$$
\min MSE = \int_{-1/2}^{1/2} f_Y(\omega)\big(1 - \rho_{XY}(\omega)^2\big)\, d\omega \tag{18.51}
$$
where
$$
\rho_{XY}(\omega)^2 = \frac{|f_{XY}(\omega)|^2}{f_X(\omega) f_Y(\omega)}
$$
is the squared coherence. Thus, as an analogue to (18.48), $f_Y(\omega)(1 - \rho_{XY}(\omega)^2)$ is the $\omega$-component of the MSE in the minimum-MSE fit of (18.49). In (18.51), $MSE \ge 0$ and $f_Y(\omega) \ge 0$ imply that $0 \le \rho_{XY}(\omega) \le 1$ for all $\omega$, and when
$$
Y_t = \sum_{h=-\infty}^{\infty} \beta_h X_{t-h}
$$
we have $\rho_{XY}(\omega) = 1$ for all $\omega$. Taking this together with (18.51) gives the interpretation that the coherence is a frequency-component measure of linear association between two theoretical time series.

We now consider data-based evaluation of coherence. The coherence may be estimated by
$$
\hat\rho_{XY}(\omega)^2 = \frac{|\hat{f}_{XY}(\omega)|^2}{\hat{f}_X(\omega) \hat{f}_Y(\omega)}. \tag{18.52}
$$
However, the smoothing in this estimation process is crucial. The raw cross-periodogram $I_{XY}(\omega)$ satisfies the relationship
$$
|I_{XY}(\omega)|^2 = I_X(\omega)\, I_Y(\omega),
$$
so that plugging the raw periodograms into (18.52) will always yield the value 1. Thus, again, it is imperative to smooth periodograms before interpreting them.
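Because the smoothing step is what makes (18.52) meaningful, the following minimal sketch contrasts a smoothed and an unsmoothed coherence estimate. It uses SciPy's `scipy.signal.coherence`, which estimates the magnitude-squared coherence by averaging periodograms over Welch segments; the simulated series and all tuning choices here are illustrative assumptions:

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(1)

# Two series sharing a common oscillation at frequency 0.1 cycles/sample,
# each contaminated by independent noise.
n = 4096
t = np.arange(n)
common = np.cos(2 * np.pi * 0.1 * t)
x = common + rng.normal(size=n)
y = 0.5 * common + rng.normal(size=n)

# Smoothed estimate: averaging over short Welch segments plays the role of
# smoothing the cross-periodogram before applying (18.52).
f, cxy = coherence(x, y, nperseg=256)
print("smoothed coherence near 0.1:", cxy[np.argmin(np.abs(f - 0.1))])  # large

# Unsmoothed estimate: a single segment uses the raw periodograms, and the
# identity |I_XY|^2 = I_X * I_Y then forces the estimate to 1 (up to
# floating-point rounding) at every frequency, whatever the true relationship.
f1, cxy1 = coherence(x, y, nperseg=n)
print("unsmoothed estimate is identically 1:", np.allclose(cxy1, 1.0))
```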
18.5.2 Granger causality measures the linear predictability of one time series by another.

A way of examining the directional dependence between two stationary processes $\{X_t,\ t \in \mathbb{Z}\}$ and $\{Y_t,\ t \in \mathbb{Z}\}$ begins by assuming they may be represented in the form of time-dependent regression. A paper by Geweke (1982, J. Amer. Statist. Assoc.) spelled this out nicely, based on earlier work by Granger. The idea is very simple. In ordinary regression we assess the influence of a variable (or set of variables) $X_2$ on $Y$ in the presence of another variable (or set of variables) $X_1$ by examining the reduction in variance when we compare the regression of $Y$ on $(X_1, X_2)$ with the regression of $Y$ on $X_1$ alone. If the variance is reduced sufficiently, then we conclude that $X_2$ helps explain (predict) $Y$. Here, we replace $Y$ with $Y_t$, replace $X_1$ with $\{Y_s,\ s < t\}$ and $X_2$ with $\{X_s,\ s < t\}$. In other words, we examine the additional contribution to predicting $Y_t$ made by the past observations of $X_s$ after accounting for the autocorrelation in $\{Y_t\}$. The "causality" part comes when the past of $X_s$ helps predict $Y_t$ but the past of $Y_s$ does not help predict $X_t$.

In Geweke's notation, suppose
$$
\begin{aligned}
X_t &= \sum_{s=1}^{\infty} E_{1s} X_{t-s} + U_{1t} \\
Y_t &= \sum_{s=1}^{\infty} G_{1s} Y_{t-s} + V_{1t}
\end{aligned}
$$
with $V(U_{1t}) = \Sigma_1$ and $V(V_{1t}) = T_1$, and then suppose further that
$$
\begin{aligned}
X_t &= \sum_{s=1}^{\infty} E_{2s} X_{t-s} + \sum_{s=1}^{\infty} F_{2s} Y_{t-s} + U_{2t} \\
Y_t &= \sum_{s=1}^{\infty} G_{2s} Y_{t-s} + \sum_{s=1}^{\infty} H_{2s} X_{t-s} + V_{2t}
\end{aligned}
$$
where now $V(U_{2t}) = \Sigma_2$ and $V(V_{2t}) = T_2$. The residual variation in predicting $Y_t$ from the past of $Y_t$ is given by $T_1$, and that in predicting $Y_t$ from the past of $Y_t$ together with the past of $X_t$ is given by $T_2$. Granger suggested that the strength of "causality" (predictability) be measured by
$$
GC = 1 - \frac{|T_2|}{|T_1|}.
$$
Geweke recommended the modified form
$$
F_{X \to Y} = \log \frac{|T_1|}{|T_2|}.
$$
Geweke also suggested a decomposition of $F_{X \to Y}$ into frequency components. That is, he gave an expression for $f_{X \to Y}(\omega)$ such that
$$
F_{X \to Y} = \int_{-1/2}^{1/2} f_{X \to Y}(\omega)\, d\omega.
$$
In applications, the basic procedure is to (i) fit a bivariate AR model (using, say, AIC to choose the orders of each of the terms), which modifies the equations above by making them finite-dimensional, and then (ii) test the hypothesis $H_{2s} = 0$ for all $s$ and also the hypothesis $F_{2s} = 0$ for all $s$; both steps are sketched below.
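As an illustration of step (i) and of Geweke's measure, here is a minimal sketch in NumPy (the simulated system, the AR order $p$, and the helper `lagged` are assumptions for illustration). It fits the restricted and full autoregressions for $Y_t$ by least squares and computes $F_{X \to Y} = \log(T_1 / T_2)$:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a pair in which X drives Y: Y_t = 0.5*Y_{t-1} + 0.8*X_{t-1} + noise.
n = 2000
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()

def lagged(series, p, t0, t1):
    """Columns series_{t-1}, ..., series_{t-p} for t = t0, ..., t1-1."""
    return np.column_stack([series[t0 - s: t1 - s] for s in range(1, p + 1)])

p = 2                        # AR order; in practice chosen by AIC
t0, t1 = p, n
target = y[t0:t1]

# Restricted model: Y_t regressed on its own past only -> residual variance T1.
# (The simulated series are mean zero, so no intercept column is needed.)
Z1 = lagged(y, p, t0, t1)
T1 = np.var(target - Z1 @ np.linalg.lstsq(Z1, target, rcond=None)[0])

# Full model: Y_t regressed on the past of Y and of X -> residual variance T2.
Z2 = np.column_stack([Z1, lagged(x, p, t0, t1)])
T2 = np.var(target - Z2 @ np.linalg.lstsq(Z2, target, rcond=None)[0])

# Clearly positive here; near 0 when the past of X adds nothing.
print("F_{X->Y} =", np.log(T1 / T2))
```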
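For step (ii), the hypothesis that $H_{2s} = 0$ for all $s$ can be tested with standard software rather than by hand. One option, sketched here on the same simulated pair and assuming statsmodels is installed, is `grangercausalitytests` from statsmodels, which runs F-tests on the lagged $X$ terms at each order up to `maxlag`:

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(2)
n = 2000
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()

# Column order is (effect, candidate cause): the test asks whether lags of
# the second column help predict the first.
res = grangercausalitytests(np.column_stack([y, x]), maxlag=2)
print(res[2][0]["ssr_ftest"])   # (F statistic, p-value, df_denom, df_num)
```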