18.5 Bivariate Time Series

Suppose $x_1, x_2, \ldots, x_n$ and $y_1, y_2, \ldots, y_n$ are sequences of observations made across time, and the problem is to describe their sequential relationship. For example, an increase in $y_t$ may tend to occur following some increase or decrease in a linear combination of some of the preceding $x_t$ values. This is the sort of possibility that bivariate time series analysis aims to describe.

Example. See Fig. 10 of Kaminski et al., 2001, Biol. Cybern.; Fig. 2 of Brovelli et al., 2004, PNAS.

The theoretical framework of such efforts begins, again, with stationarity. A joint process $\{(X_t, Y_t), t \in \mathbb{Z}\}$ is said to be strictly stationary if the joint distribution of $\{(X_t, Y_t), \ldots, (X_{t+h}, Y_{t+h})\}$ is the same as that of $\{(X_s, Y_s), \ldots, (X_{s+h}, Y_{s+h})\}$ for all integers $s, t, h$. The process is weakly stationary if each of $X_t$ and $Y_t$ is weakly stationary with means and covariance functions $\mu_X, \gamma_X(h)$ and $\mu_Y, \gamma_Y(h)$, and, in addition, the cross-covariance function
$$\gamma_{XY}(s, t) = E\big((X_s - \mu_X)(Y_t - \mu_Y)\big)$$
depends on $s$ and $t$ only through their difference $h = s - t$, in which case we write it in the form
$$\gamma_{XY}(h) = E\big((X_{t+h} - \mu_X)(Y_t - \mu_Y)\big).$$
Note that $\gamma_{XY}(h) = \gamma_{YX}(-h)$. The cross-correlation function of $\{(X_t, Y_t)\}$ is
$$\rho_{XY}(h) = \frac{\gamma_{XY}(h)}{\sigma_X \sigma_Y}$$
where $\sigma_X = \sqrt{\gamma_X(0)}$ and similarly for $Y_t$.

An important result is the following. Suppose we have
$$Y_t = \beta X_{t-\ell} + W_t \tag{18.43}$$
where $X_t$ is stationary and $W_t$ is stationary with mean zero and variance $\sigma_W^2$, independently of $X_s$ for all $s$. Here, $\ell$ is the lag of $Y_t$ behind $X_t$. Then
$$\mu_Y = E(\beta X_{t-\ell} + W_t) = \beta \mu_X$$
and
$$\gamma_{YX}(h) = E\big((Y_{t+h} - \mu_Y)(X_t - \mu_X)\big) = E\big((\beta X_{t+h-\ell} + W_{t+h} - \beta\mu_X)(X_t - \mu_X)\big) = \beta E\big((X_{t+h-\ell} - \mu_X)(X_t - \mu_X)\big) + 0,$$
which gives
$$\gamma_{YX}(h) = \beta \gamma_X(h - \ell). \tag{18.44}$$
Thus, when $Y_t$ is given by (18.43), the cross-covariance function will look like the autocovariance function of $X_t$, but shifted by lag $\ell$. Similarly, the cross-correlation function will be
$$\rho_{YX}(h) = \frac{\beta \gamma_X(h-\ell)}{\sqrt{\beta^2 \sigma_X^2 + \sigma_W^2}\;\sigma_X} = \frac{\beta \sigma_X}{\sqrt{\beta^2 \sigma_X^2 + \sigma_W^2}}\,\rho_X(h - \ell).$$
This provides one of the basic intuitions behind much practical use of cross-correlations: one examines them to look for lead/lag relationships.

The cross-covariance and cross-correlation functions are estimated by their sample counterparts:
$$\hat\gamma_{XY}(h) = \frac{1}{n} \sum_{t=1}^{n-h} (x_{t+h} - \bar x)(y_t - \bar y)$$
with $\hat\gamma_{XY}(-h) = \hat\gamma_{YX}(h)$, and
$$\hat\rho_{XY}(h) = \frac{\hat\gamma_{XY}(h)}{\hat\sigma_X \hat\sigma_Y}.$$
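These estimators are straightforward to compute directly. The following is a minimal sketch (in Python; it is not taken from the text): it simulates the lagged model (18.43) with an AR(1) input, $\beta = 2$, and $\ell = 5$ chosen purely for illustration, computes $\hat\rho_{XY}(h)$ over a range of lags, and reports the lag at which the cross-correlation peaks, which is how lead/lag relationships are typically read off in practice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate the lagged relationship (18.43): y_t = beta * x_{t-l} + w_t.
# The AR(1) input, beta = 2, and l = 5 are illustrative assumptions.
n, beta, lag = 500, 2.0, 5
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.7 * x[t - 1] + rng.normal()      # stationary AR(1) input series
w = rng.normal(size=n)                         # noise independent of x
y = np.empty(n)
y[:lag] = w[:lag]
y[lag:] = beta * x[:-lag] + w[lag:]            # y lags x by `lag` steps

def cross_corr(x, y, max_lag):
    """Sample cross-correlation rho_hat_XY(h), h = -max_lag..max_lag, using
    gamma_hat_XY(h) = (1/n) * sum_{t=1}^{n-h} (x_{t+h} - xbar)(y_t - ybar)."""
    n = len(x)
    xc, yc = x - x.mean(), y - y.mean()
    denom = n * x.std() * y.std()
    rho = {}
    for h in range(max_lag + 1):
        rho[h] = np.sum(xc[h:] * yc[:n - h]) / denom    # gamma_hat_XY(h)
        rho[-h] = np.sum(yc[h:] * xc[:n - h]) / denom   # gamma_hat_XY(-h) = gamma_hat_YX(h)
    return rho

rho = cross_corr(x, y, max_lag=15)
peak = max(rho, key=lambda h: abs(rho[h]))
# With this convention (x_{t+h} paired with y_t), a y that lags x by l
# produces a peak near h = -l; equivalently rho_hat_YX peaks near +l.
print("largest cross-correlation at lag h =", peak)
```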
18.5.1 The coherence $\rho_{XY}(\omega)$ between two series $X$ and $Y$ may be considered the correlation of their $\omega$-frequency components.

A second use of cross-correlation comes from decomposing the cross-covariance function into its spectral components. The starting point for the intuition behind this class of methods comes from Figure 18.20: there, a pair of phase-shifted cosine functions have a periodic cross-correlation function. Similarly, if $\{X_t\}$ and $\{Y_t\}$ act somewhat like phase-shifted quasi-periodic signals, one might expect to see quasi-periodic behavior in their cross-covariance function. However, the situation is a bit more subtle than this.

[Figure 18.20: A pair of phase-shifted cosine functions and their cross-correlation function.]

Recall that for a stationary process $\{X_t;\, t \in \mathbb{Z}\}$, if
$$\sum_{h=-\infty}^{\infty} |\gamma_X(h)| < \infty$$
then there is a spectral density function $f_X(\omega)$ for which
$$\gamma_X(h) = \int_{-1/2}^{1/2} e^{2\pi i \omega h} f_X(\omega)\, d\omega \tag{18.45}$$
and
$$f_X(\omega) = \sum_{h=-\infty}^{\infty} \gamma_X(h) e^{-2\pi i \omega h}.$$
Equation (18.29) showed that the periodogram is the DFT of the sample autocovariance function. (The periodogram is also equal to the squared magnitude of the DFT of the data.) Smoothed versions of the DFT of the sample autocovariance function thus become estimates of the spectral density. The bivariate and, more generally, multivariate versions of these basic results are analogous: if
$$\sum_{h=-\infty}^{\infty} |\gamma_{XY}(h)| < \infty$$
then there is a cross-spectral density function $f_{XY}(\omega)$ for which
$$\gamma_{XY}(h) = \int_{-1/2}^{1/2} e^{2\pi i \omega h} f_{XY}(\omega)\, d\omega \tag{18.46}$$
and
$$f_{XY}(\omega) = \sum_{h=-\infty}^{\infty} \gamma_{XY}(h) e^{-2\pi i \omega h}.$$
The cross-spectral density is, in general, complex valued. Because $\gamma_{YX}(h) = \gamma_{XY}(-h)$ we have
$$f_{YX}(\omega) = \overline{f_{XY}(\omega)}. \tag{18.47}$$
An estimate $\hat f_{XY}(\omega)$ of $f_{XY}(\omega)$ may be obtained by smoothing the DFT of the sample cross-covariance function $\hat\gamma_{XY}(h)$. (The cross-periodogram is the unsmoothed DFT of $\hat\gamma_{XY}(h)$.)

To assess dependence across a stationary pair of time series there is a remarkably simple analogy with ordinary least squares regression. In regression, where $Y = X\beta + \epsilon$, geometrical arguments (discussed earlier) show that the sum of squares is minimized when $\beta$ makes the residual orthogonal to the fit, i.e., $\beta = \hat\beta$ satisfies $(Y - X\beta)^T X\beta = 0$, and this set of equations (a version of the normal equations) determines the value of $\hat\beta$. The residual sum of squares may be written in terms of $R^2 = \|X\hat\beta\|^2 / \|Y\|^2$ as
$$SS_{\text{error}} = \|Y\|^2 - \|X\hat\beta\|^2 = \|Y\|^2 (1 - R^2). \tag{18.48}$$
The analogous result is obtained by returning to the linear process (18.43) and generalizing it by writing
$$Y_t = \sum_{h=-\infty}^{\infty} \beta_h X_{t-h} + W_t, \tag{18.49}$$
where $W_t$ is a stationary white noise process independent of $\{X_t\}$. As an analogy with $SS_{\text{error}}$ we consider minimizing the mean squared error
$$MSE = E\left(\Big(Y_t - \sum_{h=-\infty}^{\infty} \beta_h X_{t-h}\Big)^2\right). \tag{18.50}$$
Some manipulations show that the solution satisfies
$$\min MSE = \int_{-1/2}^{1/2} f_Y(\omega)\big(1 - \rho_{XY}(\omega)^2\big)\, d\omega \tag{18.51}$$
where
$$\rho_{XY}(\omega)^2 = \frac{|f_{XY}(\omega)|^2}{f_X(\omega) f_Y(\omega)}$$
is the squared coherence. Thus, as an analogue to (18.48), $f_Y(\omega)(1 - \rho_{XY}(\omega)^2)$ is the $\omega$-component of MSE in the minimum-MSE fit of (18.49). In (18.51), $MSE \ge 0$ and $f_Y(\omega) \ge 0$ imply that $0 \le \rho_{XY}(\omega) \le 1$ for all $\omega$, and when
$$Y_t = \sum_{h=-\infty}^{\infty} \beta_h X_{t-h}$$
we have $\rho_{XY}(\omega) = 1$ for all $\omega$. Taking this together with (18.51) gives the interpretation that the coherence is a frequency-component measure of linear association between two theoretical time series.

We now consider data-based evaluation of coherence. The coherence may be estimated by
$$\hat\rho_{XY}(\omega)^2 = \frac{|\hat f_{XY}(\omega)|^2}{\hat f_X(\omega) \hat f_Y(\omega)}. \tag{18.52}$$
However, the smoothing in this estimation process is crucial. The raw cross-periodogram $I_{XY}(\omega)$ satisfies the relationship
$$|I_{XY}(\omega)|^2 = I_X(\omega) I_Y(\omega)$$
so that plugging the raw periodograms into (18.52) will always yield the value 1. Thus, again, it is imperative to smooth periodograms before interpreting them.
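In practice, the smoothed estimate (18.52) is most often computed by averaging periodograms over segments (Welch's method). The following minimal sketch uses scipy.signal.coherence for this; the simulated relationship between the two series (y is a smoothed copy of x plus noise, so coherence should be concentrated at low frequencies) is an assumption made only for the example.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(1)

# Simulated pair: y is a low-pass filtered copy of x plus independent noise,
# so the two series should be coherent mainly at low frequencies (an
# illustrative assumption, not an example from the text).
n = 4096
x = rng.normal(size=n)
y = np.convolve(x, np.ones(8) / 8, mode="same") + 0.5 * rng.normal(size=n)

# Welch-style smoothed estimate of the squared coherence, as in (18.52):
# periodograms are computed on overlapping segments and averaged.
freqs, coh2 = coherence(x, y, fs=1.0, nperseg=256)
print(f"max smoothed squared coherence: {coh2.max():.3f}")

# Note: from a single unsmoothed (cross-)periodogram, |I_XY|^2 = I_X * I_Y,
# so the estimated squared coherence would be identically 1 and uninformative.
```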
18.5.2 Granger causality measures the linear predictability of one time series by another.

A way of examining the directional dependence between two stationary processes $\{X_t, t \in \mathbb{Z}\}$ and $\{Y_t, t \in \mathbb{Z}\}$ begins by assuming they may be represented in the form of time-dependent regression. A paper by Geweke (1982, J. Amer. Statist. Assoc.) spelled this out nicely, based on earlier work by Granger. The idea is very simple. In ordinary regression we assess the influence of a variable (or set of variables) $X_2$ on $Y$ in the presence of another variable (or set of variables) $X_1$ by examining the reduction in variance when we compare the regression of $Y$ on $(X_1, X_2)$ with the regression of $Y$ on $X_1$ alone. If the variance is reduced sufficiently much, then we conclude that $X_2$ helps explain (predict) $Y$. Here, we replace $Y$ with $Y_t$, replace $X_1$ with $\{Y_s, s < t\}$ and $X_2$ with $\{X_s, s < t\}$. In other words, we examine the additional contribution to predicting $Y_t$ made by the past observations of $X_s$ after accounting for the autocorrelation in $\{Y_t\}$. The "causality" part comes when the past of $X_s$ helps predict $Y_t$ but the past of $Y_s$ does not help predict $X_t$.

In Geweke's notation, suppose
$$X_t = \sum_{s=1}^{\infty} E_{1s} X_{t-s} + U_{1t}$$
$$Y_t = \sum_{s=1}^{\infty} G_{1s} Y_{t-s} + V_{1t}$$
with $V(U_{1t}) = \Sigma_1$ and $V(V_{1t}) = T_1$, and then suppose further that
$$X_t = \sum_{s=1}^{\infty} E_{2s} X_{t-s} + \sum_{s=1}^{\infty} F_{2s} Y_{t-s} + U_{2t}$$
$$Y_t = \sum_{s=1}^{\infty} G_{2s} Y_{t-s} + \sum_{s=1}^{\infty} H_{2s} X_{t-s} + V_{2t}$$
where now $V(U_{2t}) = \Sigma_2$ and $V(V_{2t}) = T_2$. The residual variation in predicting $Y_t$ from the past of $Y_t$ is given by $T_1$, and that in predicting $Y_t$ from the past of $Y_t$ together with the past of $X_t$ is given by $T_2$. Granger suggested the strength of "causality" (predictability) be measured by
$$GC = 1 - \frac{|T_2|}{|T_1|}.$$
Geweke recommended the modified form
$$F_{X \to Y} = \log \frac{|T_1|}{|T_2|}.$$
Geweke also suggested a decomposition of $F_{X \to Y}$ into frequency components. That is, he gave an expression for $f_{X \to Y}(\omega)$ such that
$$F_{X \to Y} = \int_{-1/2}^{1/2} f_{X \to Y}(\omega)\, d\omega.$$
In applications, the basic procedure is to (i) fit a bivariate AR model (using, say, AIC to choose the orders of each of the terms), which modifies the equations above by making them finite-dimensional; and then (ii) test the hypothesis $H_{2s} = 0$ for all $s$ and also the hypothesis $F_{2s} = 0$ for all $s$.
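The following sketch illustrates step (i) and the $Y$-equation part of step (ii) directly with least squares, computing Geweke's $F_{X \to Y}$ from the two nested autoregressions. The simulated coefficients and the fixed order $p = 2$ are illustrative assumptions; in practice one would select the order by AIC and typically use a dedicated VAR routine (e.g., in statsmodels) rather than this hand-rolled fit.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a bivariate system in which the past of x helps predict y
# (the coefficients and the order p are illustrative assumptions).
n, p = 2000, 2
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + rng.normal()
    y[t] = 0.4 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()

def lagged_design(series_list, p):
    """Design matrix with an intercept and lags 1..p of each series in series_list."""
    n = len(series_list[0])
    cols = [s[p - k: n - k] for s in series_list for k in range(1, p + 1)]
    return np.column_stack([np.ones(n - p)] + cols)

target = y[p:]

# Restricted model: y_t on its own past only (residual variance, playing the role of T1).
Zr = lagged_design([y], p)
res_r = target - Zr @ np.linalg.lstsq(Zr, target, rcond=None)[0]
T1 = np.mean(res_r ** 2)

# Full model: y_t on the past of y and the past of x (residual variance, role of T2).
Zf = lagged_design([y, x], p)
res_f = target - Zf @ np.linalg.lstsq(Zf, target, rcond=None)[0]
T2 = np.mean(res_f ** 2)

F_x_to_y = np.log(T1 / T2)            # Geweke's measure F_{X -> Y}
print(f"F_X->Y = {F_x_to_y:.3f}")

# Testing H_{2s} = 0 for all s: a standard nested-model F statistic.
q = p                                  # number of coefficients set to zero
df = len(target) - Zf.shape[1]
F_stat = ((res_r @ res_r - res_f @ res_f) / q) / ((res_f @ res_f) / df)
print(f"F statistic for excluding lags of x: {F_stat:.2f}")
```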