Chapter 2 Fundamental Concepts

This chapter describes the fundamental concepts in the theory of time series models. In particular, we introduce the concepts of stochastic process, mean and covariance function, stationary process, and autocorrelation function.

2.1 Time Series and Stochastic Processes

The sequence of random variables {Z_t : t = 0, ±1, ±2, ±3, ...} is called a stochastic process. It is known that the complete probabilistic structure of such a process is determined by the set of distributions of all finite collections of the Z's. Fortunately, we will not have to deal explicitly with these multivariate distributions. Much of the information in these joint distributions can be described in terms of means, variances, and covariances. Consequently, we concentrate our efforts on these first and second moments. (If the joint distributions of the Z's are multivariate normal distributions, then the first and second moments completely determine all the joint distributions.)

2.2 Means, Variances, and Covariances

For a stochastic process {Z_t : t = 0, ±1, ±2, ±3, ...}, the mean function is defined by

    \mu_t = E(Z_t) \quad \text{for } t = 0, \pm 1, \pm 2, \ldots    (2.2.1)

that is, \mu_t is just the expected value of the process at time t. In general, \mu_t can be different at each time point t.

The autocovariance function, \gamma_{t,s}, is defined as

    \gamma_{t,s} = \mathrm{Cov}(Z_t, Z_s) \quad \text{for } t, s = 0, \pm 1, \pm 2, \ldots    (2.2.2)

where \mathrm{Cov}(Z_t, Z_s) = E[(Z_t - \mu_t)(Z_s - \mu_s)] = E(Z_t Z_s) - \mu_t \mu_s.

The autocorrelation function, \rho_{t,s}, is given by

    \rho_{t,s} = \mathrm{Corr}(Z_t, Z_s) \quad \text{for } t, s = 0, \pm 1, \pm 2, \ldots    (2.2.3)

where

    \mathrm{Corr}(Z_t, Z_s) = \frac{\mathrm{Cov}(Z_t, Z_s)}{\sqrt{\mathrm{Var}(Z_t)\,\mathrm{Var}(Z_s)}} = \frac{\gamma_{t,s}}{\sqrt{\gamma_{t,t}\,\gamma_{s,s}}}    (2.2.4)

We review the basic properties of expectation, variance, covariance, and correlation in "Appendix: Expectation, Variance, Covariance, and Correlation" on page 16. Recall that both covariance and correlation are measures of the (linear) dependence between random variables but that the unitless correlation is somewhat easier to interpret.

The following important properties follow from known results and our definitions:

    \gamma_{t,t} = \mathrm{Var}(Z_t)                      \rho_{t,t} = 1
    \gamma_{t,s} = \gamma_{s,t}                           \rho_{t,s} = \rho_{s,t}    (2.2.5)
    |\gamma_{t,s}| \le \sqrt{\gamma_{t,t}\,\gamma_{s,s}}  |\rho_{t,s}| \le 1

Values of \rho_{t,s} near ±1 indicate strong (linear) dependence, whereas values near zero indicate weak (linear) dependence. If \rho_{t,s} = 0, we say that Z_t and Z_s are uncorrelated.

To investigate the covariance properties of various time series models, the following result will be used repeatedly: if c_1, c_2, ..., c_m and d_1, d_2, ..., d_n are constants and t_1, t_2, ..., t_m and s_1, s_2, ..., s_n are time points, then

    \mathrm{Cov}\left( \sum_{i=1}^{m} c_i Z_{t_i},\ \sum_{j=1}^{n} d_j Z_{s_j} \right) = \sum_{i=1}^{m} \sum_{j=1}^{n} c_i d_j \,\mathrm{Cov}(Z_{t_i}, Z_{s_j})    (2.2.6)

The proof of Equation (2.2.6), though tedious, is a straightforward application of the linear properties of expectation. As a special case, we obtain the following well-known result:

    \mathrm{Var}\left( \sum_{i=1}^{n} c_i Z_{t_i} \right) = \sum_{i=1}^{n} c_i^2 \,\mathrm{Var}(Z_{t_i}) + 2 \sum_{i=2}^{n} \sum_{j=1}^{i-1} c_i c_j \,\mathrm{Cov}(Z_{t_i}, Z_{t_j})    (2.2.7)
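Equations (2.2.6) and (2.2.7) are easy to check numerically. The following Python sketch is a minimal illustration, not part of the original text: the weights and the covariance matrix are made-up values chosen only so that Sigma is a legitimate covariance matrix. It compares the variance of a linear combination computed directly as c' Sigma c with the right-hand side of Equation (2.2.7):

    import numpy as np

    # Arbitrary weights and a valid covariance matrix for three random variables
    c = np.array([1.0, -2.0, 0.5])
    A = np.array([[2.0, 0.3, 0.1],
                  [0.5, 1.0, 0.2],
                  [0.4, 0.1, 1.5]])
    Sigma = A @ A.T  # A A' is symmetric positive definite, hence a legitimate covariance matrix

    # Direct computation: Var(sum_i c_i Z_i) = c' Sigma c
    var_direct = c @ Sigma @ c

    # Equation (2.2.7): variance terms plus twice the distinct covariance cross terms
    n = len(c)
    var_eq = sum(c[i]**2 * Sigma[i, i] for i in range(n))
    var_eq += 2 * sum(c[i] * c[j] * Sigma[i, j]
                      for i in range(1, n) for j in range(i))

    print(var_direct, var_eq)  # the two values agree
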
The Random Walk

Let a_1, a_2, ... be a sequence of independent, identically distributed random variables, each with zero mean and variance \sigma_a^2. The observed time series, {Z_t : t = 1, 2, ...}, is constructed as follows:

    Z_1 = a_1
    Z_2 = a_1 + a_2
     \vdots
    Z_t = a_1 + a_2 + \cdots + a_t    (2.2.8)

Alternatively, we can write

    Z_t = Z_{t-1} + a_t    (2.2.9)

If the a's are interpreted as the sizes of the "steps" taken (forward or backward) along a number line, then Z_t is the position of the "random walker" at time t. From Equation (2.2.8) we obtain the mean function:

    \mu_t = E(Z_t) = E(a_1 + a_2 + \cdots + a_t) = E(a_1) + E(a_2) + \cdots + E(a_t) = 0 + 0 + \cdots + 0

so that

    \mu_t = 0 \quad \text{for all } t    (2.2.10)

We also have

    \mathrm{Var}(Z_t) = \mathrm{Var}(a_1 + a_2 + \cdots + a_t) = \mathrm{Var}(a_1) + \mathrm{Var}(a_2) + \cdots + \mathrm{Var}(a_t) = \underbrace{\sigma_a^2 + \sigma_a^2 + \cdots + \sigma_a^2}_{t \text{ terms}} = t\sigma_a^2

so that

    \mathrm{Var}(Z_t) = t\sigma_a^2    (2.2.11)

Notice that the process variance increases linearly with time.

To investigate the covariance function, suppose that 1 ≤ t ≤ s. Then we have

    \gamma_{t,s} = \mathrm{Cov}(Z_t, Z_s) = \mathrm{Cov}(a_1 + a_2 + \cdots + a_t,\ a_1 + a_2 + \cdots + a_t + a_{t+1} + \cdots + a_s)

From Equation (2.2.6) we have

    \gamma_{t,s} = \sum_{i=1}^{s} \sum_{j=1}^{t} \mathrm{Cov}(a_i, a_j)

However, these covariances are zero unless i = j, in which case they equal \mathrm{Var}(a_i) = \sigma_a^2. There are exactly t of these, so that \gamma_{t,s} = t\sigma_a^2.

Since \gamma_{t,s} = \gamma_{s,t}, this specifies the autocovariance function for all time points t and s, and we can write

    \gamma_{t,s} = t\sigma_a^2 \quad \text{for } 1 \le t \le s    (2.2.12)

The autocorrelation function for the random walk is now easily obtained as

    \rho_{t,s} = \frac{\gamma_{t,s}}{\sqrt{\gamma_{t,t}\,\gamma_{s,s}}} = \sqrt{\frac{t}{s}} \quad \text{for } 1 \le t \le s    (2.2.13)

The following numerical values help us understand the behavior of the random walk:

    \rho_{1,2} = \sqrt{1/2} = 0.707        \rho_{8,9} = \sqrt{8/9} = 0.943
    \rho_{24,25} = \sqrt{24/25} = 0.980    \rho_{1,25} = \sqrt{1/25} = 0.200

The values of Z at neighboring time points are more and more strongly and positively correlated as time goes by. On the other hand, the values of Z at distant time points are less and less correlated.

A simulated random walk is shown in Exhibit 2.1, where the a's were selected from a standard normal distribution. Note that even though the theoretical mean function is zero for all time points, the facts that the variance increases over time and that the correlation between process values nearby in time is nearly 1 indicate that we should expect long excursions of the process away from the mean level of zero. The simple random walk process provides a good model (at least to a first approximation) for phenomena as diverse as the movement of common stock prices and the position of small particles suspended in a fluid (so-called Brownian motion).

Exhibit 2.1 A Random Walk
[Figure: time series plot of the simulated random walk; vertical axis RandWalk (about -2 to 8), horizontal axis Index (10 to 60).]
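A short simulation makes these results concrete. The following Python sketch (an illustration added here, with seed and sample sizes chosen arbitrarily) generates many independent random walks with standard normal steps and checks Equations (2.2.11) and (2.2.13) against sample moments:

    import numpy as np

    rng = np.random.default_rng(0)  # arbitrary seed, for reproducibility

    # Simulate 10,000 independent random walks of length 60 (Equation 2.2.8)
    n_series, n_time = 10_000, 60
    steps = rng.standard_normal((n_series, n_time))  # the a's, with sigma_a^2 = 1
    Z = steps.cumsum(axis=1)                         # Z_t = a_1 + ... + a_t

    # Var(Z_t) = t * sigma_a^2 (Equation 2.2.11): sample variance at t = 24 is about 24
    print(Z[:, 23].var())

    # rho_{24,25} = sqrt(24/25), about 0.980 (Equation 2.2.13)
    print(np.corrcoef(Z[:, 23], Z[:, 24])[0, 1])
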
A Moving Average

As a second example, suppose that {Z_t} is constructed as

    Z_t = \frac{a_t + a_{t-1}}{2}    (2.2.14)

where (as always throughout this book) the a's are assumed to be independent and identically distributed with zero mean and variance \sigma_a^2. Here

    \mu_t = E(Z_t) = E\left( \frac{a_t + a_{t-1}}{2} \right) = \frac{E(a_t) + E(a_{t-1})}{2} = 0

and

    \mathrm{Var}(Z_t) = \mathrm{Var}\left( \frac{a_t + a_{t-1}}{2} \right) = \frac{\mathrm{Var}(a_t) + \mathrm{Var}(a_{t-1})}{4} = 0.5\sigma_a^2

Also,

    \mathrm{Cov}(Z_t, Z_{t-1}) = \mathrm{Cov}\left( \frac{a_t + a_{t-1}}{2},\ \frac{a_{t-1} + a_{t-2}}{2} \right)
      = \frac{\mathrm{Cov}(a_t, a_{t-1}) + \mathrm{Cov}(a_t, a_{t-2}) + \mathrm{Cov}(a_{t-1}, a_{t-1}) + \mathrm{Cov}(a_{t-1}, a_{t-2})}{4}
      = \frac{\mathrm{Cov}(a_{t-1}, a_{t-1})}{4}    (as all the other covariances are zero)
      = 0.25\sigma_a^2

or

    \gamma_{t,t-1} = 0.25\sigma_a^2 \quad \text{for all } t    (2.2.15)

Furthermore,

    \mathrm{Cov}(Z_t, Z_{t-2}) = \mathrm{Cov}\left( \frac{a_t + a_{t-1}}{2},\ \frac{a_{t-2} + a_{t-3}}{2} \right) = 0 \quad \text{since the } a\text{'s are independent}

Similarly, \mathrm{Cov}(Z_t, Z_{t-k}) = 0 for k > 1, so we may write

    \gamma_{t,s} = \begin{cases} 0.5\sigma_a^2 & \text{for } |t - s| = 0 \\ 0.25\sigma_a^2 & \text{for } |t - s| = 1 \\ 0 & \text{for } |t - s| > 1 \end{cases}

For the autocorrelation function we have

    \rho_{t,s} = \begin{cases} 1 & \text{for } |t - s| = 0 \\ 0.5 & \text{for } |t - s| = 1 \\ 0 & \text{for } |t - s| > 1 \end{cases}    (2.2.16)

since 0.25\sigma_a^2 / (0.5\sigma_a^2) = 0.5.

Notice that \rho_{2,1} = \rho_{3,2} = \rho_{4,3} = \rho_{9,8} = 0.5. Values of Z precisely one time unit apart have exactly the same correlation no matter where they occur in time. Furthermore, \rho_{3,1} = \rho_{4,2} = \rho_{t,t-2} and, more generally, \rho_{t,t-k} is the same for all values of t. This leads us to the important concept of stationarity.

2.3 Stationarity

To make statistical inferences about the structure of a stochastic process on the basis of an observed record of that process, we must usually make some simplifying (and presumably reasonable) assumptions about that structure. The most important such assumption is that of stationarity. The basic idea of stationarity is that the probability laws that govern the behavior of the process do not change over time. In a sense, the process is in statistical equilibrium.

Specifically, a process {Z_t} is said to be strictly stationary if the joint distribution of Z_{t_1}, Z_{t_2}, ..., Z_{t_n} is the same as the joint distribution of Z_{t_1 - k}, Z_{t_2 - k}, ..., Z_{t_n - k} for all choices of time points t_1, t_2, ..., t_n and all choices of time lag k.

Thus, when n = 1, the (univariate) distribution of Z_t is the same as that of Z_{t-k} for all t and k; in other words, the Z's are (marginally) identically distributed. It then follows that E(Z_t) = E(Z_{t-k}) for all t and k, so that the mean function is constant for all time. Additionally, \mathrm{Var}(Z_t) = \mathrm{Var}(Z_{t-k}) for all t and k, so that the variance is also constant in time.
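As with the random walk, the moving average results above are easy to verify by simulation. The Python sketch below (again an added illustration, with arbitrary seed and sample size) constructs the series of Equation (2.2.14) from standard normal a's and compares the sample variance and autocorrelations with Equation (2.2.16). It also illustrates the stationarity just discussed: the lag-1 sample correlation comes out the same wherever it is computed along the series.

    import numpy as np

    rng = np.random.default_rng(1)  # arbitrary seed, for reproducibility

    # Z_t = (a_t + a_{t-1}) / 2 with standard normal a's (Equation 2.2.14)
    n = 100_000
    a = rng.standard_normal(n + 1)
    Z = (a[1:] + a[:-1]) / 2

    print(Z.var())                           # about 0.5 * sigma_a^2 = 0.5
    print(np.corrcoef(Z[:-1], Z[1:])[0, 1])  # lag 1: about 0.5 (Equation 2.2.16)
    print(np.corrcoef(Z[:-2], Z[2:])[0, 1])  # lag 2: about 0.0
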