
Chapter 4: Stationary TS Models

§4.1 Stationarity and Autocorrelation

Consider a time series {Xt : t ∈ T}.

Suppose that (t1, t2,..., tn) is a vector of members of T.

Then the vector (Xt1 , Xt2 ,..., Xtn ) has the joint distribution function

Ft1,t2,...,tn (x1, x2,..., xn) = P(Xt1 ≤ x1, Xt2 ≤ x2,..., Xtn ≤ xn).

The collection {Ft1,t2,...,tn}, as (t1, t2,..., tn) ranges over all vectors of any finite length n with components from T, is called the collection of finite-dimensional distributions of {Xt : t ∈ T}.


This collection contains all information available about the series from the joint distributions of its constituent variables Xt. However, this way of describing a time series is often very complicated and impractical. A simpler, albeit generally incomplete, description of a time series is by the moments of the series, particularly the first and second moments, which are, in general, functions of time t:

µXt = E(Xt),  σ²Xt = var(Xt).

Autocovariance function (ACVF):

γ(t1, t2) = cov(Xt1 , Xt2 ), for all t1 and t2.

Note that the variance function is a special case of the autocovariance function when t1 = t2.

Definition 4.1

A time series {Xt} is called strongly (or strictly) stationary if for n = 1, 2,... and for all τ = 0, ±1, ±2,... and for all times t1, t2,..., tn, the two families

{Xt1 , Xt2 ,..., Xtn } and {Xt1+τ, Xt2+τ,..., Xtn+τ} have a common joint distribution. 

This condition states that finite-dimensional distributions are invariant under time shifts. Note that if {Xt} is strictly stationary, then the distribution of Xt is the same for all t; provided additionally that the mean and variance exist, we have µXt = µ and σ²Xt = σ² for all t. Moreover,

the distribution of the vector (Xt1 , Xt2 ) depends only on the time difference t2 − t1. So, the ACVF γ(t1, t2) also depends only on t2 − t1, i.e.

γ(τ) = γ(t + τ, t) = cov(Xt+τ, Xt) for all lags τ and all times t.

Definition 4.2
Similarly, we define the so-called autocorrelation function (ACF) as

ρ(τ) = γ(τ)/γ(0) = corr(Xt+τ, Xt) for all t, τ.   (4.1)

Definition 4.3

A time series {Xt} with E(Xt²) < ∞ is called weakly stationary or just stationary if

E(Xt1 ) = E(Xt2 ) and

cov(Xt1 , Xt2 ) = cov(Xt1+τ, Xt2+τ) for all t1, t2 and τ. 

If {Xt} is a weakly stationary TS then obviously

the expectation of Xt does not depend on t, i.e. µXt = µ for some µ and for all times t, and the ACVF γ(t + τ, t) = γ(τ, 0) = γ(τ) may be viewed as a function of a single variable τ.

Note that γ(0) = var(Xt), that is, the variance is also constant for all t.

Remark 4.1 If the first two moments exist and are finite, then strict stationarity implies weak stationarity. Note that the multivariate normal distribution is completely specified by its first and second moments, i.e. µXt and γ(t1, t2). It therefore follows that weak stationarity implies strict stationarity for Gaussian time series. However, µ and γ(τ) may not adequately describe a stationary process which is very “non-Gaussian”.

Remark 4.2 The following example shows that for non-Gaussian processes weak stationarity does not imply strict stationarity.

Example 4.1

Let Zt ∼ iid 풩(0, 1). Define

Xt = Zt if t is even,  and  Xt = (1/√2)(Zt² − 1) if t is odd.

Then

E(Xt) = E(Zt) = 0 if t is even,  and  E(Xt) = E[(1/√2)(Zt² − 1)] = (1/√2) E[Zt² − 1] = 0 if t is odd.

Also,

var(Xt) = var(Zt) = 1 if t is even,  and  var(Xt) = var((1/√2)(Zt² − 1)) = (1/2) var(Zt²) = 1 if t is odd,

Example 4.1 cont'd: Since Xt and Xt+τ are independent for any τ ≠ 0, we obtain cov(Xt, Xt+τ) = 0 for any τ ≠ 0.

Hence {Xt} is a weakly stationary TS. Are the Xt identically distributed? Note that P(X1 < −1/√2) = 0 < P(X2 < −1/√2), hence the distributions of X1 and X2 are different and the series Xt is not strictly stationary.

Example 4.2

I.i.d. noise

Suppose that {Xt} is a sequence of r.v.s which are independent and identically distributed (i.i.d.). Then the joint c.d.f. can be written as

P(Xt1 ≤ x1,..., Xtn ≤ xn)

= P(Xt1 ≤ x1) ... P(Xtn ≤ xn) = F(x1) ··· F(xn).

Since the joint distribution does not depend on the choice of the indices {t1,..., tn}, it follows that {Xt} is strictly stationary.

Example 4.2 cont'd: If Xt has finite second moment E(Xt²) < ∞, then {Xt} is also weakly stationary. Let var(Xt) = σ²; then

γ(τ) = σ² if τ = 0,  and  γ(τ) = 0 if τ ≠ 0.

Also, the conditional distribution of Xn+τ given the values of (X1,..., Xn) is P(Xn+τ ≤ x | X1 = x1,..., Xn = xn) = P(Xn+τ ≤ x). This confirms that knowledge of the past has no value for predicting the future in this case.

Definition 4.4

A sequence {Xt} of uncorrelated r.v.s, each with zero mean and variance σ², is called white noise. It is denoted by

{Xt} ∼ WN(0, σ²).

Example 4.3 White noise meets the requirements of the definition of weak stationarity. Note that the TS in Example 4.1 is white noise.

Note that if a Gaussian TS {Xt} is white noise, then {Xt} is a sequence of Gaussian iid r.vs, which we denote by

{Xt} ∼ iid 풩(0, σ²).



Figure 4.1: Simulated Gaussian White Noise Time Series

Remark 4.3 Note that every iid series with mean 0 and variance σ² is WN(0, σ²), but not conversely. That is, in general zero correlation does not imply independence. Gaussian white noise is however an iid process.

Example 4.4

MA(1) process. Let

Xt = Zt + θZt−1,  t = 0, ±1, ±2,...,   (4.2)

where {Zt} ∼ WN(0, σ²) and θ ≠ 0 is a constant. Then {Xt} is called a first-order moving average process, which we denote by MA(1). Is the MA(1) a weakly stationary series?

Example 4.4 cont'd: From equation (4.2) we obtain

E(Xt) = E(Zt + θZt−1) = E(Zt) + θ E(Zt−1) = 0.

Now, we need to check that the autocovariance function does not depend on time, i.e., that it depends only on the lag τ.

cov(Xt, Xt+τ) = cov(Zt + θZt−1, Zt+τ + θZt−1+τ)

= E[(Zt + θZt−1)(Zt+τ + θZt−1+τ)] − E(Zt + θZt−1) E(Zt+τ + θZt−1+τ)   (both expectations equal 0)

= E(ZtZt+τ) + θ E(ZtZt−1+τ) + θ E(Zt−1Zt+τ) + θ² E(Zt−1Zt−1+τ).

Example 4.4 cont'd: Now, considering all possible values of the lag τ we obtain

cov(Xt, Xt+τ) = (1 + θ²)σ² if τ = 0;  θσ² if τ = ±1;  0 if |τ| > 1.   (4.3)

Hence, the (auto)covariance does not depend on t and we can write the autocovariance as γX(τ), i.e. function of lag τ only:

γX(τ) = cov(Xt, Xt+τ) for any τ.

The conclusion is that MA(1) is a weakly stationary TS.

Example 4.4 cont'd: Also, from (4.3) we obtain the form of the autocorrelation function

ρX(τ) = 1 if τ = 0;  θ/(1 + θ²) if τ = ±1;  0 if |τ| > 1.   (4.4)

Figure 4.2 shows an MA(1) process which is simulated by using a Gaussian white noise and θ = 0.5. 
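For readers who want to reproduce such a simulation, a minimal Python/NumPy sketch is given below (the notes themselves do not include code; the sample size and random seed are arbitrary choices). It generates Gaussian white noise, builds the MA(1) series with θ = 0.5, and compares the lag-1 sample autocorrelation with the theoretical value θ/(1 + θ²) = 0.4 from (4.4).

```python
import numpy as np

rng = np.random.default_rng(0)           # arbitrary seed
n, theta = 500, 0.5

z = rng.standard_normal(n + 1)           # Gaussian white noise Z_0, ..., Z_n
x = z[1:] + theta * z[:-1]               # MA(1): X_t = Z_t + theta * Z_{t-1}

xc = x - x.mean()
rho1_hat = np.sum(xc[:-1] * xc[1:]) / np.sum(xc ** 2)    # sample ACF at lag 1

print("sample rho(1):     ", round(rho1_hat, 3))
print("theoretical rho(1):", theta / (1 + theta ** 2))   # = 0.4
```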


Figure 4.2: Simulated MA(1) Time Series

§4.1.1 Sample Autocovariance and Autocorrelation

The ACVF and ACF are helpful tools for assessing the degree, or time range, of dependence and recognising whether a TS follows a well-known model. However, in practice we generally are not given the ACVF or ACF, but a sample from, or realisation of, a time series. When we try to fit a model to the observed realisation, we often use sample autocovariance and autocorrelation functions, which are defined in terms of the observed data.


Definition 4.5

Let x1,..., xn be observations of a TS. The sample autocovariance function is defined as

γ̂(τ) = (1/n) Σ_{t=1}^{n−|τ|} (xt − x̄)(xt+|τ| − x̄),  −n < τ < n,   (4.5)

where x̄ = (1/n) Σ_{t=1}^{n} xt. The sample autocorrelation function is defined as

ρ̂(τ) = γ̂(τ)/γ̂(0),  −n < τ < n.   (4.6)

Remark 4.4 For lag τ ≥ 0 the sample autocovariance function is approximately equal to the sample covariance of the n − τ pairs (x1, x1+τ),..., (xn−τ, xn). Note that, in (4.5), we divide the sum by n, not by n − τ, and also we use the overall mean x̄ for both xt and xt+τ.
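As an illustration only (the notes do not prescribe an implementation), definitions (4.5) and (4.6) translate almost verbatim into Python/NumPy; the helper names sample_acvf and sample_acf below are made up for this sketch.

```python
import numpy as np

def sample_acvf(x, tau):
    """Sample autocovariance (4.5): divide by n and use the overall mean."""
    x = np.asarray(x, dtype=float)
    n, tau = len(x), abs(tau)
    xbar = x.mean()
    return np.sum((x[:n - tau] - xbar) * (x[tau:] - xbar)) / n

def sample_acf(x, tau):
    """Sample autocorrelation (4.6)."""
    return sample_acvf(x, tau) / sample_acvf(x, 0)

# White noise should have near-zero sample ACF at all lags tau >= 1.
rng = np.random.default_rng(1)
z = rng.standard_normal(200)
print([round(sample_acf(z, k), 3) for k in range(4)])
```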

Remark 4.5 The sample autocovariance function γ̂(τ) and the sample autocorrelation function ρ̂(τ) are the most commonly used estimators of the theoretical autocovariance function γ(τ) and autocorrelation function ρ(τ), respectively.

A graph of the sample autocorrelation (autocovariance) function is called a correlogram (covariogram). Figures 4.3 and 4.4, respectively, show the correlogram of the Gaussian white noise time series given in Figure 4.1 and the correlogram of the MA(1) TS with θ = 0.5 calculated from the white noise. As expected, there is no significant correlation for lag τ ≥ 1 for the white noise, but there is one for the MA(1) at lag τ = 1.


Figure 4.3: Correlogram of the Simulated Gaussian White Noise Time Series


Figure 4.4: Correlogram of the Simulated MA(1) Time Series

§4.2 Properties of ACVF and ACF

First we examine some basic properties of the autocovariance function (ACVF).

Proposition 4.1

The ACVF of a stationary TS is a function γ(·) such that
1. γ(0) ≥ 0,
2. |γ(τ)| ≤ γ(0) for all τ,
3. γ(·) is even, i.e.,

γ(τ) = γ(−τ), for all τ.


Definition 4.6 We say that a real-valued function κ defined on the integers is nonnegative definite if

Σ_{i,j=1}^{n} ai κ(i − j) aj ≥ 0   (4.7)

for all positive integers n and real-valued vectors a = (a1,..., an)ᵀ.

Theorem 4.1

A real-valued function defined on the integers is the autocovariance function of a stationary TS if and only if it is even and nonnegative definite.

Proof

We will only show that the ACVF of a stationary TS {Xt} is nonnegative definite and omit the rest. Let a = (a1,..., an)ᵀ be a real n-dimensional vector. Define the matrix

V = [ γ(0)      γ(1 − 2)  ...  γ(1 − n)
      γ(2 − 1)  γ(0)      ...  γ(2 − n)
      ...       ...       ...  ...
      γ(n − 1)  γ(n − 2)  ...  γ(0)    ].

It is easy to see that V is the covariance matrix of the vector X = (X1,..., Xn)ᵀ. Then,

0 ≤ var(aᵀX) = aᵀVa = Σ_{i,j=1}^{n} ai γ(i − j) aj.

Hence γ(τ) is a nonnegative definite function. 
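The argument can be mirrored numerically: build the matrix V from an ACVF and check that all its eigenvalues are nonnegative. The sketch below (illustrative, not part of the notes) does this for the MA(1) ACVF of equation (4.3) with θ = 0.5 and σ² = 1.

```python
import numpy as np

def ma1_acvf(tau, theta=0.5, sigma2=1.0):
    """ACVF of MA(1), equation (4.3)."""
    tau = abs(tau)
    if tau == 0:
        return (1 + theta ** 2) * sigma2
    if tau == 1:
        return theta * sigma2
    return 0.0

n = 6
# covariance matrix of (X_1, ..., X_n), as in the proof above
V = np.array([[ma1_acvf(i - j) for j in range(n)] for i in range(n)])
eigenvalues = np.linalg.eigvalsh(V)
print(eigenvalues)                        # all nonnegative
print(np.all(eigenvalues >= -1e-12))      # True
```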

§4.3 Moving Average Process MA(q)

Definition 4.7

{Xt} is a moving-average process of order q if

Xt = Zt + θ1Zt−1 + ... + θqZt−q,

where Zt ∼ WN(0, σ²)

and θ1, . . . , θq are constants.


Remark 4.6

Xt is a linear combination of q + 1 white noise variables and we say that it is q-correlated, that is Xt and Xt+τ are uncorrelated for all lags τ > q.

Remark 4.7

If {Zt} is an i.i.d process then Xt is a strictly stationary TS since for any n and for any t1, t2,..., tn

(Zt1ᵀ, Zt2ᵀ,..., Ztnᵀ) =ᴰ (Zt1+τᵀ, Zt2+τᵀ,..., Ztn+τᵀ)   (equality in distribution)

for all τ, where Zt = (Zt−q, Zt+1−q,..., Zt−1, Zt)ᵀ. Then {Xt} is also called q-dependent, that is, Xt and Xt+τ are independent for all lags τ > q.

Remark 4.8 (Some obvious observations:) IID noise is a 0-dependent TS. White noise is a 0-correlated TS. MA(1) is a 1-correlated TS, and it is also 1-dependent if the WN {Zt} is an iid noise. If {Zt} ∼ iid 풩(0, σ²), then the Xt are also normally distributed and hence we have a strictly stationary Gaussian TS.

Remark 4.9 The MA(q) process can also be written in the following equivalent form Xt = θ(B)Zt, (4.8) where the moving average operator

θ(B) = 1 + θ1B + θ2B² + ... + θqB^q   (4.9)

defines a linear combination of the first q powers of the backward shift operator, B^k Zt = Zt−k.

Example 4.5 (MA(2) process)

This process is written as

Xt = Zt + θ1Zt−1 + θ2Zt−2 = (1 + θ1B + θ2B²)Zt.   (4.10)

What are the properties of MA(2)? As it is a linear combination of zero-mean white noise variables, it also has zero mean, i.e.,

E Xt = E(Zt + θ1Zt−1 + θ2Zt−2) = 0.

It is easy to calculate the covariance of Xt and Xt+τ. We get

γ(τ) = cov(Xt, Xt+τ) = (1 + θ1² + θ2²)σ² for τ = 0;  (θ1 + θ1θ2)σ² for τ = ±1;  θ2σ² for τ = ±2;  0 for |τ| > 2,

which shows that the autocovariances depend on lag, but not on time.

Example 4.5 cont'd: Dividing γ(τ) by γ(0) we obtain the autocorrelation function

ρ(τ) = 1 for τ = 0;  (θ1 + θ1θ2)/(1 + θ1² + θ2²) for τ = ±1;  θ2/(1 + θ1² + θ2²) for τ = ±2;  0 for |τ| > 2.

In summary: the MA(2) process is a weakly stationary, 2-correlated TS.

The following graph shows MA(2) processes obtained from the simulated Gaussian white noise shown in Figure 4.1 for various values of the parameters (θ1, θ2). The blue series (with smaller variance) is

xt = zt + 0.5zt−1 + 0.5zt−2, while the purple series (with larger variance) is

xt = zt + 5zt−1 + 5zt−2, where zt are realizations of an i.i.d. Gaussian noise.
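Up to the particular noise draw, the two series can be reproduced with a short sketch like the following (the seed and sample size are arbitrary; the parameter pairs (0.5, 0.5) and (5, 5) are the ones quoted above). The sample variances roughly match the theoretical value (1 + θ1² + θ2²)σ² from Example 4.5.

```python
import numpy as np

rng = np.random.default_rng(42)
z = rng.standard_normal(102)                     # i.i.d. Gaussian noise, two extra values for the lags

def ma2(z, th1, th2):
    """x_t = z_t + th1 * z_{t-1} + th2 * z_{t-2}"""
    return z[2:] + th1 * z[1:-1] + th2 * z[:-2]

x_small = ma2(z, 0.5, 0.5)     # smaller-variance series
x_large = ma2(z, 5.0, 5.0)     # larger-variance series

print(round(x_small.var(), 2), "vs theoretical", 1 + 0.5**2 + 0.5**2)   # about 1.5
print(round(x_large.var(), 2), "vs theoretical", 1 + 5.0**2 + 5.0**2)   # about 51
```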


Figure 4.5: Two simulated MA(2) processes, both from the white noise shown in Figure 4.1, but for different sets of parameters.


Figure 4.6: (a) Sample ACF for xt = zt + 0.5zt−1 + 0.5zt−2 and (b) for xt = zt + 5zt−1 + 5zt−2.

As we can see, very different processes can be obtained by varying the parameters. This is an important property of MA(q) processes, which gives a very large family of models. This property is reinforced by the following proposition.

Proposition 4.2

If {Xt} is a stationary q-correlated time series with mean zero, then it can be represented as an MA(q) process. 

The following theorem gives the form of the ACF for a general MA(q).

Theorem 4.2 An MA(q) process as in Definition 4.7 is a weakly stationary TS with the ACVF

γ(τ) = σ² Σ_{j=0}^{q−|τ|} θj θj+|τ| if |τ| ≤ q;  0 if |τ| > q,   (4.11)

where θ0 is defined to be 1. It follows from the above theorem that the ACF is given by

ρ(τ) = (Σ_{j=0}^{q−|τ|} θj θj+|τ|) / (Σ_{j=0}^{q} θj²) if |τ| ≤ q;  0 if |τ| > q.   (4.12)
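Formulae (4.11) and (4.12) are easy to evaluate directly; the helper below (a sketch, with the made-up name ma_acf and θ0 = 1 prepended as in the theorem) returns the theoretical ACVF and ACF of an MA(q) for a given coefficient vector.

```python
import numpy as np

def ma_acf(thetas, max_lag=10, sigma2=1.0):
    """Theoretical ACVF and ACF of MA(q), equations (4.11)-(4.12)."""
    th = np.concatenate(([1.0], np.asarray(thetas, dtype=float)))   # (θ0, θ1, ..., θq) with θ0 = 1
    q = len(th) - 1
    gamma = np.array([
        sigma2 * np.sum(th[:q - tau + 1] * th[tau:]) if tau <= q else 0.0
        for tau in range(max_lag + 1)
    ])
    return gamma, gamma / gamma[0]

gamma, rho = ma_acf([0.5, 0.5], max_lag=4)
print(rho)    # 1, 0.5, 1/3, 0, 0 (matches Example 4.5 with theta1 = theta2 = 0.5)
```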

Remark 4.10 The ACVF (ACF) of an MA(q) has a distinct “cut-off” at lag τ = q.

Remark 4.11 Note that an arbitrary constant, say µ, can be added to the right hand side of Definition 4.7 to give a TS with mean µ. This does not affect the ACF and has been omitted for simplicity.

§4.3.1 Non-uniqueness of MA Models

Consider an example of MA(1)

Xt = Zt + θZt−1

whose ACF is

ρ(τ) = 1 if τ = 0;  θ/(1 + θ²) if τ = ±1;  0 if |τ| > 1.


Note that for q = 1, the maximum value of |ρ(1)| is 0.5. This can be verified directly from the above formula for the ACF. Treating ρ(1) as a function of θ we can calculate its extrema. Let

f(θ) = θ/(1 + θ²).

Then

f′(θ) = (1 − θ²)/(1 + θ²)².

The derivative is equal to 0 at θ = ±1 and the function f attains its maximum at θ = 1 and minimum at θ = −1. We have f(1) = 1/2 and f(−1) = −1/2. This fact can be helpful in recognizing MA(1) processes. In fact, an MA(1) with |θ| = 1 may be uniquely identified from the autocorrelation function.

However, it is easy to see that θ and 1/θ give the same ACF! Take, for example, 5 and 1/5. In both cases

ρ(τ) = 1 if τ = 0;  5/26 if τ = ±1;  0 if |τ| > 1.

Also, the pair σ² = 1, θ = 5 gives the same ACVF as the pair σ² = 25, θ = 1/5, namely

γ(τ) = (1 + θ²)σ² = 26 if τ = 0;  θσ² = 5 if τ = ±1;  0 if |τ| > 1.

Hence, the MA(1) processes

Xt = Zt + (1/5)Zt−1,  Zt ∼ iid 풩(0, 25),

and

Xt = Yt + 5Yt−1,  Yt ∼ iid 풩(0, 1),

are the same.
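A two-line numerical check of this equivalence, using the MA(1) case of formula (4.3) (purely illustrative):

```python
# gamma(0) = (1 + theta^2) * sigma^2 and gamma(1) = theta * sigma^2 for MA(1)
for sigma2, theta in [(1.0, 5.0), (25.0, 0.2)]:
    print(sigma2, theta, "->", (1 + theta ** 2) * sigma2, theta * sigma2)
# both parameter pairs give gamma(0) = 26.0 and gamma(1) = 5.0
```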

Since we can only observe variables Xt and not the noise variables, we cannot distinguish between these two models. Is there any reason to prefer one of the two processes over the other? If yes, which one? Next, we develop the notion of invertibility which will help us answer these questions.

§4.3.2 Invertibility of MA Processes

The MA(1) process can be expressed in terms of lagged values of Xt by substituting repeatedly for lagged values of Zt.

Thus, the substitution Zt = Xt − θZt−1 yields

Zt = Xt − θZt−1
= Xt − θ(Xt−1 − θZt−2)
= Xt − θXt−1 + θ²Zt−2
= Xt − θXt−1 + θ²(Xt−2 − θZt−3)
= Xt − θXt−1 + θ²Xt−2 − θ³Zt−3
= ...
= Xt − θXt−1 + θ²Xt−2 − θ³Xt−3 + θ⁴Xt−4 + ... + (−θ)ⁿZt−n.


This can be rewritten as

(−θ)ⁿ Zt−n = Zt − Σ_{j=0}^{n−1} (−θ)^j Xt−j.

However, if |θ| < 1, then

E(Zt − Σ_{j=0}^{n−1} (−θ)^j Xt−j)² = E(θ^{2n} Zt−n²) = θ^{2n} σ² → 0 as n → ∞,

and we say that the sum is convergent in the mean square sense. Hence, we obtain another representation of this model

Zt = Σ_{j=0}^{∞} (−θ)^j Xt−j.

This turns out to be a representation of another class of models, called infinite autoregressive (AR) models. Thus, we inverted the MA(1) to an infinite AR. This was possible due to the assumption that |θ| < 1. MA(1) processes satisfying this assumption are called invertible. For various reasons (related to estimation and prediction) it is desirable that a TS be invertible. Thus, in the previous example (§4.3.1) we would choose the instance with σ² = 25, θ = 1/5.
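One practical payoff of invertibility can be illustrated numerically: for |θ| < 1 the truncated sum Σ_{j=0}^{m} (−θ)^j Xt−j recovers Zt almost exactly, since the neglected remainder is of order θ^{m+1}. The sketch below (assumed seed, truncation point m = 20, and the invertible parameterisation σ² = 25, θ = 1/5) is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(3)
theta, n, m = 0.2, 300, 20
z = 5.0 * rng.standard_normal(n + 1)              # WN(0, 25)
x = z[1:] + theta * z[:-1]                        # invertible MA(1)

# truncated inversion: z_hat_t = sum_{j=0}^{m} (-theta)^j * x_{t-j}
z_hat = np.array([sum((-theta) ** j * x[t - j] for j in range(m + 1))
                  for t in range(m, n)])

print(np.max(np.abs(z_hat - z[1:][m:n])))         # tiny: the noise is recovered almost exactly
```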

§4.4 Linear Processes

Definition 4.8 Let Zt ∼ WN(0, σ²), −∞ < t < ∞. A TS {Xt} is called a linear process if it has the representation

Xt = Σ_{j=−∞}^{∞} ψj Zt−j,   (4.13)

for all t, where {ψj} is a sequence of constants such that Σ_{j=−∞}^{∞} |ψj| < ∞.


Remark 4.12 The condition Σ_{j=−∞}^{∞} |ψj| < ∞ ensures that Xt is well-defined, i.e. that the infinite sum converges with probability one, or almost surely, and also in the mean square sense, that is

E(Xt − Σ_{j=−n}^{n} ψj Zt−j)² → 0 as n → ∞.

It can be shown that Σ_{j=−∞}^{∞} |ψj| < ∞ ensures convergence of the infinite sum in (4.13) also in the mean. Consequently, Σ_{j=−∞}^{∞} |ψj| < ∞ allows us to establish that E(Xt) = Σ_{j=−∞}^{∞} E(ψj Zt−j) = 0.

Remark 4.13

MA(∞) is a linear process with ψj = 0 for j < 0, that is, MA(∞) has the representation Xt = Σ_{j=0}^{∞} ψj Zt−j.

Note that the formula (4.13) can be written using the backward shift operator B. We have

Zt−j = B^j Zt.

Hence

Xt = Σ_{j=−∞}^{∞} ψj Zt−j = Σ_{j=−∞}^{∞} ψj B^j Zt.

Denoting

ψ(B) = Σ_{j=−∞}^{∞} ψj B^j,   (4.14)

we can write the linear process in a neat way

Xt = ψ(B)Zt.

The operator ψ(B) is a linear filter, which when applied to a stationary process produces a stationary process. This fact is proved in the following proposition.

Proposition 4.3

Let {Yt} be a stationary TS with mean zero and autocovariance function γY. If Σ_{j=−∞}^{∞} |ψj| < ∞, then the process

Xt = Σ_{j=−∞}^{∞} ψj Yt−j = ψ(B)Yt   (4.15)

is stationary with mean zero and autocovariance function

γX(τ) = Σ_{j=−∞}^{∞} Σ_{k=−∞}^{∞} ψj ψk γY(τ − k + j).   (4.16)

Proof

The assumption Σ_{j=−∞}^{∞} |ψj| < ∞ assures convergence of the series. When combined with the Dominated Convergence Theorem and the Cauchy-Schwarz-Bunyakovsky inequality (not examinable), this assumption also allows us to change the order of expectation and infinite summation in the expressions given below:

First, since E Yt = 0, we have

E Xt = E(Σ_{j=−∞}^{∞} ψj Yt−j) = Σ_{j=−∞}^{∞} ψj E(Yt−j) = 0

and

Proof cont'd:

E(Xt Xt+τ) = E[(Σ_{j=−∞}^{∞} ψj Yt−j)(Σ_{k=−∞}^{∞} ψk Yt+τ−k)]
= Σ_{j=−∞}^{∞} Σ_{k=−∞}^{∞} ψj ψk E(Yt−j Yt+τ−k)
= Σ_{j=−∞}^{∞} Σ_{k=−∞}^{∞} ψj ψk γY(τ − k + j).

Hence, {Xt} is a stationary TS with the autocovariance function given by formula (4.16). 

Corollary 4.1

If {Yt} is a white noise process, then {Xt} given by (4.15) is a stationary linear process with zero mean and the ACVF

γX(τ) = σ² Σ_{j=−∞}^{∞} ψj ψj+τ.   (4.17)
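Formula (4.17) can be checked numerically by truncating the sum for any absolutely summable ψ-sequence. The sketch below (illustrative only) uses geometrically decaying weights ψj = 0.5^j, for which the sum has the closed form 0.5^τ / (1 − 0.25).

```python
import numpy as np

def linear_process_acvf(psi, tau, sigma2=1.0):
    """Truncated version of (4.17): gamma(tau) = sigma^2 * sum_j psi_j * psi_{j+|tau|}."""
    psi = np.asarray(psi, dtype=float)
    tau = abs(tau)
    return sigma2 * np.sum(psi[:len(psi) - tau] * psi[tau:])

phi = 0.5
psi = phi ** np.arange(200)                  # psi_j = 0.5^j, absolutely summable
for tau in range(4):
    print(tau, linear_process_acvf(psi, tau), phi ** tau / (1 - phi ** 2))   # the two values agree
```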

§4.5 Autoregressive Processes AR(p)

The idea behind the autoregressive models is to explain the present value of the series, Xt, by a function of p past values, Xt−1, Xt−2,..., Xt−p.

Definition 4.9

An autoregressive process of order p is written as

Xt = φ1Xt−1 + φ2Xt−2 + ... + φpXt−p + Zt, (4.18)

where {Zt} is white noise, i.e., {Zt} ∼ WN(0, σ²), and Zt is uncorrelated with Xs for any s < t.


Remark 4.14

We assume for simplicity of notation that the mean of Xt is zero. If the mean is E Xt = µ ≠ 0, then we replace Xt by Xt − µ or, equivalently, we can write

Xt = α + φ1Xt−1 + φ2Xt−2 + ... + φpXt−p + Zt, where α = µ(1 − φ1 − ... − φp).

Other ways of writing the AR(p) model use: Vector notation: Denote

φ = (φ1, φ2,..., φp)ᵀ,  Xt−1 = (Xt−1, Xt−2,..., Xt−p)ᵀ.

Then the formula (4.18) can be written as

Xt = φᵀ Xt−1 + Zt.

Backshift operator: Namely, writing the model (4.18) in the form

Xt − φ1Xt−1 − φ2Xt−2 − ... − φpXt−p = Zt,

and applying BXt = Xt−1 we get

(1 − φ1B − φ2B² − ... − φpB^p)Xt = Zt

or, using the concise notation we write

φ(B)Xt = Zt, (4.19) where φ(B) denotes the autoregressive operator

φ(B) = 1 − φ1B − φ2B² − ... − φpB^p.

Then the AR(p) can be viewed as a solution to the equation (4.19), i.e., Xt = (1/φ(B)) Zt.   (4.20)

§4.5.1 AR(1)

According to the Definition 4.9 an autoregressive process of order 1 has to satisfy

Xt = φXt−1 + Zt,   (4.21)

where Zt ∼ WN(0, σ²) and φ is a non-zero constant. There are (infinitely) many processes satisfying (4.21) (since we can freely choose, say, X0). Are any of these AR(1) processes stationary? If a stationary AR(1) process exists, E(Xt) would have to be zero. Also, we are interested in stationary processes with finite variance v = E(Xt²) < ∞. Corollary 4.1 says that an infinite combination of white noise variables is a stationary process. Here, due to the recursive form of the TS, we can write AR(1) in such a form. Namely

Xt = φXt−1 + Zt

= φ(φXt−2 + Zt−1) + Zt
= φ²Xt−2 + φZt−1 + Zt
= ...
= φ^k Xt−k + Σ_{j=0}^{k−1} φ^j Zt−j.

This can be rewritten as

φ^k Xt−k = Xt − Σ_{j=0}^{k−1} φ^j Zt−j.

What would we obtain if we continued the backwards operation, i.e., what happens when k → ∞? Taking the expectation we obtain

lim_{k→∞} E(Xt − Σ_{j=0}^{k−1} φ^j Zt−j)² = lim_{k→∞} φ^{2k} v = 0

provided that |φ| < 1. Hence, a stationary AR(1) does exist as the limit

Xt = Σ_{j=0}^{∞} φ^j Zt−j   (4.22)

in the mean square sense. This is a linear process (4.13) with

ψj = φ^j for j ≥ 0,  ψj = 0 for j < 0.   (4.23)

This technique of iterating backwards works well for AR(1) but not for higher orders. A more general way to convert the series into a linear process form is the following method of matching coefficients. The AR(1) model is φ(B)Xt = Zt, where φ(B) = 1 − φB and |φ| < 1. We want to write the model as a linear process

Xt = Σ_{j=0}^{∞} ψj Zt−j = ψ(B)Zt,

where ψ(B) = Σ_{j=0}^{∞} ψj B^j.

We want to find the coefficients ψj. Substituting Zt from the AR model into the linear process model we obtain

Xt = ψ(B)Zt = ψ(B)φ(B)Xt. (4.24)

In full, the resulting identity ψ(B)φ(B) = 1 can be written as

1 = (1 + ψ1B + ψ2B² + ψ3B³ + ...)(1 − φB)
= 1 + ψ1B + ψ2B² + ψ3B³ + ... − φB − ψ1φB² − ψ2φB³ − ψ3φB⁴ − ...
= 1 + (ψ1 − φ)B + (ψ2 − ψ1φ)B² + (ψ3 − ψ2φ)B³ + ...

Now, equating coefficients on the LHS and RHS of this equation we see that all the coefficients of B^j, j ≥ 1, must be zero, i.e.,

ψ1 = φ,  ψ2 = ψ1φ = φ²,  ψ3 = ψ2φ = φ³,  ...,  ψj = ψj−1φ = φ^j.

So, we obtained the linear process form of the AR(1)

Xt = Σ_{j=0}^{∞} φ^j Zt−j.

Remark 4.15

Note that from equation (4.24) it follows that ψ(B) is the inverse of φ(B), written formally as ψ(B) = 1/φ(B).   (4.25)

For an AR(1) we have ψ(B) = 1/(1 − φB) = 1 + φB + φ²B² + φ³B³ + ...   (4.26)

So, since |φ| < 1, we have Σ_{j=0}^{∞} |φ^j| < ∞ and hence AR(1) is stationary with

E Xt = Σ_{j=0}^{∞} φ^j E(Zt−j) = 0   (4.27)

and autocovariance function given by (4.17), that is

γ(τ) = σ² Σ_{j=0}^{∞} φ^j φ^{j+τ} = σ² φ^τ Σ_{j=0}^{∞} φ^{2j}.

The infinite sum in this expression is the sum of a geometric progression as |φ| < 1, i.e.,

Σ_{j=0}^{∞} φ^{2j} = 1/(1 − φ²).

Corollary 4.1 also gives us the ACVF of the newly defined stationary AR(1) process.

γ(τ) = σ²φ^|τ| / (1 − φ²).   (4.28)

Then the variance of AR(1) is

γ(0) = σ²/(1 − φ²).

Hence, the autocorrelation function of AR(1) is ρ(τ) = γ(τ)/γ(0) = φ^|τ|.   (4.29)
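A quick empirical check of (4.29): simulate a causal AR(1) by forward recursion, drop a burn-in so the arbitrary starting value is forgotten, and compare the sample ACF with φ^τ (a sketch with an assumed seed, sample size and burn-in length).

```python
import numpy as np

rng = np.random.default_rng(7)
phi, n, burn = 0.9, 2000, 200
z = rng.standard_normal(n + burn)

x = np.zeros(n + burn)
for t in range(1, n + burn):
    x[t] = phi * x[t - 1] + z[t]             # X_t = phi * X_{t-1} + Z_t
x = x[burn:]                                 # discard the burn-in

xc = x - x.mean()
acf = [np.sum(xc[:len(xc) - k] * xc[k:]) / np.sum(xc ** 2) for k in range(5)]
print([round(a, 2) for a in acf])            # roughly 1, 0.9, 0.81, 0.73, 0.66, i.e. phi^k
```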

However, when |φ| = 1, there is no stationary solution of (4.21). Figures 4.7, 4.9 and 4.8, 4.10 show simulated AR(1) processes for four different values of the coefficient φ (equal to −0.9, 0.9, −0.5 and 0.5) and the respective sample ACF functions.


Figure 4.7: Simulated AR(1) processes for φ = −0.9 (top) and for φ = 0.9 (bottom).


Figure 4.8: Sample ACF for AR(1): (a) xt = −0.9xt−1 + zt and (b) xt = 0.9xt−1 + zt.


Figure 4.9: Simulated AR(1) processes for φ = −0.5 (top) and for φ = 0.5 (bottom).


Figure 4.10: Sample ACF for AR(1): (a) xt = −0.5xt−1 + zt and (b) xt = 0.5xt−1 + zt.

Looking at these graphs we can see that for the positive values of φ we obtain smoother TS than for the negative ones. Also, the ACFs are very different. We see that if φ is negative the neighboring observations are negatively correlated, but those two time points apart are positively correlated. In fact, if φ is negative the neighboring TS values have typically opposite signs. This is more evident if φ is close to −1.

§4.5.2 Random Walk

This is a TS which at each point of time moves randomly away from its current position. The model can then be written as

Xt = Xt−1 + Zt, (4.30)

where Zt is a white noise variable with zero mean and constant variance σ². The model has the same form as the AR(1) process, but since φ = 1, it is not stationary. Such a process is called a random walk.


Repeatedly substituting for past values gives

Xt = Xt−1 + Zt

= Xt−2 + Zt−1 + Zt

= Xt−3 + Zt−2 + Zt−1 + Zt
= ...
= X0 + Σ_{j=0}^{t−1} Zt−j.

If the initial value, X0, is constant, then the mean value of Xt is equal to X0, that is

E Xt = E[X0 + Σ_{j=0}^{t−1} Zt−j] = X0.

So, the mean is constant, but as we see next, the variance and covariance do depend on time, not just on lag.

171 of 295 ⎛ t−1 ⎞ ⎜ ∑︁ ⎟ var(X ) = var ⎜X + Z − ⎟ t ⎝⎜ 0 t j⎠⎟ j=0 ⎛ t−1 ⎞ ⎜∑︁ ⎟ = var ⎜ Z − ⎟ ⎝⎜ t j⎠⎟ j=0 t−1 ∑︁ 2 = var(Zt− j) = tσ j=0 since Zt are uncorrelated. Also, ⎛ ⎞ t1−1 t2−1 ⎜∑︁ ∑︁ ⎟ cov(X , X ) = cov ⎜ Z − , Z − ⎟ t1 t2 ⎝⎜ t1 j t2 k⎠⎟ j=0 k=0 ⎡⎛ ⎞ ⎛ ⎞⎤ ⎢⎜t1−1 ⎟ t2−1 ⎥ ⎢⎜∑︁ ⎟ ⎜∑︁ ⎟⎥ = E ⎢⎜ Zt − j⎟ ⎜ Zt −k⎟⎥ ⎣⎢⎝⎜ 1 ⎠⎟ ⎝⎜ 2 ⎠⎟⎦⎥ j=0 k=0 2 = σ min{t1, t2}.

Two simulated series of this form are shown in Figure 4.11. As we can see, the random walk meanders away from its starting value in no particular direction. It does not exhibit any clear trend, but at the same time it is not stationary. However, the first difference of a random walk is stationary, as it is just white noise, namely

∇Xt = Xt − Xt−1 = Zt.

The differenced random walk and its sample ACF are shown in Figure 4.12.

Figure 4.11: Two simulated Random Walks with x0 = 0, xt = xt−1 + zt, Zt ∼ N(0, 1).

Figure 4.12: Differenced Random Walk (from bottom of Figure 4.11) ∇xt = zt (top) and its sample ACF (bottom).

§4.5.3 Explosive AR(1) Model and Causality

As we have seen in the previous section, the random walk, which is AR(1) with φ = 1, is not a stationary process. So, the question is whether a stationary AR(1) process with |φ| > 1 exists. Also, what are the properties of AR(1) models for φ > 1? Clearly, the sum Σ_{j=0}^{k−1} φ^j Zt−j will not converge in the mean square sense as k → ∞, and we will not get a linear process representation of the AR(1).


However, if |φ| > 1 then 1/|φ| < 1 and we can express a past value of the TS in terms of a future value by rewriting

Xt+1 = φXt + Zt+1

as

Xt = φ⁻¹Xt+1 − φ⁻¹Zt+1.

Then, substituting for Xt+ j several times we obtain

Xt = φ⁻¹Xt+1 − φ⁻¹Zt+1
= φ⁻¹(φ⁻¹Xt+2 − φ⁻¹Zt+2) − φ⁻¹Zt+1
= φ⁻²Xt+2 − φ⁻²Zt+2 − φ⁻¹Zt+1
= ...
= φ^{−k}Xt+k − Σ_{j=1}^{k} φ^{−j} Zt+j

Since |φ⁻¹| < 1 and since we seek a stationary process, we obtain

Xt = − Σ_{j=1}^{∞} φ^{−j} Zt+j,

which is a future-dependent stationary TS. This, however, does not have much practical meaning because it requires knowledge of future values to define the present value. When the current value of a process does not involve any observations from the future, e.g. AR(1) with |φ| < 1, we say that such a process is causal. Figure 4.13 shows a simulated causal series xt = 1.02xt−1 + zt. As we can see, the values of the time series quickly become large in magnitude, even for φ just slightly above 1. Such a process is called explosive.
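The explosive behaviour is visible even in a very short simulation (a sketch; the coefficient 1.02 matches the series in Figure 4.13, the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n, phi = 200, 1.02
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()   # x_t = 1.02 * x_{t-1} + z_t

print(np.round(np.abs(x[[49, 99, 199]]), 1))        # magnitudes tend to grow roughly like phi^t
```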


Figure 4.13: A simulated causal explosive AR(1): xt = 1.02xt−1 + zt.

§4.6 Autoregressive Moving Average Model ARMA(1,1)

This section is an introduction to a wide class of models ARMA(p,q) which we will consider in more detail later in this course. The special case, ARMA(1,1), is defined by linear difference equations with constant coefficients as follows.


Definition 4.10

Let φ and θ be non-zero constants. A TS {Xt} is an ARMA(1,1) process if it satisfies

Xt − φXt−1 = Zt + θZt−1, for every t, −∞ < t < ∞, (4.31)

where {Zt} ∼ WN(0, σ²).

Such a model may be viewed as a generalization of the two previously introduced models AR(1) and MA(1). Compare

AR(1) Xt = φXt−1 + Zt

MA(1) Xt = Zt + θZt−1

ARMA(1,1) Xt − φXt−1 = Zt + θZt−1

Hence, when φ = 0, ARMA(1,1) ≡ MA(1) and we denote such a process as ARMA(0,1). Similarly, when θ = 0, ARMA(1,1) ≡ AR(1) and we denote such a process as ARMA(1,0). Here, as in the MA and AR models, we can use the backshift operator to write the ARMA model more concisely as

φ(B)Xt = θ(B)Zt, (4.32) where φ(B) and θ(B) are the linear filters

φ(B) = 1 − φB, θ(B) = 1 + θB.

§4.6.1 Causality and invertibility of ARMA(1,1)

For which values of the parameters φ and θ does a stationary causal ARMA(1,1) process exist? What about invertibility of ARMA(1,1)? A solution to (4.31), or to (4.32), can be formally written as

Xt = (θ(B)/φ(B)) Zt.


However, for |φ| < 1 we have (see Remark 4.15)

θ(B)/φ(B) = (1 + φB + φ²B² + φ³B³ + ...)(1 + θB)
= 1 + φB + φ²B² + φ³B³ + ... + θB + φθB² + φ²θB³ + φ³θB⁴ + ...
= 1 + (φ + θ)B + (φ² + φθ)B² + (φ³ + φ²θ)B³ + ...
= 1 + (φ + θ)B + (φ + θ)φB² + (φ + θ)φ²B³ + ...
= Σ_{j=0}^{∞} ψj B^j,

where ψ0 = 1 and ψj = (φ + θ)φ^{j−1} for j = 1, 2,....

Thus, we can write the solution to (4.32) in the form of an MA(∞) model, i.e.,

Xt = Zt + (φ + θ) Σ_{j=1}^{∞} φ^{j−1} Zt−j.   (4.33)

By Corollary 4.1, this is a stationary process, and it is evidently causal. Now, suppose that |φ| > 1. Then, by similar arguments as in the AR(1) model, it can be shown that

Xt = −θφ⁻¹Zt − (φ + θ) Σ_{j=1}^{∞} φ^{−j−1} Zt+j.

Here too, we obtained a noncausal process which depends on future noise values, hence of no practical value.

If |φ| = 1 then there is no stationary solution to (4.32) (neither causal nor non-causal).

While causality means that the process {Xt} is expressible in terms of past and present values of {Zt}, the dual property of invertibility means that the process {Zt} is expressible in terms of past and present values of {Xt}. Is ARMA(1,1) invertible? The ARMA(1,1) model is φ(B)Xt = θ(B)Zt.

Writing the solution for Zt we have Zt = (1/θ(B)) φ(B)Xt = (1/(1 + θB)) (1 − φB)Xt.   (4.34)

It can be made rigorous that the inverse of the operator (1 + θB) exists if and only if |θ| < 1, in which case

1/(1 + θB) = Σ_{j=0}^{∞} (−θ)^j B^j,

When this infinite expansion is applied to (4.34), it gives

Zt = Σ_{j=0}^{∞} (−θ)^j B^j (1 − φB)Xt = Xt − (φ + θ) Σ_{j=1}^{∞} (−θ)^{j−1} Xt−j.

Thus, ARMA(1,1) is invertible if and only if |θ| < 1.

When combined, these two properties, causality and invertibility, determine the admissible region for the values of the parameters φ and θ, which is the square −1 < φ < 1, −1 < θ < 1.

§4.6.2 ACVF and ACF of ARMA(1,1)

The fact that we can express an ARMA(1,1) TS as a linear process of the MA form

Xt = Σ_{j=0}^{∞} ψj Zt−j,

where Zt is a white noise, is very helpful in deriving the ACVF and ACF of the process. By Corollary 4.1 we have

γ(τ) = σ² Σ_{j=0}^{∞} ψj ψj+|τ|.


For a stationary ARMA(1,1) the coefficients ψj are given in §4.6.1 as follows: ψ0 = 1 and ψj = (φ + θ)φ^{j−1} for j = 1, 2,...

Knowing ψj, we can now derive expressions for γ(0) and γ(1).

190 of 295 ∞ 2 ∑︁ 2 γ(0) = σ ψ j j=0 ⎡ ∞ ⎤ ⎢ ∑︁ ⎥ = σ2 ⎢1 + (φ + θ)2 φ2( j−1)⎥ ⎣⎢ ⎦⎥ j=1 ⎡ ∞ ⎤ ⎢ ∑︁ ⎥ = σ2 ⎢1 + (φ + θ)2 φ2 j⎥ ⎣⎢ ⎦⎥ j=0 [︃ ]︃ (φ + θ)2 = σ2 1 + 1 − φ2 and

γ(1) = σ² Σ_{j=0}^{∞} ψj ψj+1
= σ² [(φ + θ) + (φ + θ)(φ + θ)φ + (φ + θ)φ (φ + θ)φ² + (φ + θ)φ² (φ + θ)φ³ + ...]
= σ² [(φ + θ) + (φ + θ)²φ (1 + φ² + φ⁴ + ...)]
= σ² [(φ + θ) + (φ + θ)²φ Σ_{j=0}^{∞} φ^{2j}]
= σ² [(φ + θ) + (φ + θ)²φ/(1 − φ²)]

Similar derivations for |τ| ≥ 2 give

γ(τ) = φ^{|τ|−1} γ(1).   (4.35)

Hence, we can calculate the autocorrelation function ρ(τ). For |τ| = 1 we obtain

ρ(1) = γ(1)/γ(0) = (φ + θ)(1 + φθ)/(1 + 2φθ + θ²)   (4.36)

and for |τ| ≥ 2 we have

ρ(τ) = φ^{|τ|−1} ρ(1).   (4.37)
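Formulae (4.36) and (4.37) can be packaged into a small helper (a sketch with a made-up name; the parameter values below are illustrative). Note how the choice φ = −θ gives zero autocorrelation at every lag, i.e. white noise, as discussed next.

```python
def arma11_acf(phi, theta, max_lag=5):
    """Theoretical ACF of a causal ARMA(1,1), equations (4.36)-(4.37)."""
    rho1 = (phi + theta) * (1 + phi * theta) / (1 + 2 * phi * theta + theta ** 2)
    return [1.0] + [phi ** (k - 1) * rho1 for k in range(1, max_lag + 1)]

print(arma11_acf(0.9, 0.5))      # slowly decaying, all positive
print(arma11_acf(0.5, -0.5))     # phi = -theta: zero at all lags >= 1 (white noise)
```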

From these formulae we can see that when φ = −θ the ACF ρ(τ) = 0 for τ = 1, 2,... and the process is just white noise. Figure 4.14 shows the admissible region for the parameters φ and θ for a stationary ARMA(1,1) series, and indicates the regions where we have the special cases of ARMA(1,1), which are white noise, AR(1) and MA(1).


Figure 4.14: Admissible parameter region for ARMA(1,1)

[Figure 4.15 panels: top row φ = −0.9, θ = 0.5 and φ = 0.9, θ = 0.5; bottom row φ = −0.9, θ = −0.5 and φ = 0.9, θ = −0.5.]

Figure 4.15: ARMA(1,1) for various values of the parameters φ and θ.


Figure 4.16: ACF of the ARMA(1,1) processes with the parameter values as in Figure 4.15, respectively.
