4. Autoregressive, MA and ARMA processes
4.1 Autoregressive processes
Outline:
• Introduction
• The first-order autoregressive process, AR(1)
• The AR(2) process
• The general autoregressive process AR(p)
• The partial autocorrelation function
Recommended readings:
B Chapter 4 of D. Peña (2008).
B Chapter 2 of P.J. Brockwell and R.A. Davis (1996).
B Chapter 3 of J.D. Hamilton (1994).
Introduction
B In this section we will begin our study of models for stationary processes which are useful in representing the dependency of the values of a time series on its past.
B The simplest family of these models are the autoregressive processes, which generalize the idea of regression to represent the linear dependence between a dependent variable y (zt) and an explanatory variable x (zt−1), using the relation:
zt = c + b zt−1 + at
where c and b are constants to be determined and the at are i.i.d. N(0, σ²). The above relation defines the first order autoregressive process.
B This linear dependence can be generalized so that the present value of the series, zt, depends not only on zt−1, but also on the previous p lags, zt−2, ..., zt−p. Thus, an autoregressive process of order p is obtained.
The first-order autoregressive process, AR(1)
B We say that a series zt follows a first order autoregressive process, or AR(1), if it has been generated by:
zt = c + φzt−1 + at (33)
where c and −1 < φ < 1 are constants and at is a white noise process with variance σ². The variables at, which represent the new information added to the process at each instant, are known as innovations.
Example 36. We will consider zt as the quantity of water at the end of the month in a reservoir. During the month, c + at amount of water comes into the reservoir, where c is the average quantity that enters and at is the innovation, a random variable of zero mean and constant variance that causes this quantity to vary from one period to the next.
If a fixed proportion of the initial amount, (1 − φ)zt−1, is used up each month, and a proportion, φzt−1, is maintained, the quantity of water in the reservoir at the end of the month will follow process (33).
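As an illustration, here is a minimal simulation sketch of the recursion (33) in Python; the helper name, the parameter values and the sample size are arbitrary choices for illustration, not taken from the notes.

```python
import numpy as np

def simulate_ar1(c, phi, sigma, n, z0=0.0, seed=0):
    """Simulate z_t = c + phi*z_{t-1} + a_t with a_t ~ N(0, sigma^2)."""
    rng = np.random.default_rng(seed)
    a = rng.normal(0.0, sigma, size=n)   # innovations
    z = np.empty(n)
    prev = z0
    for t in range(n):
        z[t] = c + phi * prev + a[t]
        prev = z[t]
    return z

z = simulate_ar1(c=1.0, phi=0.7, sigma=1.0, n=5_000)
print(z.mean(), 1.0 / (1 - 0.7))   # sample mean vs. theoretical mean c/(1 - phi)
```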
B The condition −1 < φ < 1 is necessary for the process to be stationary. To prove this, let us assume that the process begins with z0 = h, with h being any fixed value. The following value will be z1 = c + φh + a1, the next z2 = c + φz1 + a2 = c + φ(c + φh + a1) + a2 and, substituting successively, we can write:
z1 = c + φh + a1
z2 = c(1 + φ) + φ²h + φa1 + a2
z3 = c(1 + φ + φ²) + φ³h + φ²a1 + φa2 + a3
...
zt = c Σ_{i=0}^{t−1} φ^i + φ^t h + Σ_{i=0}^{t−1} φ^i at−i
If we calculate the expectation of zt, since E[at] = 0,
E[zt] = c Σ_{i=0}^{t−1} φ^i + φ^t h.
For the process to be stationary it is a necessary condition that this function does not depend on t.
B The mean is constant if both summands are, which requires that, as t increases, the first term converges to a constant and the second vanishes. Both conditions are verified if |φ| < 1, because then Σ_{i=0}^{t−1} φ^i is the sum of a geometric progression with ratio φ and converges to 1/(1 − φ), while the term φ^t converges to zero; thus the mean converges to the constant c/(1 − φ).
B With this condition, after an initial transition period, when t → ∞, all the variables zt will have the same expectation, µ = c/(1 − φ), independent of the initial conditions.
B We also observe that in this process the innovation at is uncorrelated with the previous values of the process, zt−k for positive k since zt−k depends on the values of the innovations up to that time, a1, ..., at−k, but not on future values. Since the innovation is a white noise process, its future values are uncorrelated with past ones and, therefore, with previous values of the process, zt−k.
B The AR(1) process can be written using the notation of the lag operator, B, defined by Bzt = zt−1. (34)
Letting z̃t = zt − µ and since Bz̃t = z̃t−1, we have:
(1 − φB)z̃t = at. (35)
B This condition indicates that a series follows an AR(1) process if on applying the operator (1 − φB) a white noise process is obtained.
B The operator (1 − φB) can be interpreted as a filter that when applied to the series converts it into a series with no information, a white noise process.
B If we consider the operator as an equation in B, the coefficient φ is called the factor of the equation.
B The stationarity condition is that this factor be less than one in absolute value.
B Alternatively, we can talk about the root of the equation of the operator, which is obtained by making the operator equal to zero and solving the equation with B as an unknown; 1 − φB = 0 which yields B = 1/φ.
B The condition of stationarity is then that the root of the operator be greater than one in absolute value.
Expectation
B Taking expectations in (33) assuming |φ| < 1, such that E [zt] = E[zt−1] = µ, we obtain µ = c + φµ
Then, the expectation (or mean) is:
µ = c / (1 − φ) (36)
Replacing c in (33) with µ(1 − φ), the process can be written in deviations from the mean:
zt − µ = φ(zt−1 − µ) + at
and letting z̃t = zt − µ,
z̃t = φz̃t−1 + at (37)
which is the most often used form of the AR(1) equation.
Variance
B The variance of the process is obtained by squaring expression (37) and taking expectations, which gives us:
E(z̃t²) = φ²E(z̃t−1²) + 2φE(z̃t−1 at) + E(at²).
Let σz² be the variance of the stationary process. The second term of this expression is zero, since z̃t−1 and at are independent and both have zero mean. The third is the variance of the innovation, σ², and we conclude that:
σz² = φ²σz² + σ²,
from which we find that the variance of the process is:
σz² = σ² / (1 − φ²). (38)
Note that the condition |φ| < 1 appears again in this equation, so that σz² is finite and positive.
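A quick numerical check of (38) on a simulated zero-mean AR(1); the parameter values and sample size are arbitrary illustration choices.

```python
import numpy as np

rng = np.random.default_rng(0)
phi, sigma, n = 0.7, 1.0, 200_000
a = rng.normal(0.0, sigma, size=n)
z = np.zeros(n)
for t in range(1, n):
    z[t] = phi * z[t - 1] + a[t]      # zero-mean AR(1) recursion (37)

print(z.var())                        # sample variance
print(sigma**2 / (1 - phi**2))        # theoretical variance from (38)
```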
B It is important to differentiate the marginal distribution of a variable from its conditional distribution given the previous value. The marginal distribution of each observation is the same, since the process is stationary: it has mean µ and variance σz². Nevertheless, the conditional distribution of zt when we know the previous value, zt−1, has conditional mean:
E(zt|zt−1) = c + φzt−1
and variance σ², which according to (38) is always less than σz².
B Knowing zt−1 reduces the uncertainty in the estimation of zt, and this reduction is greater the larger φ² is.
B If the AR parameter is close to one, the reduction of the variance obtained from knowledge of zt−1 can be very important.
Autocovariance function
B Using (37), multiplying by z̃t−k and taking expectations gives us γk, the covariance between observations separated by k periods, or the autocovariance of order k:
γk = E[(zt−k − µ)(zt − µ)] = E[z̃t−k (φz̃t−1 + at)]
and as E[z̃t−k at] = 0, since the innovations are uncorrelated with the past values of the series, we have the following recursion:
γk = φγk−1, k = 1, 2, ... (39)
where γ0 = σz².
B This equation shows that since |φ| < 1 the dependence between observations decreases when the lag increases.
B In particular, using (38):
γ1 = φσ² / (1 − φ²) (40)
Autocorrelation function, ACF
B Autocorrelations contain the same information as the autocovariances, with the advantage of not depending on the units of measurement. From here on we will use the term simple autocorrelation function (ACF) to denote the autocorrelation function of the process in order to differentiate it from other functions linked to the autocorrelation that are defined at the end of this section.
B Let ρk be the autocorrelation of order k, defined by ρk = γk/γ0. Using (39), we have:
ρk = φγk−1/γ0 = φρk−1.
Since, according to (38) and (40), ρ1 = φ, we conclude that:
ρk = φ^k (41)
and when k is large, ρk goes to zero at a rate that depends on φ.
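A sketch comparing the sample ACF of a simulated AR(1) with the theoretical values φ^k from (41); the parameter value and sample size are arbitrary illustration choices.

```python
import numpy as np

rng = np.random.default_rng(0)
phi, n = 0.7, 50_000
a = rng.normal(size=n)
z = np.zeros(n)
for t in range(1, n):
    z[t] = phi * z[t - 1] + a[t]

zc = z - z.mean()
acf = np.array([zc[k:] @ zc[:-k] for k in range(1, 11)]) / (zc @ zc)
print(np.round(acf, 3))                      # sample autocorrelations, lags 1..10
print(np.round(phi ** np.arange(1, 11), 3))  # theoretical phi^k from (41)
```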
B The expression (41) shows that the autocorrelation function of an AR(1) process is equal to the powers of the AR parameter of the process and decreases geometrically to zero.
B If the parameter is positive the linear dependence of the present on past values is always positive, whereas if the parameter is negative this dependence is positive for even lags and negative for odd ones.
B When the parameter is positive, the value at t is similar to the value at t − 1, due to the positive dependence, so the graph of the series evolves smoothly. When the parameter is negative, the value at t generally has the opposite sign of that at t − 1, so the graph shows many changes of sign.
Autocorrelation function - Example
[Figure: simulated series and sample ACFs (lags 1-20) of two AR(1) processes, with parameter φ = −0.5 (top) and φ = +0.7 (bottom).]
Representation of an AR(1) process as a sum of innovations
B The AR(1) process can be expressed as a function of the past values of the innovations. This representation is useful because it reveals certain properties of the process. Substituting in (37) for z̃t−1 its expression as a function of z̃t−2, we have:
z̃t = φ(φz̃t−2 + at−1) + at = at + φat−1 + φ²z̃t−2.
If we now replace z̃t−2 with its expression as a function of z̃t−3, we obtain:
z̃t = at + φat−1 + φ²at−2 + φ³z̃t−3
and repeatedly applying this substitution gives us:
z̃t = at + φat−1 + φ²at−2 + ... + φ^{t−1}a1 + φ^t z̃0
B If we assume t to be large, since φ^t will be close to zero we can represent the series as a function of all the past innovations, with weights that decrease geometrically.
B Another possibility is to assume that the series starts in the infinite past:
z̃t = Σ_{j=0}^{∞} φ^j at−j
and this representation is known as the infinite order moving average, MA(∞), representation of the process.
B Observe that the coefficients of the innovations are precisely the coefficients of the simple autocorrelation function.
B The MA(∞) expression can also be obtained directly by multiplying equation (35) by the operator (1 − φB)⁻¹ = 1 + φB + φ²B² + ..., thus obtaining:
z̃t = (1 − φB)⁻¹at = at + φat−1 + φ²at−2 + ...
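A sketch checking that the truncated MA(∞) sum Σ_j φ^j at−j reproduces the AR(1) recursion; the innovations, the parameter value and the truncation point are arbitrary illustration choices.

```python
import numpy as np

rng = np.random.default_rng(1)
phi, n = 0.7, 300
a = rng.normal(size=n)

# Zero-mean AR(1) recursion: z_t = phi*z_{t-1} + a_t
z = np.zeros(n)
for t in range(1, n):
    z[t] = phi * z[t - 1] + a[t]

# Truncated MA(inf) form: z_t ~= sum_{j=0}^{J} phi^j * a_{t-j}
J = 50
t = n - 1
ma_approx = sum(phi**j * a[t - j] for j in range(J + 1))
print(z[t], ma_approx)   # nearly identical once phi^J is negligible
```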
Representation of an AR(1) process as a sum of innovations - Example
Example 37. The figures show the monthly series of relative changes in the annual interest rate, defined by zt = log(yt/yt−1), and its ACF. The autocorrelation coefficients decrease with the lag: the first is of order .4, the second close to .4² = .16, the third is a similar value, and the rest are small and not significant.
[Figure: relative changes in the annual interest rates, Spain, 1988-2001, and sample ACF (lags 1-10). Datafile interestrates.wf1]
The AR(2) process
B The dependency between present and past values which an AR(1) establishes can be generalized by allowing zt to be linearly dependent not only on zt−1 but also on zt−2. Thus the second order autoregressive process, or AR(2), is obtained:
zt = c + φ1zt−1 + φ2zt−2 + at (42)
where c, φ1 and φ2 are now constants and at is a white noise process with variance σ².
B We are going to find the conditions that the parameters must verify for the process to be stationary. Taking expectations in (42) and imposing that the mean be constant results in:
µ = c + φ1µ + φ2µ
which implies
µ = c / (1 − φ1 − φ2), (43)
and the condition for the process to have a finite mean is that 1 − φ1 − φ2 ≠ 0.
B Replacing c with µ(1 − φ1 − φ2) and letting z̃t = zt − µ be the process of deviations from the mean, the AR(2) process is:
z̃t = φ1z̃t−1 + φ2z̃t−2 + at. (44)
B In order to study the properties of the process it is advisable to use the operator notation. Introducing the lag operator, B, the equation of this process is:
(1 − φ1B − φ2B²)z̃t = at. (45)
B The operator (1 − φ1B − φ2B²) can always be expressed as (1 − G1B)(1 − G2B), where G1⁻¹ and G2⁻¹ are the roots of the equation of the operator, obtained by considering B as a variable and solving:
1 − φ1B − φ2B² = 0. (46)
B The equation (46) is called the characteristic equation of the operator.
B G1 and G2 are also said to be the factors of the characteristic polynomial of the process. These factors can be real or complex conjugates.
B It can be proved that the condition of stationarity is that |Gi| < 1 , i = 1, 2.
B This condition is analogous to that studied for the AR(1).
B Note that this result is consistent with the condition found for the mean to be finite. If the equation
1 − φ1B − φ2B² = 0
has a unit root, then 1 − φ1 − φ2 = 0 and the process is not stationary, since it does not have a finite mean.
Autocovariance function
B Squaring expression (44) and taking expectations, we find that the variance must satisfy:
γ0 = φ1²γ0 + φ2²γ0 + 2φ1φ2γ1 + σ². (47)
B In order to calculate the autocovariances, multiplying equation (44) by z̃t−k and taking expectations, we obtain:
γk = φ1γk−1 + φ2γk−2, k ≥ 1. (48)
B Specifying this equation for k = 1, since γ−1 = γ1, we have γ1 = φ1γ0 + φ2γ1, which provides γ1 = φ1γ0/(1 − φ2). Using this expression in (47) results in the formula for the variance:
σz² = γ0 = (1 − φ2)σ² / [(1 + φ2)(1 − φ1 − φ2)(1 + φ1 − φ2)]. (49)
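A numerical check of (49) on a simulated AR(2); the parameter values are arbitrary choices inside the stationarity region discussed below.

```python
import numpy as np

rng = np.random.default_rng(2)
phi1, phi2, sigma = 0.5, 0.3, 1.0
n = 200_000
a = rng.normal(0.0, sigma, size=n)
z = np.zeros(n)
for t in range(2, n):
    z[t] = phi1 * z[t - 1] + phi2 * z[t - 2] + a[t]   # zero-mean AR(2) recursion (44)

var_theory = (1 - phi2) * sigma**2 / ((1 + phi2) * (1 - phi1 - phi2) * (1 + phi1 - phi2))
print(z.var(), var_theory)   # sample variance vs. formula (49)
```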
B For the process to be stationary this variance must be positive, which will occur if the numerator and the denominator have the same sign. It can be proved that the values of the parameters that make AR(2) a stationary process are those included in the region:
−1 < φ2 < 1
φ1 + φ2 < 1
φ2 − φ1 < 1
[Figure: the triangular region of admissible values of (φ1, φ2) for the process to be stationary, bounded by the lines φ2 = 1 − φ1, φ2 = 1 + φ1 and φ2 = −1.]
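A small sketch checking whether a pair (φ1, φ2) lies in the stationarity triangle above, cross-checked against the moduli of the factors G1, G2; the function names and the test values are ad hoc.

```python
import numpy as np

def ar2_is_stationary(phi1, phi2):
    """Stationarity triangle: |phi2| < 1, phi1 + phi2 < 1, phi2 - phi1 < 1."""
    return abs(phi2) < 1 and phi1 + phi2 < 1 and phi2 - phi1 < 1

def ar2_factors(phi1, phi2):
    """Factors G1, G2 of 1 - phi1*B - phi2*B^2 = (1 - G1*B)(1 - G2*B)."""
    # G1 and G2 are the roots of x^2 - phi1*x - phi2 = 0
    return np.roots([1.0, -phi1, -phi2])

for phi1, phi2 in [(0.5, 0.3), (1.2, -0.32), (0.5, 0.6)]:
    G = ar2_factors(phi1, phi2)
    print(phi1, phi2, ar2_is_stationary(phi1, phi2), np.abs(G))
```

Both criteria agree: the triangle conditions hold exactly when both factor moduli are below one.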
B In this process it is important again to differentiate the marginal and conditional properties. Assuming that the conditions of stationarity are verified, the marginal mean is given by
µ = c / (1 − φ1 − φ2)
and the marginal variance is
σz² = (1 − φ2)σ² / [(1 + φ2)(1 − φ1 − φ2)(1 + φ1 − φ2)].
B Nevertheless, the conditional mean of zt given the previous values is:
E(zt|zt−1, zt−2) = c + φ1zt−1 + φ2zt−2
and its variance will be σ², the variance of the innovations, which is always less than the marginal variance of the process, σz².
Autocorrelation function
B Dividing by the variance in equation (48), we obtain the relationship between the autocorrelation coefficients:
ρk = φ1ρk−1 + φ2ρk−2, k ≥ 1. (50)
Specifying (50) for k = 1, since in a stationary process ρ1 = ρ−1, we obtain:
ρ1 = φ1 / (1 − φ2) (51)
and specifying (50) for k = 2 and using (51):
ρ2 = φ1² / (1 − φ2) + φ2. (52)
B For k ≥ 3 the autocorrelation coefficients can be obtained recursively starting from the difference equation (50). It can be proved that the general solution to this equation is:
ρk = A1G1^k + A2G2^k (53)
where G1 and G2 are the factors of the characteristic polynomial of the process and A1 and A2 are constants to be determined from the initial conditions ρ0 = 1 (which implies A1 + A2 = 1) and ρ1 = φ1/(1 − φ2).
B According to (53) the coefficients ρk will not exceed one in absolute value if |G1| < 1 and |G2| < 1, which are the conditions of stationarity of the process.
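A sketch comparing the recursion (50) with the closed form (53) for an AR(2); the coefficients are arbitrary, and the factors are obtained as the roots of x² − φ1x − φ2.

```python
import numpy as np

phi1, phi2 = 0.5, 0.3
K = 10

# Recursion (50) with initial values rho_0 = 1 and rho_1 = phi1 / (1 - phi2)
rho = np.empty(K + 1)
rho[0], rho[1] = 1.0, phi1 / (1 - phi2)
for k in range(2, K + 1):
    rho[k] = phi1 * rho[k - 1] + phi2 * rho[k - 2]

# Closed form (53): rho_k = A1*G1^k + A2*G2^k, with A1, A2 solved from rho_0, rho_1
G1, G2 = np.roots([1.0, -phi1, -phi2])
A = np.linalg.solve(np.array([[1.0, 1.0], [G1, G2]]), np.array([rho[0], rho[1]]))
rho_closed = A[0] * G1 ** np.arange(K + 1) + A[1] * G2 ** np.arange(K + 1)

print(np.round(rho, 4))
print(np.round(rho_closed.real, 4))   # matches the recursion
```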
B If the factors G1 and G2 are complex, of the type a ± bi, where i = √−1, then this condition is √(a² + b²) < 1. We may find ourselves in the following cases:
1. The two factors G1 and G2 are real. The decay in (53) is then the sum of two exponentials, and the shape of the autocorrelation function will depend on whether G1 and G2 have equal or opposite signs.
2. The two factors G1 and G2 are complex conjugates. It is proved in Appendix 4.1 that the function ρk will decrease sinusoidally.
B The four types of possible autocorrelation functions for an AR(2) are shown in the next figure.
Autocorrelation function - Examples
[Figure: sample ACFs (lags 1-20) of four AR(2) processes: real roots 0.75 and 0.5; complex roots 0.25 ± 0.5i; real roots −0.75 and 0.5; complex roots −0.25 ± 0.5i.]
Representation of an AR(2) process as a sum of innovations
B The AR(2) process can be represented, as with the AR(1), as a linear combination of the innovations. Writing (45) as
(1 − G1B)(1 − G2B)z̃t = at
and inverting these operators, we have
z̃t = (1 + G1B + G1²B² + ...)(1 + G2B + G2²B² + ...)at (54)
which leads to the MA(∞) expression of the process:
z̃t = at + ψ1at−1 + ψ2at−2 + ... (55)
B We can obtain the coefficients ψi as a function of the roots by equating powers of B in (54) and (55).
B We can also obtain the coefficients ψi as a function of the coefficients φ1 and φ2. Letting ψ(B) = 1 + ψ1B + ψ2B² + ..., since ψ(B) = (1 − φ1B − φ2B²)⁻¹, we have:
(1 − φ1B − φ2B²)(1 + ψ1B + ψ2B² + ...) = 1. (56)
B Imposing the restriction that all the coefficients of the positive powers of B in (56) are null, the coefficient of B in this equation is ψ1 − φ1, which implies ψ1 = φ1. The coefficient of B² is ψ2 − φ1ψ1 − φ2, which implies the equation:
ψk = φ1ψk−1 + φ2ψk−2 (57)
for k = 2, since ψ0 = 1. The coefficients of B^k for k ≥ 2 verify equation (57), which is similar to the one that the autocorrelation coefficients must verify.
B We conclude that the shape of the coefficients ψi will be similar to that of the autocorrelation coefficients.
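A sketch of the recursion (57) for the ψ weights; the function name is ad hoc, and the example coefficients anticipate Example 39 below.

```python
import numpy as np

def ar2_psi_weights(phi1, phi2, K):
    """psi_0 = 1, psi_1 = phi1, psi_k = phi1*psi_{k-1} + phi2*psi_{k-2} for k >= 2."""
    psi = np.empty(K + 1)
    psi[0], psi[1] = 1.0, phi1
    for k in range(2, K + 1):
        psi[k] = phi1 * psi[k - 1] + phi2 * psi[k - 2]
    return psi

print(np.round(ar2_psi_weights(1.2, -0.32, 6), 4))
# For phi1 = 1.2, phi2 = -0.32 the weights start 1, 1.2, 1.12, ... as in Example 39 below
```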
Representation of an AR(2) process as a sum of innovations - Examples
Example 38. The figures show the number of mink sighted yearly in an area of Canada and the ACF. The series shows a cyclical evolution that could be explained by an AR(2) with complex roots corresponding to the sinusoidal structure of the autocorrelation.
[Figure: yearly number of minks, Canada, and sample ACF (lags 1-20).]
Example 39. Write the autocorrelation function of the AR(2) process
zt = 1.2zt−1 − 0.32zt−2 + at
B The characteristic equation of that process is:
0.32X² − 1.2X + 1 = 0
whose solution is:
X = (1.2 ± √(1.2² − 4 × 0.32)) / 0.64 = (1.2 ± 0.4) / 0.64
B The solutions are G1⁻¹ = 2.5 and G2⁻¹ = 1.25, and the factors are G1 = 0.4 and G2 = 0.8.
B The characteristic equation can be written:
0.32X² − 1.2X + 1 = (1 − 0.4X)(1 − 0.8X).
Therefore, the process is stationary with real roots and the autocorrelation coefficients verify:
ρk = A1·0.4^k + A2·0.8^k.
B To determine A1 and A2 we impose the initial conditions ρ0 = 1 and ρ1 = 1.2/1.32 = 0.91. Then, for k = 0:
1 = A1 + A2
and for k = 1,
0.91 = 0.4A1 + 0.8A2.
Solving these equations we obtain A2 = 0.51/0.4 and A1 = −0.11/0.4.
B Therefore, the autocorrelation function is:
ρk = −(0.11/0.4)·0.4^k + (0.51/0.4)·0.8^k
which gives us the following table:
k:   0    1     2     3     4     5     6     7     8
ρk:  1    0.91  0.77  0.63  0.51  0.41  0.33  0.27  0.21
B To obtain the representation as a function of the innovations, writing
(1 − 0.4B)(1 − 0.8B)zt = at
and inverting both operators:
zt = (1 + 0.4B + 0.16B² + 0.064B³ + ...)(1 + 0.8B + 0.64B² + ...)at
yields:
zt = (1 + 1.2B + 1.12B² + ...)at.
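A quick numerical check of Example 39 (factors, ACF table and the first ψ weights), using only numpy.

```python
import numpy as np

phi1, phi2 = 1.2, -0.32

# Factors G1, G2: roots of x^2 - phi1*x - phi2 = 0
G1, G2 = np.sort(np.roots([1.0, -phi1, -phi2]))          # G1 = 0.4, G2 = 0.8
A = np.linalg.solve([[1.0, 1.0], [G1, G2]], [1.0, phi1 / (1 - phi2)])
# A is approximately (-0.11/0.4, 0.51/0.4)

k = np.arange(9)
rho = A[0] * G1**k + A[1] * G2**k
print(np.round(rho, 2))   # 1, 0.91, 0.77, 0.63, 0.51, 0.41, 0.33, 0.27, 0.21

# psi weights from the recursion psi_k = phi1*psi_{k-1} + phi2*psi_{k-2}
psi = [1.0, phi1]
for _ in range(2, 5):
    psi.append(phi1 * psi[-1] + phi2 * psi[-2])
print(np.round(psi, 3))   # 1, 1.2, 1.12, ...
```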
The general autoregressive process, AR(p)
B We say that a stationary time series zt follows an autoregressive process of order p if:
z̃t = φ1z̃t−1 + ... + φpz̃t−p + at (58)
where z̃t = zt − µ, with µ being the mean of the stationary process zt and at a white noise process.
B Using the operator notation, the equation of an AR(p) is:
(1 − φ1B − ... − φpB^p)z̃t = at (59)
and letting φp(B) = 1 − φ1B − ... − φpB^p be the polynomial of degree p in the lag operator, whose first term is unity, we have:
φp(B)z̃t = at (60)
which is the general expression of an autoregressive process.
B The characteristic equation of this process is defined by:
φp(B) = 0 (61)
considered as a function of B.
B This equation has p roots, G1⁻¹, ..., Gp⁻¹, which are generally different, and we can write:
φp(B) = Π_{i=1}^{p} (1 − GiB)
such that the coefficients Gi are the factors of the characteristic equation.
B It can be proved that the process is stationary if |Gi| < 1, for all i.
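A sketch for checking this condition numerically: the factors Gi are the roots of x^p − φ1x^{p−1} − ... − φp, i.e. the reciprocals of the roots of φp(B). The function names are ad hoc.

```python
import numpy as np

def ar_factors(phi):
    """Factors G_i of the AR polynomial 1 - phi_1*B - ... - phi_p*B^p."""
    phi = np.asarray(phi, dtype=float)
    # Roots of x^p - phi_1*x^(p-1) - ... - phi_p
    return np.roots(np.concatenate(([1.0], -phi)))

def is_stationary(phi):
    return bool(np.all(np.abs(ar_factors(phi)) < 1))

print(ar_factors([1.2, -0.32]), is_stationary([1.2, -0.32]))   # factors 0.8 and 0.4 -> stationary
```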
Autocorrelation function
B Operating with (58), we find that the autocorrelation coefficients of an AR(p) verify the following difference equation:
ρk = φ1ρk−1 + ... + φpρk−p, k > 0.
B In the previous sections we saw particular cases of this equation for p = 1 and p = 2. We can conclude that the autocorrelation coefficients satisfy the same equation as the process:
φp (B) ρk = 0 k > 0. (62)
B The general solution to this equation is:
ρk = Σ_{i=1}^{p} Ai Gi^k, (63)
where the Ai are constants to be determined from the initial conditions and the Gi are the factors of the characteristic equation.
B For the process to be stationary the moduli of the Gi must be less than one or, equivalently, the roots of the characteristic equation (61) must be greater than one in modulus.
B To prove this, we observe that the condition |ρk| < 1 requires that no Gi in (63) be greater than one in modulus, since in that case the term Gi^k would increase without limit as k increases.
B Furthermore, we observe that for the process to be stationary there cannot be a factor Gi equal to one in modulus, since then its component Gi^k would not decrease and the coefficients ρk would not tend to zero as the lag increases.
B Equation (63) shows that the autocorrelation function of an AR(p) process is a mixture of exponentials, due to the terms with real factors, and sinusoids, due to the complex conjugate ones. As a result, its structure can be very complex.
Yule-Walker equations
B Specifying equation (62) for k = 1, ..., p, a system of p equations is obtained that relates the first p autocorrelations to the parameters of the process. This is called the Yule-Walker system:
ρ1 = φ1 + φ2ρ1 + ... + φpρp−1
ρ2 = φ1ρ1 + φ2 + ... + φpρp−2
...
ρp = φ1ρp−1 + φ2ρp−2 + ... + φp
B Defining φ = (φ1, ..., φp)′, ρ = (ρ1, ..., ρp)′ and
R = [ 1      ρ1     ...  ρp−1
      ρ1     1      ...  ρp−2
      ...    ...    ...  ...
      ρp−1   ρp−2   ...  1   ]
the above system can be written in matrix form:
ρ = Rφ (64)
and the parameters can be determined using φ = R⁻¹ρ.
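A minimal sketch of (64) in numpy: build R from a vector of autocorrelations and solve for φ. The function name and the illustrative autocorrelation values are arbitrary.

```python
import numpy as np

def yule_walker(rho):
    """Solve rho = R*phi for phi, where rho = (rho_1, ..., rho_p)."""
    rho = np.asarray(rho, dtype=float)
    p = len(rho)
    acf = np.concatenate(([1.0], rho))   # acf[k] = rho_k, with rho_0 = 1
    idx = np.abs(np.arange(p)[:, None] - np.arange(p)[None, :])
    R = acf[idx]                         # R[i, j] = rho_|i - j|
    return np.linalg.solve(R, rho)

print(np.round(yule_walker([0.6, 0.4]), 4))   # hypothetical AR(2): phi ~= [0.5625, 0.0625]
```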
Yule-Walker equations - Example
Example 40. Obtain the parameters of an AR(3) process whose first autocorrelations are ρ1 = 0.9; ρ2 = 0.8; ρ3 = 0.5. Is the process stationary?
B The Yule-Walker equation system is:
( 0.9 )   ( 1    0.9  0.8 ) ( φ1 )
( 0.8 ) = ( 0.9  1    0.9 ) ( φ2 )
( 0.5 )   ( 0.8  0.9  1   ) ( φ3 )
whose solution is:
( φ1 )   (  5.28  −5    0.28 ) ( 0.9 )   (  0.89 )
( φ2 ) = ( −5     10   −5    ) ( 0.8 ) = (  1.00 )
( φ3 )   (  0.28  −5    5.28 ) ( 0.5 )   ( −1.11 )
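A numerical check of this solution, together with the stationarity question, computed with numpy: solve the Yule-Walker system and examine the moduli of the factors of the fitted AR(3) polynomial.

```python
import numpy as np

rho = np.array([0.9, 0.8, 0.5])
R = np.array([[1.0, 0.9, 0.8],
              [0.9, 1.0, 0.9],
              [0.8, 0.9, 1.0]])
phi = np.linalg.solve(R, rho)
print(np.round(phi, 2))                   # [ 0.89  1.   -1.11]

# Factors G_i: roots of x^3 - phi_1*x^2 - phi_2*x - phi_3
G = np.roots(np.concatenate(([1.0], -phi)))
print(np.abs(G), np.all(np.abs(G) < 1))   # stationary only if all factor moduli are below one
```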
B As a result, the AR(3) process with these correlations is: