
Chapter 3 Autoregressive and Moving-Average Models

3.1 Introduction

Let y be a random variable. We consider the elements of an observed series {y_0, y_1, y_2, ..., y_t} as being realizations of this random variable. We also define a white-noise process. A sequence {ε_t} is a white-noise process if each value of the sequence has mean zero, has a constant variance, and is uncorrelated with all other realizations. Formally, {ε_t} is a white-noise process if, for each t,

E(ε_t) = E(ε_{t−1}) = ··· = 0    (3.1)
E(ε_t²) = E(ε_{t−1}²) = ··· = σ²
E(ε_t ε_{t−s}) = E(ε_{t−j} ε_{t−j−s}) = 0  for all j and s

For the rest of these notes, {εt } will always denote a white-noise process. Figure 3.1 illustrates a white-noise process generated in Stata with the following code:

clear set obs 150 set seed 1000 gen time=_n tsset time gen white=invnorm(uniform()) twoway line white time, m(o) c(l) scheme(sj) /// ytitle( "white-noise" ) /// title( "White-Noise Process" )

3.2 Stationarity

A series {y_t} is said to be covariance-stationary if it has a finite mean and variance that do not depend on time. That is, for all t, j, and t − s,

E(y_t) = E(y_{t−s}) = µ    (3.2)



Fig. 3.1 White-Noise Process, {εt }

E[(y_t − µ)²] = E[(y_{t−s} − µ)²] = σ_y²    (3.3)
E[(y_t − µ)(y_{t−s} − µ)] = E[(y_{t−j} − µ)(y_{t−j−s} − µ)] = γ_s    (3.4)

where µ, σ_y², and γ_s are all constants. For a covariance-stationary series, we can define the autocorrelation between y_t and y_{t−s} as

ρ_s = γ_s / γ_0    (3.5)

where both γ_s and γ_0 are defined in Equation 3.4. Obviously, ρ_0 = 1.
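As a preview of the next chapter's tools (a sketch, continuing the white-noise example above; the choice of ten lags is arbitrary), Stata's corrgram and ac commands report and plot sample autocorrelations. For a white-noise series, all autocorrelations beyond lag 0 should be close to zero:

* Sample autocorrelations of the white-noise series
corrgram white, lags(10)
ac white, lags(10)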

3.3 The Moving-Average Processes

3.3.1 The MA(1) Process

The first-order moving average process, or MA(1) process, is

y_t = ε_t + θ ε_{t−1} = (1 + θL) ε_t    (3.6)

The MA(1) process expresses an observed series as a function of the current and lagged unobserved shocks. To develop an intuition of the behavior of the MA(1) process, we show the following three simulated realizations:

y_{1t} = ε_t + 0.08 ε_{t−1} = (1 + 0.08L) ε_t    (3.7)
y_{2t} = ε_t + 0.98 ε_{t−1} = (1 + 0.98L) ε_t
y_{3t} = ε_t − 0.98 ε_{t−1} = (1 − 0.98L) ε_t

In the first two variables (y1 and y2), past shocks feed positively into the current value of the series, with a small weight of θ = 0.08 in the first case and a large weight of θ = 0.98 in the second case. While one may think that the second case would produce a more persistent series, it does not. The structure of the MA(1) process, in which only the first lag of the shock appears on the right, forces it to have a very short memory, and hence weak dynamics. Figure 3.2 illustrates the generated series y1t and y2t. This figure shows the weak dynamics of MA(1) processes. It also shows that the y2 series is more volatile than y1. Following the previous Stata code, we can generate Figure 3.2 with:

gen Y1 = white + 0.08*l.white
gen Y2 = white + 0.98*l.white
twoway (line Y1 Y2 time, clcolor(blue red)), scheme(sj) ///
    ytitle("Y1 and Y2") ///
    title("Two MA(1) Processes")

It is easy to see that the unconditional mean and variance of an MA(1) process are

E(y_t) = E(ε_t) + θ E(ε_{t−1})    (3.8)
       = 0

and

var(y_t) = var(ε_t) + θ² var(ε_{t−1})    (3.9)
         = σ² + θ²σ²
         = σ²(1 + θ²)

Notice that, for a given σ², as θ increases in absolute value, the unconditional variance increases as well. This explains why y2 is more volatile than y1.
The conditional mean and variance of an MA(1), where the conditioning information set is Ω_{t−1} = {ε_{t−1}, ε_{t−2}, ...}, are

E(y_t | Ω_{t−1}) = E(ε_t + θ ε_{t−1} | Ω_{t−1})    (3.10)
                 = E(ε_t | Ω_{t−1}) + E(θ ε_{t−1} | Ω_{t−1})
                 = θ ε_{t−1}

and

var(y_t | Ω_{t−1}) = E[(y_t − E(y_t | Ω_{t−1}))² | Ω_{t−1}]    (3.11)
                   = E(ε_t² | Ω_{t−1})
                   = E(ε_t²)
                   = σ²

Fig. 3.2 Two MA(1) Processes

The conditional mean explicitly adapts to the information set, in contrast to the unconditional mean, which is constant. We will study the y1, y2, and y3 series further once we learn about the autocorrelation and partial autocorrelation functions.
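As a rough check of Equations 3.8 and 3.9 (a sketch, assuming the Y1 and Y2 series generated above are still in memory), recall that the simulated shocks have σ² = 1, so the theoretical variances are 1 + 0.08² ≈ 1.01 and 1 + 0.98² ≈ 1.96:

* summarize reports means and standard deviations; the means should be
* near zero, and the squared standard deviations near sigma^2*(1 + theta^2)
summarize Y1 Y2
display "theoretical var(Y1) = " 1 + 0.08^2
display "theoretical var(Y2) = " 1 + 0.98^2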

3.3.2 The MA(q) Process

The general finite-order moving average process of order q, or MA(q), is expressed as

y_t = ε_t + θ_1 ε_{t−1} + θ_2 ε_{t−2} + ··· + θ_q ε_{t−q} = B(L) ε_t    (3.12)

where, as you know,

B(L) = 1 + θ_1 L + θ_2 L² + ··· + θ_q L^q

is a qth-order lag operator polynomial. The MA(q) process is a natural generalization of the MA(1). By allowing for more lags of the shocks on the right-hand side of the equation, the MA(q) process can capture richer dynamic patterns.
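As an illustration of the general form (a sketch with arbitrarily chosen coefficients θ1 = 0.6 and θ2 = 0.3, reusing the white-noise series generated earlier), an MA(2) can be simulated and plotted with:

* Simulate an MA(2): y_t = e_t + 0.6*e_(t-1) + 0.3*e_(t-2)
gen Y4 = white + 0.6*l.white + 0.3*l2.white
twoway line Y4 time, scheme(sj) ///
    ytitle("Y4") ///
    title("An MA(2) Process")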

3.4 The Autoregressive Processes

3.4.1 The AR(1) Process

The autoregressive process has a simple motivation: it is simply a stochastic difference equation in which the current value of a series is linearly related to its past values, plus an additive stochastic shock. The first-order autoregressive process, AR(1), is

y_t = ϕ y_{t−1} + ε_t    (3.13)

(1 − ϕL)yt = εt (3.14)

To illustrate the dynamics of different AR(1) processes, we simulate realizations of the following four AR(1) processes:

z_{1t} = +0.9 · z_{1,t−1} + ε_t    (3.15)
z_{2t} = +0.2 · z_{2,t−1} + ε_t
z_{3t} = −0.9 · z_{3,t−1} + ε_t

z_{4t} = −0.2 · z_{4,t−1} + ε_t

where we keep the innovation sequence {ε_t} the same in each case. Figure 3.3 illustrates the time series graph of the z_{1t} and z_{2t} series, while Figure 3.4 illustrates z_{3t} and z_{4t}. These two figures were obtained using:

gen Z1 = 0
gen Z2 = 0
gen Z3 = 0
gen Z4 = 0
replace Z1 = +0.9*l.Z1 + white if time > 1
replace Z2 = +0.2*l.Z2 + white if time > 1
replace Z3 = -0.9*l.Z3 + white if time > 1
replace Z4 = -0.2*l.Z4 + white if time > 1
twoway (line Z1 Z2 time, clcolor(blue red)), scheme(sj) ///
    ytitle("Z1 and Z2") ///
    title("Two AR(1) Processes")
twoway (line Z3 Z4 time, clcolor(blue red)), scheme(sj) ///
    ytitle("Z3 and Z4") ///
    title("Two AR(1) Processes")

From the first figure we can see that the fluctuations in the AR(1) with parameter ϕ = 0.9 appear much more persistent than those of the AR(1) with parameter ϕ = 0.2. This contrasts sharply with the MA(1) process, which has a very short memory regardless of the parameter value.

Two AR(1) Processes 4 2 0 -2 Z1 and Z2 -4 -6 0 50 100 150 time

Z1 Z2

Fig. 3.3 Two AR(1) Processes regardless of the parameter value. Hence, the AR(1) model is capable of capturing much more persistent dynamics. Figure 3.4 shows that the sign is also critical in the dynamic of an AR(1) process. With a positive ϕ, a positive value is most likely followed by another positive value. However, with a negative ϕ, the series quickly changes from positive to negative and vice versa. Finally, when ϕ is negative, the there is a larger dispersion the larger its value. Let’s begin with a simple AR(1) process. Then, if we substitute backwards for the lagged y’s on the right hand side and use the lag operator we can write,

y_t = ϕ y_{t−1} + ε_t    (3.16)
    = ϕ(ϕ y_{t−2} + ε_{t−1}) + ε_t = ϕ² y_{t−2} + ϕ ε_{t−1} + ε_t
    = ε_t + ϕ ε_{t−1} + ϕ² ε_{t−2} + ···
    = (1/(1 − ϕL)) ε_t

which is the moving average representation of y. It is convergent if and only if |ϕ| < 1, which is the covariance stationarity condition for the AR(1) case.
From the moving average representation of the covariance-stationary AR(1) process, we can obtain the unconditional mean and variance,

Fig. 3.4 Two AR(1) Processes

E(y_t) = E(ε_t + ϕ ε_{t−1} + ϕ² ε_{t−2} + ···)    (3.17)
       = E(ε_t) + ϕ E(ε_{t−1}) + ϕ² E(ε_{t−2}) + ···
       = 0

and

var(y_t) = var(ε_t + ϕ ε_{t−1} + ϕ² ε_{t−2} + ···)    (3.18)
         = σ² + ϕ²σ² + ϕ⁴σ² + ···
         = σ² ∑_{i=0}^{∞} ϕ^{2i}
         = σ² / (1 − ϕ²)

The conditional moments are

E(y_t | y_{t−1}) = E(ϕ y_{t−1} + ε_t | y_{t−1})    (3.19)
                 = ϕ E(y_{t−1} | y_{t−1}) + E(ε_t | y_{t−1})
                 = ϕ y_{t−1} + 0
                 = ϕ y_{t−1}

and

var(y_t | y_{t−1}) = var(ϕ y_{t−1} + ε_t | y_{t−1})    (3.20)
                   = ϕ² var(y_{t−1} | y_{t−1}) + var(ε_t | y_{t−1})
                   = 0 + σ²
                   = σ²

It is important to note how the conditional mean adapts to the changing information set as the process evolves.
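As a rough numerical check of Equation 3.18 (a sketch, assuming the Z1 and Z2 series generated above are still in memory), with σ² = 1 the theoretical unconditional variances are 1/(1 − 0.9²) ≈ 5.26 for z1 and 1/(1 − 0.2²) ≈ 1.04 for z2. With only 150 observations, a start-up value of zero, and a very persistent ϕ = 0.9, the sample figures will only be in the right neighborhood:

* summarize reports standard deviations; square them to compare
* with the theoretical variances sigma^2/(1 - phi^2)
summarize Z1 Z2
display "theoretical var(Z1) = " 1/(1 - 0.9^2)
display "theoretical var(Z2) = " 1/(1 - 0.2^2)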

3.4.2 The AR(p) Process

The general pth-order autoregressive process, AR(p), is

y_t = ϕ_1 y_{t−1} + ϕ_2 y_{t−2} + ··· + ϕ_p y_{t−p} + ε_t

Written using the lag operator, we have

A(L) y_t = (1 − ϕ_1 L − ϕ_2 L² − ··· − ϕ_p L^p) y_t = ε_t

The AR(p) process is covariance stationary if and only if the inverses of all roots of the autoregressive lag operator polynomial A(L) are inside the unit circle. A necessary condition for covariance stationarity is ∑_{i=1}^{p} ϕ_i < 1. If this condition is satisfied, the process may or may not be stationary, but if the condition is violated, the process cannot be stationary. In the covariance-stationary case, we can write the process in the convergent infinite moving average form

y_t = (1/A(L)) ε_t
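In practice, this roots condition can be checked after estimation. As a sketch (assuming the simulated Z1 series from above is still in memory and a Stata version in which estat aroots is available after arima), one could fit an AR(2) and verify that the inverse roots of A(L) lie inside the unit circle:

* Fit an AR(2) to Z1, then check that the inverse AR roots
* lie inside the unit circle (covariance stationarity)
arima Z1, ar(1/2)
estat aroots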

3.5 Autoregressive Moving Average (ARMA) Models

The random shock that drives an autoregressive process may itself be a moving-average process; in that case, the most appropriate model may be an ARMA process. An ARMA process is just the combination of an AR and an MA process. A (p, q) autoregressive moving-average process is usually written as ARMA(p, q). The simplest ARMA process is an ARMA(1,1), given by

y_t = ϕ y_{t−1} + ε_t + θ ε_{t−1}

or, in lag operator form,

(1 − ϕL) y_t = (1 + θL) ε_t

where |ϕ| < 1 is required for stationarity and |θ| < 1 for invertibility. If the covariance stationarity condition is satisfied, then we have the moving average representation

y_t = ((1 + θL)/(1 − ϕL)) ε_t

which is an infinite distributed lag of current and past innovations. Likewise, if the invertibility condition is satisfied, we have the autoregressive representation

((1 − ϕL)/(1 + θL)) y_t = ε_t

The ARMA(p, q) process is given by

y_t = ϕ_1 y_{t−1} + ··· + ϕ_p y_{t−p} + ε_t + θ_1 ε_{t−1} + ··· + θ_q ε_{t−q}

or, in its lag operator form,¹

A(L)yt = B(L)εt

If the inverses of all roots of A(L) are inside the unit circle, then the process is covariance stationary and has the following convergent infinite moving average representation

y_t = (B(L)/A(L)) ε_t.

If the inverses of all roots of B(L) are inside the unit circle, then the process is invertible and has the following convergent infinite autoregressive representation

(A(L)/B(L)) y_t = ε_t

As with autoregressions and moving averages, ARMA processes have a fixed unconditional mean but a time-varying conditional mean.
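To see these pieces combined, the following sketch simulates a hypothetical ARMA(1,1) in the spirit of the earlier exercises, with ϕ = 0.9 and θ = 0.5 chosen arbitrarily and the same white-noise shocks reused:

* Simulate an ARMA(1,1): y_t = 0.9*y_(t-1) + e_t + 0.5*e_(t-1)
gen W1 = 0
replace W1 = 0.9*l.W1 + white + 0.5*l.white if time > 1
twoway line W1 time, scheme(sj) ///
    ytitle("W1") ///
    title("An ARMA(1,1) Process")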

¹ Where

A(L) = 1 − ϕ_1 L − ϕ_2 L² − ··· − ϕ_p L^p

and

B(L) = 1 + θ_1 L + θ_2 L² + ··· + θ_q L^q.

3.6 ARMA Models in Stata

The mechanics of estimating ARMA models in Stata are simple. However, the key to the estimation is model selection, which will be covered in detail in the next chapter once we cover the autocorrelation and partial autocorrelation functions. For now, let's consider the following example from Enders (2004), pages 87-93, which is also considered in the Stata Manual. Let y denote the first difference of the logarithm of the U.S. Wholesale Price Index (WPI). We have quarterly data over the period 1960q1 through 1990q4. The Stata command is given by

use http://www.stata-press.com/data/r11/wpi1
arima D.ln_wpi, ar(1) ma(1)

(setting optimization to BHHH)
Iteration 0:   log likelihood =  378.88646
Iteration 4:   log likelihood =  382.41728
(switching optimization to BFGS)
Iteration 5:   log likelihood =  382.42198
Iteration 8:   log likelihood =  382.42714

ARIMA regression

Sample:  1960q2 - 1990q4                        Number of obs      =       123
                                                Wald chi2(2)       =    509.04
Log likelihood =  382.4271                      Prob > chi2        =    0.0000

------------------------------------------------------------------------------
             |                 OPG
    D.ln_wpi |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
ln_wpi       |
       _cons |   .0108226   .0054612     1.98   0.048     .0001189    .0215263
-------------+----------------------------------------------------------------
ARMA         |
          ar |
         L1. |   .8832466   .0428881    20.59   0.000     .7991874    .9673058
             |
          ma |
         L1. |  -.4771587   .0920432    -5.18   0.000      -.65756   -.2967573
-------------+----------------------------------------------------------------
      /sigma |   .0107717   .0004533    23.76   0.000     .0098832    .0116601
------------------------------------------------------------------------------

Either of the following two commands will yield the same output:

arima D.ln_wpi, arima(1,0,1)
arima ln_wpi, arima(1,1,1)

Moreover, an ARMA(1,4) model can be estimated using

arima D.ln_wpi, ar(1) ma(1/4)

The capital letter I in ARIMA denotes the order of integration, which will be covered in detail in Chapter 6. For now we assume that the first difference of the logarithm of the WPI is integrated of order zero; hence the ARIMA model is just an ARMA model. Finally, you can try estimating the MA(1) and AR(1) processes simulated in Equations 3.7 and 3.15.
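As a sketch of that last exercise (assuming the simulated series Y2 and Z1 from the earlier code are still in memory), the following commands fit an MA(1) to Y2 and an AR(1) to Z1; with only 150 observations the estimates should be roughly, but not exactly, θ = 0.98 and ϕ = 0.9:

* Fit an MA(1) to the simulated series Y2 (true theta = 0.98)
arima Y2, ma(1)
* Fit an AR(1) to the simulated series Z1 (true phi = 0.9)
arima Z1, ar(1)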