
4.2 Autoregressive (AR)

The MA models are causal linear processes by definition. There is another class of models, based on a recursive formulation similar to the exponentially weighted moving average.

Definition 4.11 (Autoregressive AR(p)). Suppose φ_1, ..., φ_p ∈ R are constants and (W_i) ∼ WN(σ²). The AR(p) process with parameters σ², φ_1, ..., φ_p is defined through

    X_i = W_i + ∑_{j=1}^p φ_j X_{i−j},    (3)

whenever such a stationary process (X_i) exists.

Remark 4.12. The process in Definition 4.11 is sometimes called a stationary AR(p) process. It is possible to consider a 'non-stationary AR(p) process' for any φ_1, ..., φ_p satisfying (3) for i ≥ 0 by letting, for example, X_i = 0 for i ∈ [−p+1, 0].

Example 4.13 (Variance and autocorrelation of the AR(1) process). For the AR(1) process, whenever it exists, we must have

    γ_0 = Var(X_i) = Var(φ_1 X_{i−1} + W_i) = φ_1² γ_0 + σ²,

which implies that we must have |φ_1| < 1, and

    γ_0 = σ² / (1 − φ_1²).

We may also calculate for j ≥ 1

    γ_j = E[X_i X_{i−j}] = E[(φ_1 X_{i−1} + W_i) X_{i−j}] = φ_1 E[X_{i−1} X_{i−j}] = φ_1 γ_{j−1} = ⋯ = φ_1^j γ_0,

which gives that ρ_j = φ_1^j.

Example 4.14. Simulation of an AR(1) process.

    phi_1 <- 0.7
    x <- arima.sim(model = list(ar = phi_1), 140)
    # This is the explicit simulation:
    gamma_0 <- 1/(1 - phi_1^2)
    x_0 <- rnorm(1) * sqrt(gamma_0)
    x <- filter(rnorm(140), phi_1, method = "r", init = x_0)
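To check the formula ρ_j = φ_1^j numerically, one can compare the sample autocorrelations of a long simulated path with the theoretical values. A minimal sketch (the path length 10000 and the maximum lag 10 are arbitrary choices, not from the notes):

    phi_1 <- 0.7
    x_long <- arima.sim(model = list(ar = phi_1), 10000)        # long path to reduce sampling noise
    rho_hat <- acf(x_long, lag.max = 10, plot = FALSE)$acf[-1]  # sample ACF at lags 1..10
    rho_theory <- phi_1^(1:10)                                  # theoretical rho_j = phi_1^j
    round(cbind(rho_hat, rho_theory), 3)

The two columns should agree up to sampling error.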

Example 4.15. Consider a stationary AR(1) process. We may write

    X_i = φ_1 X_{i−1} + W_i = ⋯ = φ_1^n X_{i−n} + ∑_{j=0}^{n−1} φ_1^j W_{i−j}.

Figure 23: Simulation of AR(1) process in Example 4.14.

Figure 24: Autocorrelations of AR(1) with different parameters (panels: phi_1 = 0.5, 0.9, −0.7, −0.9).

Define the causal linear process Y_i = ∑_{j=0}^∞ φ_1^j W_{i−j}; then we may write (detailed proof not examinable)

    (E|X_i − Y_i|²)^{1/2} = (E|φ_1^n X_{i−n} − ∑_{j=n}^∞ φ_1^j W_{i−j}|²)^{1/2}
                          ≤ |φ_1|^n (E X_{i−n}²)^{1/2} + ∑_{j=n}^∞ |φ_1|^j (E W_{i−j}²)^{1/2}
                          = |φ_1|^n (σ_X + σ/(1 − |φ_1|)) → 0 as n → ∞,

where σ_X² = E X_1². This implies X_i = Y_i (almost surely). We may write the autoregressive process also in terms of the backshift operator, as

    X_i − ∑_{j=1}^p φ_j B^j X_i = W_i,    (4)

or φ(B) X_i = W_i, where

Definition 4.16 (Characteristic polynomial of AR(p)).

    φ(z) := 1 − ∑_{j=1}^p φ_j z^j.

Remark 4.17. Note the minus sign in the AR polynomial, contrary to the plus in the MA polynomial. In some contexts (esp. ), the AR coefficients are defined as φ̃_i = −φ_i, so that the AR polynomial looks exactly like the MA polynomial.

Theorem 4.18. The (stationary) AR(p) process exists and can be written as a causal linear process if and only if

    φ(z) ≠ 0 for all z ∈ ℂ with |z| ≤ 1,

that is, the roots of the complex polynomial φ(z) lie strictly outside the unit disc. For a full proof, see for example Theorem 3.1.1 of Brockwell and Davis. However, to get the idea, we may write informally

    X_i = φ(B)^{−1} W_i,

and we may write the reciprocal of the characteristic polynomial as

    1/φ(z) = ∑_{j=0}^∞ c_j z^j,    for |z| ≤ 1 + ε,

This means that we may write the AR(p) as a causal linear process

    X_i = ∑_{j=0}^∞ c_j W_{i−j},

where the coefficients satisfy |c_j| ≤ K(1 + ε/2)^{−j}.^{10}
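As a quick numerical check of the root condition in Theorem 4.18, the roots of φ(z) can be computed with R's polyroot; a small sketch using the AR coefficients φ_1 = 0.3, φ_2 = −0.4 that also appear later in Example 4.24:

    phi <- c(0.3, -0.4)
    roots <- polyroot(c(1, -phi))   # coefficients of phi(z) = 1 - 0.3 z + 0.4 z^2, constant term first
    Mod(roots)                      # both moduli are about 1.58 > 1, so the AR(2) is causal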

Remark 4.19. This justifies viewing AR(p) as an 'MA(∞)' with coefficients (c_j)_{j≥1}. This also implies that we may approximate AR(p) with 'arbitrary precision' by MA(q) with large enough q.
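A hedged illustration of Remark 4.19: the stats function ARMAtoMA expands an ARMA model into its MA(∞) weights, so truncating after q terms gives the approximating MA(q). The AR(1) with φ_1 = 0.7 below is just an example:

    c_j <- ARMAtoMA(ar = 0.7, lag.max = 20)  # MA(infinity) weights c_1, ..., c_20 (c_0 = 1)
    c_j - 0.7^(1:20)                         # numerically zero: for AR(1), c_j = phi_1^j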

4.3 Invertibility of MA

Example 4.20. Let θ_1 ∈ (0, 1) and σ² > 0 be some parameters, and consider two MA(1) models,

    X_i = W_i + θ_1 W_{i−1},      (W_n) i.i.d. ∼ N(0, σ²),
    X̃_i = W̃_i + θ̃_1 W̃_{i−1},      (W̃_n) i.i.d. ∼ N(0, σ̃²),

where θ̃_1 = 1/θ_1 and σ̃² = σ² θ_1². We have

    γ_0 = σ²(1 + θ_1²),    γ_1 = σ² θ_1,
    γ̃_0 = σ̃²(1 + θ̃_1²),    γ̃_1 = σ̃² θ̃_1.

What do you observe?

It turns out that the following invertibility condition resolves the MA(q) identifiability problem, and therefore it is standard that the roots of the characteristic polynomial are assumed to lie outside the unit disc.

Theorem 4.21. If the roots of the characteristic polynomial of MA(q) are strictly outside the unit circle, the MA(q) is invertible in the sense that it satisfies

    W_i = ∑_{j=0}^∞ β_j X_{i−j},

where the constants satisfy β_0 = 1 and |β_j| ≤ K(1 + ε)^{−j} for some constants K < ∞ and ε > 0.

As with Theorem 4.18, we may write symbolically, from X_i = θ(B) W_i, that

    W_i = (1/θ(B)) X_i = ∑_{j=0}^∞ β_j X_{i−j},

where the constants β_j are uniquely determined by 1/θ(z) = ∑_{j=0}^∞ β_j z^j, as the roots of θ(z) lie outside the unit disc.
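Returning to Example 4.20: both parameterizations have exactly the same autocorrelation function (and, with the variance adjustment σ̃² = σ² θ_1², the same autocovariances), which is the identifiability problem that the invertibility condition rules out. A small check in R (the value θ_1 = 0.5 is an arbitrary illustration):

    theta_1 <- 0.5
    ARMAacf(ma = theta_1, lag.max = 3)     # ACF of X_i = W_i + 0.5 W_{i-1}
    ARMAacf(ma = 1/theta_1, lag.max = 3)   # ACF of the non-invertible twin: identical output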

10. Because c_j (1 + ε/2)^j → 0 as j → ∞.

4.4 Autoregressive moving average (ARMA)

Definition 4.22 (Autoregressive moving average ARMA(p,q) process). Suppose φ_1, ..., φ_p ∈ R are coefficients of a (stationary) AR(p) process and θ_1, ..., θ_q ∈ R, and (W_i) ∼ WN(σ²). The (stationary) ARMA(p,q) process with these parameters is a process satisfying

    X_i = ∑_{j=1}^p φ_j X_{i−j} + ∑_{j=0}^q θ_j W_{i−j},    (5)

with the convention θ_0 = 1 and where the first sum vanishes if p = 0.

Remark 4.23. AR(p) is ARMA(p,0) and MA(q) is ARMA(0,q). We may write ARMA(p,q) briefly with the characteristic polynomials of the AR and MA parts and the backshift operator as

    φ(B) X_i = θ(B) W_i.

Simulating a general ARMA(p,q) model exactly is not straightforward, but we can simulate it approximately by setting X_{−p+1} = ⋯ = X_0 = 0 (say) and then following (5). Then X_b, X_{b+1}, ..., X_{b+n} is an approximate sample of a stationary ARMA(p,q) if b is 'large enough'. This is what the R function arima.sim does; the parameter n.start is the b above.

Example 4.24. Simulation of an ARMA(2,1) model with φ_1 = 0.3, φ_2 = −0.4, θ_1 = −0.8.

    x <- arima.sim(list(ma = c(-0.8), ar = c(.3, -.4)), 140, n.start = 1e5)

This is the same as

    q <- 2; n <- 140; n.start <- 1e5
    z <- filter(rnorm(n.start + n), c(1, -0.8), sides = 1)
    z <- tail(z, n.start + n - q)
    x <- tail(filter(z, c(.3, -.4), method = "r"), n)

(The latter may sometimes be necessary, because arima.sim checks the stability of the AR part by calculating the roots of φ(z) numerically, which is notoriously unstable if the order of φ is large. Sometimes arima.sim refuses to simulate a stable ARMA...)

Remark 4.25. If the characteristic polynomials θ(z) and φ(z) of an ARMA(p,q) share a (complex) root, say x_1 = y_1, then

    θ(z)/φ(z) = [(z − x_1)(z − x_2) ⋯ (z − x_q)] / [(z − y_1)(z − y_2) ⋯ (z − y_p)] = θ̃(z)/φ̃(z),

where θ̃(z) = (z − x_2) ⋯ (z − x_q) is of order q − 1 and φ̃(z) = (z − y_2) ⋯ (z − y_p) is of order p − 1, which means that the model reduces to ARMA(p − 1, q − 1), and it turns out that φ̃(B) X_i = θ̃(B) W_i.

Figure 25: Simulation of ARMA(2,1) in Example 10.6.

In what follows, we shall assume the following:

Condition 4.26 (Regularity conditions for ARMA).

(a) The roots of the AR characteristic polynomial are strictly outside the unit disc (cf. Theorem 4.18).

(b) The roots of the MA characteristic polynomial are strictly outside the unit disc (cf. Theorem 4.21).

(c) The AR and MA characteristic polynomials do not have common roots (cf. Remark 4.25).

Theorem 4.27. A stationary ARMA(p,q) model satisfying Condition 4.26 exists, is invertible and can be written as a causal linear process

    X_i = ∑_{j=0}^∞ ξ_j W_{i−j},        W_i = ∑_{j=0}^∞ β_j X_{i−j},

where the constants ξ_j and β_j satisfy

    ∑_{j=0}^∞ ξ_j z^j = θ(z)/φ(z)    and    ∑_{j=0}^∞ β_j z^j = φ(z)/θ(z).

In addition, β_0 = 1 and there exist constants K < ∞ and ε > 0 such that max{|ξ_j|, |β_j|} ≤ K(1 + ε)^{−j} for all j ≥ 0.

Remark 4.28. In fact, the coefficients ξ_j (or β_j) of any ARMA(p,q) can easily be calculated numerically from the parameters. Also the autocovariances can be calculated numerically up to any lag in a straightforward way; cf. Brockwell and Davis p. 91–95. In R, the autocorrelation coefficients can be calculated with ARMAacf.
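For example, for the ARMA(2,1) of Example 4.24 the weights ξ_j and the autocorrelations can be obtained as follows (a sketch; the sign/role swap used for β_j is our own reformulation of the expansion of φ(z)/θ(z), not something stated in the notes):

    ar <- c(0.3, -0.4); ma <- -0.8
    xi   <- ARMAtoMA(ar = ar, ma = ma, lag.max = 10)  # xi_1, ..., xi_10 (xi_0 = 1)
    rho  <- ARMAacf(ar = ar, ma = ma, lag.max = 10)   # rho_0, ..., rho_10
    # beta_j from phi(z)/theta(z): swap the AR and MA roles and flip the signs
    beta <- ARMAtoMA(ar = -ma, ma = -ar, lag.max = 10)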

4.5 Integrated models

Autoregressive moving average models are fairly flexible models for stationary series. However, in many practical situations, it might be more useful to consider the differenced series (Definition 2.10). This brings us to the following general notion.

Definition 4.29 (Difference operator). Suppose (X_i) is a stochastic process. Its d:th order difference process is defined as (∇^d X_i), where the d:th order difference operator may be written in terms of the backshift operator as ∇^d = (1 − B)^d for d ≥ 1.

Definition 4.30 (Autoregressive integrated moving average ARIMA(p,d,q) process). If the d:th difference (∇^d X_i) of the process follows ARMA(p,q), then we say (X_i) is ARIMA(p,d,q).

Remark 4.31. Suppose that (∇^d X_i) is a stationary ARMA(p,q).

(i) The ARIMA(p,d,q) process (X_i) is not unique (why?).

(ii) The ARIMA(p,d,q) process (X_i) is not, in general, stationary. The process (X_i) (or the data x_1, ..., x_n) is said to be difference stationary.

Example 4.32. The simple random walk

    X_i = X_{i−1} + W_i

is an ARIMA(0,1,0).
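In R, one way to simulate a difference-stationary series is to integrate a simulated ARMA with cumsum (arima.sim also accepts an order component with d ≥ 1); a minimal sketch with illustrative parameters:

    # ARIMA(1,1,0): integrate a stationary AR(1) with phi_1 = 0.5 (illustrative)
    w <- arima.sim(model = list(ar = 0.5), 140)
    x <- cumsum(w)       # ARIMA(1,1,0) path; not stationary
    d1 <- diff(x)        # differencing recovers the stationary AR(1) values (from index 2 on)
    # ARIMA(0,1,0) random walk of Example 4.32
    rw <- cumsum(rnorm(140))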
