4.2 Autoregressive (AR)

Moving average models are causal linear processes by definition. There is another class of models, based on a recursive formulation similar to the exponentially weighted moving average.

Definition 4.11 (Autoregressive AR(p)). Suppose φ_1, ..., φ_p ∈ ℝ are constants and (W_i) ∼ WN(σ²). The AR(p) process with parameters σ², φ_1, ..., φ_p is defined through

    X_i = W_i + ∑_{j=1}^p φ_j X_{i−j},    (3)

whenever such a stationary process (X_i) exists.

Remark 4.12. The process in Definition 4.11 is sometimes called a stationary AR(p) process. It is possible to consider a 'non-stationary AR(p) process' for any φ_1, ..., φ_p satisfying (3) for i ≥ 0 by letting, for example, X_i = 0 for i ∈ [−p+1, 0].

Example 4.13 (Variance and autocorrelation of AR(1) process). For the AR(1) process, whenever it exists, we must have

    γ_0 = Var(X_i) = Var(φ_1 X_{i−1} + W_i) = φ_1² γ_0 + σ²,

which implies that we must have |φ_1| < 1, and

    γ_0 = σ² / (1 − φ_1²).

We may also calculate, for j ≥ 1,

    γ_j = E[X_i X_{i−j}] = E[(φ_1 X_{i−1} + W_i) X_{i−j}] = φ_1 E[X_{i−1} X_{i−j}] = φ_1 γ_{j−1},

and iterating, γ_j = φ_1^j γ_0, which gives that ρ_j = φ_1^j.

Example 4.14. Simulation of an AR(1) process.

    phi_1 <- 0.7
    x <- arima.sim(model = list(ar = phi_1), 140)
    # This is the explicit simulation:
    gamma_0 <- 1/(1 - phi_1^2)
    x_0 <- rnorm(1) * sqrt(gamma_0)
    x <- filter(rnorm(140), phi_1, method = "recursive", init = x_0)

Example 4.15. Consider a stationary AR(1) process. We may write

    X_i = φ_1 X_{i−1} + W_i = ··· = φ_1^n X_{i−n} + ∑_{j=0}^{n−1} φ_1^j W_{i−j}.

[Figure 23: Simulation of AR(1) process in Example 4.14 (simulated values and sample ACF).]

[Figure 24: Autocorrelations of AR(1) with different parameters (φ_1 = 0.9, −0.9, 0.5, −0.7).]
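The closed form ρ_j = φ_1^j from Example 4.13 can be checked numerically with R's ARMAacf, which computes the theoretical autocorrelations of an ARMA model (a quick sketch; the value phi_1 = 0.7 matches Example 4.14, and the comparison itself is an addition here):

```r
# Sketch: verify rho_j = phi_1^j (Example 4.13) via stats::ARMAacf.
phi_1 <- 0.7
rho <- ARMAacf(ar = phi_1, lag.max = 5)  # theoretical ACF at lags 0..5
print(unname(rho))
print(phi_1^(0:5))  # identical up to rounding
```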
Define the causal linear process Y_i = ∑_{j=0}^∞ φ_1^j W_{i−j}; then we may write (detailed proof not examinable)

    (E|X_i − Y_i|²)^{1/2} = (E|φ_1^n X_{i−n} − ∑_{j=n}^∞ φ_1^j W_{i−j}|²)^{1/2}
        ≤ |φ_1|^n (E X_{i−n}²)^{1/2} + ∑_{j=n}^∞ |φ_1|^j (E W_{i−j}²)^{1/2}
        = |φ_1|^n ( σ_X + σ/(1 − |φ_1|) ) → 0  as n → ∞,

where σ_X² = E X_1². This implies X_i = Y_i (almost surely).

We may also write the autoregressive process in terms of the backshift operator, as

    X_i − ∑_{j=1}^p φ_j B^j X_i = W_i,    (4)

or φ(B) X_i = W_i, where

Definition 4.16 (Characteristic polynomial of AR(p)).

    φ(z) := 1 − ∑_{j=1}^p φ_j z^j.

Remark 4.17. Note the minus sign in the AR polynomial, contrary to the plus in the MA polynomial. In some contexts (especially signal processing), the AR coefficients are often defined as φ̃_i = −φ_i, so that the AR polynomial looks exactly like the MA polynomial.

Theorem 4.18. The (stationary) AR(p) process exists and can be written as a causal linear process if and only if φ(z) ≠ 0 for all z ∈ ℂ with |z| ≤ 1, that is, the roots of the complex polynomial φ(z) lie strictly outside the unit disc.

For a full proof, see for example Theorem 3.1.1 of Brockwell and Davis. However, to get the idea, we may write informally X_i = φ(B)^{−1} W_i, and expand the reciprocal of the characteristic polynomial as

    1/φ(z) = ∑_{j=0}^∞ c_j z^j,  for |z| ≤ 1 + ε.

This means that we may write the AR(p) as a causal linear process

    X_i = ∑_{j=0}^∞ c_j W_{i−j},

where the coefficients satisfy |c_j| ≤ K(1 + ε/2)^{−j}.[10]

Remark 4.19. This justifies viewing AR(p) as an 'MA(∞)' with coefficients (c_j)_{j≥1}. This also implies that we may approximate AR(p) with arbitrary precision by MA(q) with large enough q.

4.3 Invertibility of MA

Example 4.20. Let θ_1 ∈ (0, 1) and σ² > 0 be some parameters, and consider two MA(1) models,

    X_i = W_i + θ_1 W_{i−1},    (W_n) i.i.d. ∼ N(0, σ²),
    X̃_i = W̃_i + θ̃_1 W̃_{i−1},    (W̃_n) i.i.d. ∼ N(0, σ̃²),

where θ̃_1 = 1/θ_1 and σ̃² = σ² θ_1². We have

    γ_0 = σ²(1 + θ_1²),    γ_1 = σ² θ_1,
    γ̃_0 = σ̃²(1 + θ̃_1²),    γ̃_1 = σ̃² θ̃_1.

What do you observe?
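As a hint towards the answer, the two sets of autocovariances can be compared numerically (a sketch; the particular values theta_1 = 0.5 and sigma^2 = 1 are illustrative assumptions, not from the text):

```r
# Sketch: compare the autocovariances of the two MA(1) models in
# Example 4.20 (theta_1 = 0.5 and sigma^2 = 1 are illustrative choices).
theta_1 <- 0.5; sigma2 <- 1
theta_1_tilde <- 1 / theta_1          # tilde-theta_1 = 1/theta_1
sigma2_tilde <- sigma2 * theta_1^2    # tilde-sigma^2 = sigma^2 * theta_1^2
gamma_0 <- sigma2 * (1 + theta_1^2)
gamma_1 <- sigma2 * theta_1
gamma_0_tilde <- sigma2_tilde * (1 + theta_1_tilde^2)
gamma_1_tilde <- sigma2_tilde * theta_1_tilde
# Both differences are zero: the two models share all autocovariances,
# so they cannot be distinguished from second-order properties alone.
print(c(gamma_0 - gamma_0_tilde, gamma_1 - gamma_1_tilde))
```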
It turns out that the following invertibility condition resolves the MA(q) identifiability problem, and it is therefore standard to assume that the roots of the characteristic polynomial lie outside the unit disc.

Theorem 4.21. If the roots of the characteristic polynomial of MA(q) are strictly outside the unit circle, the MA(q) is invertible in the sense that it satisfies

    W_i = ∑_{j=0}^∞ β_j X_{i−j},

where the constants satisfy β_0 = 1 and |β_j| ≤ K(1 + ε)^{−j} for some constants K < ∞ and ε > 0.

As with Theorem 4.18, we may write symbolically, from X_i = θ(B) W_i, that

    W_i = (1/θ(B)) X_i = ∑_{j=0}^∞ β_j X_{i−j},

where the constants β_j are uniquely determined by 1/θ(z) = ∑_{j=0}^∞ β_j z^j, as the roots of θ(z) lie outside the unit disc.

[10] Because c_j (1 + ε/2)^j → 0 as j → ∞.

4.4 Autoregressive moving average (ARMA)

Definition 4.22 (Autoregressive moving average ARMA(p,q) process). Suppose φ_1, ..., φ_p ∈ ℝ are coefficients of a (stationary) AR(p) process, θ_1, ..., θ_q ∈ ℝ, and (W_i) ∼ WN(σ²). The (stationary) ARMA(p,q) process with these parameters is a process satisfying

    X_i = ∑_{j=1}^p φ_j X_{i−j} + ∑_{j=0}^q θ_j W_{i−j},    (5)

with the convention θ_0 = 1 and where the first sum vanishes if p = 0.

Remark 4.23. AR(p) is ARMA(p,0) and MA(q) is ARMA(0,q). We may write ARMA(p,q) briefly with the characteristic polynomials of the AR and MA parts and the backshift operator as φ(B) X_i = θ(B) W_i.

Exact simulation of a general ARMA(p,q) model is not straightforward, but we can simulate it approximately by setting X_{−p+1} = ··· = X_0 = 0 (say) and then following (5). Then X_b, X_{b+1}, ..., X_{b+n} is an approximate sample of a stationary ARMA(p,q) if b is 'large enough'. This is what the R function arima.sim does; its parameter n.start is b above.

Example 4.24. Simulation of an ARMA(2,1) model with φ_1 = 0.3, φ_2 = −0.4, θ_1 = −0.8.
    x <- arima.sim(list(ma = c(-0.8), ar = c(0.3, -0.4)), 140, n.start = 1e5)

This is the same as

    q <- 2; n <- 140; n.start <- 1e5
    z <- filter(rnorm(n.start + n), c(1, -0.8), sides = 1)
    z <- tail(z, n.start + n - q)
    x <- tail(filter(z, c(0.3, -0.4), method = "recursive"), n)

(The latter may sometimes be necessary, because arima.sim checks the stability of the AR part by calculating the roots of φ(z) numerically, which is notoriously unstable if the order of φ is large. Sometimes arima.sim refuses to simulate a stable ARMA.)

[Figure 25: Simulation of ARMA(2,1) in Example 4.24 (simulated values and sample ACF).]

Remark 4.25. If the characteristic polynomials θ(z) and φ(z) of an ARMA(p,q) share a (complex) root, say x_1 = y_1, then

    θ(z)/φ(z) = [(z − x_1)(z − x_2) ··· (z − x_q)] / [(z − y_1)(z − y_2) ··· (z − y_p)]
              = [(z − x_2) ··· (z − x_q)] / [(z − y_2) ··· (z − y_p)] = θ̃(z)/φ̃(z),

where θ̃(z) is of order q − 1 and φ̃(z) is of order p − 1, and it turns out that φ̃(B) X_i = θ̃(B) W_i, which means that the model reduces to ARMA(p − 1, q − 1).

Condition 4.26 (Regularity conditions for ARMA). In what follows, we shall assume the following:

(a) The roots of the AR characteristic polynomial are strictly outside the unit disc (cf. Theorem 4.18).
(b) The roots of the MA characteristic polynomial are strictly outside the unit disc (cf. Theorem 4.21).
(c) The AR and MA characteristic polynomials do not have common roots (cf. Remark 4.25).

Theorem 4.27. A stationary ARMA(p,q) model satisfying Condition 4.26 exists, is invertible and can be written as a causal linear process

    X_i = ∑_{j=0}^∞ ξ_j W_{i−j},    W_i = ∑_{j=0}^∞ β_j X_{i−j},

where the constants ξ_j and β_j satisfy

    ∑_{j=0}^∞ ξ_j z^j = θ(z)/φ(z)    and    ∑_{j=0}^∞ β_j z^j = φ(z)/θ(z).

In addition, β_0 = 1 and there exist constants K < ∞ and ε > 0 such that max{|ξ_j|, |β_j|} ≤ K(1 + ε)^{−j} for all j ≥ 0.

Remark 4.28. In fact, the coefficients ξ_j (or β_j) of any ARMA(p,q) can easily be calculated numerically from the parameters.
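One way to compute these coefficients numerically, as a sketch, is R's ARMAtoMA, which expands θ(z)/φ(z); below for the ARMA(2,1) of Example 4.24, with the β_j obtained by swapping the roles of the two polynomials (minding the opposite sign conventions of the AR and MA parts):

```r
# Sketch (cf. Remark 4.28): causal coefficients xi_j of the ARMA(2,1)
# from Example 4.24, via stats::ARMAtoMA, which expands theta(z)/phi(z)
# (the leading coefficient xi_0 = 1 is omitted from the result).
xi <- ARMAtoMA(ar = c(0.3, -0.4), ma = c(-0.8), lag.max = 10)
# beta_j of the inverted representation: expand phi(z)/theta(z) by
# treating theta(z) as the AR polynomial and phi(z) as the MA
# polynomial; note the sign flips required by the two conventions.
beta <- ARMAtoMA(ar = 0.8, ma = c(-0.3, 0.4), lag.max = 10)
print(xi[1])    # xi_1 = phi_1 + theta_1 = 0.3 - 0.8 = -0.5
print(beta[1])  # beta_1 = -theta_1 - phi_1 = 0.8 - 0.3 = 0.5
```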
Also the autocovariance can be calculated numerically up to any lag in a straightforward way; cf. Brockwell and Davis, pp. 91–95. In R, the autocorrelation coefficients can be calculated with ARMAacf.
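For instance, for the ARMA(2,1) of Example 4.24 (a sketch; the lag range is an arbitrary choice):

```r
# Sketch: theoretical autocorrelations of the ARMA(2,1) of Example 4.24
# up to lag 15, computed with stats::ARMAacf (lag 0 is included).
rho <- ARMAacf(ar = c(0.3, -0.4), ma = c(-0.8), lag.max = 15)
plot(0:15, rho, type = "h", xlab = "Lag", ylab = "ACF")
```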