B.Sc./Grad. Dip.: Probability Models and Time Series
MAS programmes: Stochastic Models and Time Series

4 The properties of AR(1) and MA processes

4.1 Autocovariances and autocorrelations for an AR(1) process

We derive the autocovariance and autocorrelation functions for an AR(1) process $\{Y_t\}$ using two alternative methods. The first method is based upon the use of the model equation,

    $Y_t = \phi Y_{t-1} + \epsilon_t, \quad t \in \mathbb{Z}.$    (1)

Recall the infinite moving average expression,

    $Y_t = \sum_{i=0}^{\infty} \phi^i \epsilon_{t-i},$    (2)

derived in Section 3.7. The representation was shown to hold in the sense of mean square convergence for $|\phi| < 1$. However, as we will see later, the representation also holds in the sense of "almost sure" or "probability one" convergence: with such an interpretation, it is apparent that $Y_t$ and $\epsilon_{t+i}$ are uncorrelated for $i \geq 1$. Therefore, it can be argued that $Y_s$ and $\epsilon_t$ are uncorrelated for $s < t$, so that, since the process mean is zero,

    $E[Y_s \epsilon_t] = 0, \quad s < t.$

Squaring Equation (1) and taking expectations,

    $E[Y_t^2] = \phi^2 E[Y_{t-1}^2] + 2\phi E[Y_{t-1}\epsilon_t] + E[\epsilon_t^2],$

i.e., $\gamma_0 = \phi^2 \gamma_0 + \sigma^2$, where $\sigma^2 = \mathrm{var}(\epsilon_t)$. Hence,

    $\gamma_0 = \frac{\sigma^2}{1 - \phi^2}.$    (3)

Multiplying Equation (1) by $Y_{t-\tau}$, where $\tau \geq 1$, and taking expectations yields

    $\gamma_\tau = \phi \gamma_{\tau-1}, \quad \tau \geq 1.$    (4)

Equation (4) together with the initial condition of Equation (3) has the solution

    $\gamma_\tau = \frac{\sigma^2 \phi^\tau}{1 - \phi^2}, \quad \tau \geq 0.$    (5)

Recalling that $\rho_\tau = \gamma_\tau / \gamma_0$, we obtain

    $\rho_\tau = \phi^\tau, \quad \tau \geq 0.$    (6)

Note the geometric decline of the autocorrelation function. If $\phi < 0$ then the autocorrelation function oscillates and has a negative correlation at lag 1.

A slight variant of this method for obtaining the expression for $\rho_\tau$ is to divide through in Equation (4) by $\gamma_0$ to obtain the recurrence relation

    $\rho_\tau = \phi \rho_{\tau-1}, \quad \tau \geq 1.$    (7)

Equation (7) together with the initial condition $\rho_0 = 1$ has the solution obtained previously as Equation (6).
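The geometric decay $\rho_\tau = \phi^\tau$ of Equation (6) is easy to check numerically. The sketch below is a minimal illustration, not part of the notes: it assumes Gaussian white noise, and the function names are our own. It simulates a long AR(1) path and compares its sample autocorrelations with the theoretical values.

```python
import numpy as np

def ar1_acf(phi, max_lag):
    """Theoretical AR(1) autocorrelations rho_tau = phi**tau (Equation (6))."""
    return phi ** np.arange(max_lag + 1)

def sample_acf(y, max_lag):
    """Sample autocorrelations of a series y (mean subtracted first)."""
    y = y - y.mean()
    c0 = np.dot(y, y) / len(y)
    return np.array([np.dot(y[tau:], y[:len(y) - tau]) / len(y) / c0
                     for tau in range(max_lag + 1)])

rng = np.random.default_rng(0)
phi, n = 0.7, 200_000
eps = rng.standard_normal(n)
y = np.empty(n)
y[0] = eps[0]
for t in range(1, n):
    y[t] = phi * y[t - 1] + eps[t]        # Y_t = phi * Y_{t-1} + eps_t

print(ar1_acf(phi, 3))                    # [1.0, 0.7, 0.49, 0.343]
print(sample_acf(y, 3))                   # close to the theoretical values
```

With 200,000 observations the sample autocorrelations typically agree with $\phi^\tau$ to about two decimal places, and the oscillating pattern predicted for $\phi < 0$ appears if the sign of `phi` is flipped.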
Using the symmetry properties $\gamma_{-\tau} = \gamma_\tau$ and $\rho_{-\tau} = \rho_\tau$, we may, if we wish, extend the range of the values of $\tau$ in the solutions for the autocovariance function $\{\gamma_\tau\}$ and the autocorrelation function $\{\rho_\tau\}$. Thus, for example, we may write

    $\rho_\tau = \phi^{|\tau|}, \quad \tau \in \mathbb{Z}.$

An alternative approach to finding the expressions for $\{\gamma_\tau\}$ and $\{\rho_\tau\}$ is based upon the infinite moving average expression of Equation (2). For $\tau \geq 0$,

    $\gamma_\tau = E[Y_t Y_{t-\tau}]$
    $= E\left[\left(\sum_{i=0}^{\infty} \phi^i \epsilon_{t-i}\right)\left(\sum_{j=0}^{\infty} \phi^j \epsilon_{t-\tau-j}\right)\right]$
    $= E\left[\left(\sum_{i=0}^{\infty} \phi^i \epsilon_{t-i}\right)\left(\sum_{j=\tau}^{\infty} \phi^{j-\tau} \epsilon_{t-j}\right)\right]$
    $= \sum_{i=0}^{\infty} \sum_{j=\tau}^{\infty} \phi^{i+j-\tau} E[\epsilon_{t-i}\epsilon_{t-j}]$
    $= \sum_{j=\tau}^{\infty} \phi^{2j-\tau} \sigma^2$
    $= \frac{\sigma^2 \phi^\tau}{1-\phi^2}.$

Thus we have again the result of Equation (5). Note that in the above derivation we have used the properties of the autocovariance function of a white noise process as given in Equations (1) and (2) of Section 3.3.

4.2 The lag operator

It is often convenient to use the lag operator $L$ to characterize models and to carry out mathematical manipulations. (An alternative terminology is backward shift operator, with corresponding notation $B$.) The operator is defined by

    $L Y_t = Y_{t-1}, \quad t \in \mathbb{Z}.$

Defining $L^j$ to be the $j$-fold composition of $L$, we note that

    $L^j Y_t = Y_{t-j}.$

Equation (1) for the AR(1) model may be written as

    $(1 - \phi L) Y_t = \epsilon_t.$

The infinite moving average representation of $Y_t$ may be more simply derived using the formalism (and associated algebra) of the lag operator:

    $Y_t = (1 - \phi L)^{-1} \epsilon_t = \sum_{i=0}^{\infty} \phi^i L^i \epsilon_t = \sum_{i=0}^{\infty} \phi^i \epsilon_{t-i},$

assuming that the sum converges.

4.3 Linear processes

Definition 4.3.1 (Linear Process) A process $\{Y_t\}$ that has the representation

    $Y_t = \sum_{i=0}^{\infty} \psi_i \epsilon_{t-i},$    (8)

where $\{\epsilon_t\}$ is a white noise process and $\{\psi_i\}$ is a sequence of coefficients such that $\sum_{j=0}^{\infty} |\psi_j| < \infty$, is referred to as a linear process.

Proposition 4.3.2 (Convergence and Stationarity of a Linear Process) For a linear process $\{Y_t\}$, with representation $Y_t = \sum_{i=0}^{\infty} \psi_i \epsilon_{t-i}$:

(a) $Y_t = \sum_{i=0}^{\infty} \psi_i \epsilon_{t-i}$ is well defined, in the sense that the right-hand side is almost surely bounded (i.e.
bounded with probability one);

(b) the sequence of partial sums $\sum_{i=0}^{n} \psi_i \epsilon_{t-i}$ converges to $Y_t$ in mean square as $n \to \infty$;

(c) $\{Y_t\}$ is (weakly) stationary, with mean zero and autocovariance function given by

    $\gamma_\tau = E[Y_t Y_{t-\tau}]$
    $= E\left[\left(\sum_{i=0}^{\infty} \psi_i \epsilon_{t-i}\right)\left(\sum_{j=0}^{\infty} \psi_j \epsilon_{t-\tau-j}\right)\right]$
    $= E\left[\left(\sum_{i=0}^{\infty} \psi_i \epsilon_{t-i}\right)\left(\sum_{j=\tau}^{\infty} \psi_{j-\tau} \epsilon_{t-j}\right)\right]$
    $= \sum_{i=0}^{\infty} \sum_{j=\tau}^{\infty} \psi_i \psi_{j-\tau} E[\epsilon_{t-i}\epsilon_{t-j}]$
    $= \sigma^2 \sum_{j=\tau}^{\infty} \psi_j \psi_{j-\tau}, \quad \tau \geq 0.$    (9)

In particular,

    $\mathrm{var}(Y_t) = \gamma_0 = \sigma^2 \sum_{j=0}^{\infty} \psi_j^2.$

But since $\sum_{j=0}^{\infty} |\psi_j| < \infty$, then $\sum_{j=0}^{\infty} \psi_j^2 < \infty$, and so $\mathrm{var}(Y_t)$ is finite. Finiteness of $\gamma_\tau$ for $\tau \neq 0$ follows from the calculation used in Examples 3 Qu. 3.

Remarks 4.3.3

(i) A linear process $\{Y_t\}$, as defined above, is sometimes said to be causal, or a causal function of $\{\epsilon_t\}$.

(ii) We see from the infinite moving average representation of Equation (8) that the value $Y_s$ of the process at time $s$ depends only on $\{\epsilon_t : t \leq s\}$ and not on $\{\epsilon_t : t > s\}$.

(iii) As for the special case of the AR(1) process, $Y_s$ and $\epsilon_t$ are uncorrelated for $s < t$, so that

    $E[Y_s \epsilon_t] = 0, \quad s < t,$    (10)

a fact that we shall make use of later.

(iv) In the special case $\psi_i = \phi^i$, $i \in \mathbb{N}$, we have the infinite moving average representation (2) of the AR(1) process, and thus $\sum_{j=0}^{\infty} |\psi_j| < \infty$ is satisfied if and only if $|\phi| < 1$. It is readily checked by substitution that the process $\{Y_t\}$ defined by $Y_t = \sum_{i=0}^{\infty} \phi^i \epsilon_{t-i}$ satisfies Equation (1). It follows that a viable (i.e. stationary) AR(1) process with autoregressive parameter $\phi$ exists if $|\phi| < 1$.

(v) The square summability of the coefficients, i.e. $\sum_{j=0}^{\infty} \psi_j^2 < \infty$, is actually both necessary and sufficient for mean square convergence of the linear process representation.

4.4 The moving average model

Consider a market that every working day receives fresh information which affects the price of a certain commodity. Let $Y_t$ denote the price change on day $t$. The immediate effect of the information received on day $t$ upon $Y_t$ is represented by $\epsilon_t$, where $\{\epsilon_t\}$ is assumed to be a white noise process.
But there is also a residual effect, such that $Y_t$ is affected by the information received on the $q$ previous days. A simple model represents $\{Y_t\}$ as a moving average process.

A moving average process of order $q$, an MA($q$) process, with zero mean is a process $\{Y_t\}$ which satisfies the relation

    $Y_t = \epsilon_t + \sum_{i=1}^{q} \theta_i \epsilon_{t-i}, \quad t \in \mathbb{Z},$    (11)

where $\{\epsilon_t\}$ is a white noise process with mean zero and variance $\sigma^2$, and where $\theta_q \neq 0$. For convenience, we may define $\theta_0 = 1$ and rewrite Equation (11) as

    $Y_t = \sum_{i=0}^{q} \theta_i \epsilon_{t-i}, \quad t \in \mathbb{Z}.$    (12)

• Note that we are dealing here with a finite moving average, having a finite number of moving average parameters, as against the infinite moving average representation that we described in the discussion of the AR(1) process.

• Historically, "moving averages" were introduced rather differently, with the coefficients $\theta_i$ defined in such a way that $\sum \theta_i = 1$. Each $Y_t$ value is then a weighted average of the $\epsilon_t$ values. The average "moves" as $t$ moves through successive values.

An MA($q$) process is a special case of a linear process as defined in Section 4.3, with

    $\psi_i = \theta_i$ for $0 \leq i \leq q$, and $\psi_i = 0$ for $i > q$.

Because all but a finite number of the coefficients $\psi_i$ are zero, an MA($q$) process necessarily satisfies the summability condition on the coefficients in the definition of a linear process. Thus, for all moving average parameter values $\theta_1, \theta_2, \ldots, \theta_q$, Equation (11) or (12) defines a stationary process with mean zero.

Applying the result of Equation (9) to the present case, for $\tau \geq 0$,

    $\gamma_\tau = \sigma^2 \sum_{j=\tau}^{q} \theta_j \theta_{j-\tau}.$

Thus

    $\gamma_\tau = \sigma^2 \sum_{i=0}^{q-\tau} \theta_i \theta_{i+\tau}$ for $0 \leq \tau \leq q$, and $\gamma_\tau = 0$ for $\tau > q$.

In particular,

    $\mathrm{var}(Y_t) = \gamma_0 = \sigma^2 \sum_{i=0}^{q} \theta_i^2.$

The autocorrelation function is given by

    $\rho_\tau = \left(\sum_{i=0}^{q-\tau} \theta_i \theta_{i+\tau}\right) \Big/ \left(\sum_{i=0}^{q} \theta_i^2\right)$ for $0 \leq \tau \leq q$, and $\rho_\tau = 0$ for $\tau > q$.

Note that, for an MA($q$) process, there is a cut-off point in the autocorrelation function: all autocorrelations beyond the $q$-th are zero.
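The cut-off in the MA($q$) autocovariances can be computed directly from the formula above. The following is a small sketch rather than library code; the function name `ma_acvf` and the MA(2) parameter values are our own choices for illustration.

```python
import numpy as np

def ma_acvf(theta, sigma2, max_lag):
    """Autocovariances of an MA(q) process with coefficients theta_1..theta_q.

    Implements gamma_tau = sigma^2 * sum_{i=0}^{q-tau} theta_i * theta_{i+tau}
    with theta_0 = 1, and gamma_tau = 0 for tau > q.
    """
    psi = np.concatenate(([1.0], np.asarray(theta, dtype=float)))  # theta_0 = 1
    q = len(psi) - 1
    gamma = np.zeros(max_lag + 1)
    for tau in range(min(q, max_lag) + 1):
        gamma[tau] = sigma2 * np.dot(psi[: q + 1 - tau], psi[tau:])
    return gamma

gamma = ma_acvf([0.5, -0.4], sigma2=1.0, max_lag=5)   # an MA(2) example
rho = gamma / gamma[0]
print(gamma)   # [1.41, 0.3, -0.4, 0.0, 0.0, 0.0]
print(rho)     # nonzero at lags 0, 1, 2; exactly zero for tau > 2
```

The exact zeros beyond lag $q = 2$ are the signature cut-off that distinguishes moving average processes from the geometric decay of autoregressions.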
By comparison, for the AR(1) process and, as we shall see, for more general autoregressive processes, all autocorrelations are generally non-zero but die away geometrically as a function of the lag. This fact may be borne in mind when we are examining the sample autocorrelation function of an observed time series and considering what model to fit to the data.

4.5 The first order moving average process

In the special case of the MA(1) process $\{Y_t\}$, which satisfies the equation

    $Y_t = \epsilon_t + \theta \epsilon_{t-1}, \quad t \in \mathbb{Z},$    (13)

the autocorrelation function is given by

    $\rho_0 = 1, \quad \rho_1 = \frac{\theta}{1+\theta^2}, \quad \rho_\tau = 0 \text{ for } \tau \geq 2.$

Note that if $\theta > 0$ then the MA(1) process is smoother than a white noise process, but if $\theta < 0$ then the MA(1) process is more jagged than a white noise process.
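The lag-1 formula $\rho_1 = \theta/(1+\theta^2)$ can likewise be checked by simulation; note in passing that it implies $|\rho_1| \leq 1/2$, with the extremes attained at $\theta = \pm 1$. The sketch below is our own minimal check, assuming Gaussian noise and an arbitrary choice of $\theta$.

```python
import numpy as np

theta = 0.8
rho1 = theta / (1 + theta ** 2)           # theoretical lag-1 autocorrelation
print(rho1)                               # about 0.488

rng = np.random.default_rng(1)
n = 200_000
eps = rng.standard_normal(n + 1)
y = eps[1:] + theta * eps[:-1]            # Y_t = eps_t + theta * eps_{t-1}
y = y - y.mean()
r1 = np.dot(y[1:], y[:-1]) / np.dot(y, y)
print(r1)                                 # close to the theoretical value
```

Repeating the run with `theta = -0.8` gives a sample $r_1$ near $-0.488$, matching the observation that a negative $\theta$ produces a more jagged series.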
