
Time Series: Co-integration

See also: Time Series: Economic Forecasting; Time Series: Nonstationary Distributions and Unit Roots

N. H. Chan

Time Series: Cycles

Time series data in economics and other fields of social science often exhibit cyclical behavior. For example, aggregate retail sales are high in November and December and follow a seasonal cycle; voter registrations are high before each presidential election and follow an election cycle; and aggregate macroeconomic activity falls into recession every six to eight years and follows a business cycle. In spite of this cyclicality, these series are not perfectly predictable, and the cycles are not strictly periodic. That is, each cycle is unique. Quite generally, it is useful to think of time series as realizations of stochastic processes, and this raises the question of how to represent cyclicality in stochastic processes. The major descriptive tool for this is the 'spectral density' (or 'spectrum'), which is the subject of this article.

1. Cycles in a Typical Time Series

Figure 1 shows a time series plot of new housing authorizations ('building permits') issued by communities in the USA, monthly, from 1960 through 1999. This plot has characteristics that are typical of many economic time series. First, the plot shows a clear seasonal pattern: permits are low in the late fall and early winter, and rise markedly in the spring and summer. This seasonal pattern is persistent throughout the sample period, but does not repeat itself exactly year after year. In addition to the seasonal pattern, there is a slow-moving change in the level of the series associated with the business cycle. For example, local minima of the series are evident in 1967, the mid-1970s, the early 1980s, and in 1990, which correspond to periods of macroeconomic slowdown or recession in the US economy.

How can the cyclical variability in the series be represented? One approach is to use a periodic function like a sine or cosine function. But this is inadequate in at least two ways. First, a deterministic periodic function doesn't capture the randomness in the series. Second, since several different periodicities (seasonal, business cycle) are evident in the plot, several periodic functions will be required. The next section presents a representation of a stochastic process that has these ingredients.

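The 'cyclical but not strictly periodic' behavior described above is easy to mimic in a short simulation. The sketch below uses hypothetical numbers, not the building permits data: it combines a 12-month seasonal, a slower swing of roughly 90 months, and noise, so the series is strongly correlated with itself 12 months apart but never repeats exactly.

```python
import numpy as np

# Hypothetical monthly series: seasonal cycle + slower "business cycle" swing + noise.
# All magnitudes are illustrative choices, not estimates from any real series.
rng = np.random.default_rng(0)
t = np.arange(480)                               # 40 years of monthly observations
seasonal = 1.0 * np.cos(2 * np.pi * t / 12)      # 12-month cycle
business = 0.8 * np.cos(2 * np.pi * t / 90)      # roughly 90-month cycle
noise = rng.normal(0, 0.5, t.size)
y = seasonal + business + noise

# Cyclical but not strictly periodic: values 12 months apart are highly
# correlated, yet the series never exactly repeats itself.
lag12_corr = np.corrcoef(y[:-12], y[12:])[0, 1]
```

The high (but far from perfect) 12-month correlation is exactly the feature that a single deterministic sine or cosine cannot reproduce.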

Figure 1 Building permits in the USA

2. Representing Cyclical Behavior in Time Series: The Spectrum

2.1 Spectral Representation of a Covariance Stationary Process

Consider a sequence of scalar random variables {Y_t}, where t = 0, ±1, ±2, …, with E(Y_t) = μ and Cov(Y_t, Y_{t−k}) = λ_k. That is, assume that Y_t is observed at regular intervals, like a week, a month, a year, etc. Assume that the first and second moments of the process do not depend on time, that is, μ and λ_k are time invariant. To simplify notation, assume that μ = 0.

To motivate the spectral representation of Y_t, it is useful to begin with a very special and simple stochastic process:

Y_t = α cos(ωt) + δ sin(ωt)   (1)

where α and δ are random variables with E(α) = E(δ) = 0; E(αδ) = 0; and E(α²) = E(δ²) = σ². While simple, this process has three attractive characteristics. First, it is periodic: since cos(a + 2π) = cos(a) and sin(a + 2π) = sin(a), then Y_{t+2πj/ω} = Y_t for |j| = 1, 2, …. So Y_t repeats itself with a period of 2π/ω. Second, the random components α and δ give Y_t a random amplitude (value of Y_t at its peak) and a random phase (value at t = 0). Thus, two realizations of Y_t will have different amplitudes and different timing of their peaks and troughs. Finally, the process is covariance stationary with E(Y_t) = 0 and λ_k = σ² cos(ωk).

Adding together several components like Eqn. 1 produces a more interesting stochastic process:

Y_t = Σ_{j=1}^{n} [α_j cos(ω_j t) + δ_j sin(ω_j t)]   (2)

with E(α_j) = E(δ_j) = 0 for all j; E(α_j δ_k) = 0 for all j, k; E(α_j α_k) = E(δ_j δ_k) = 0 for j ≠ k; and E(α_j²) = E(δ_j²) = σ_j². For this process, Y_t is the sum of n different uncorrelated components, each corresponding to a different frequency, and each with its own variance. (The variance of α_j cos(ω_j t) + δ_j sin(ω_j t) is σ_j².) A calculation shows E(Y_t) = 0 and λ_k = Σ_{j=1}^{n} σ_j² cos(ω_j k), and the variance of Y_t is given by var(Y_t) = Σ_{j=1}^{n} σ_j².

So far, this has all been special. That is, two very special processes have been constructed. However, a fundamental result (Cramér 1942, Kolmogorov 1940) shows that a generalization of this decomposition can be used to represent any covariance stationary process. The result, known as the 'Spectral Representation Theorem,' says that if Y_t is covariance stationary, then it can be represented as

Y_t = ∫_0^π cos(ωt) dα(ω) + ∫_0^π sin(ωt) dδ(ω)

where dα(ω) and dδ(ω) are zero-mean random variables that are mutually uncorrelated, uncorrelated across frequency, and have variances that depend on frequency. Like the special example shown in Eqn. 2, this general representation decomposes Y_t into a set of strictly periodic components, each uncorrelated with the others, and each with its own variance. For example, processes with important seasonal components will have large variances for the components corresponding to the seasonal frequencies, processes with strong 'business cycle' components will have large variances at business cycle frequencies, etc.

Sometimes the spectral representation is written using complex exponentials as:

Y_t = ∫_{−π}^{π} e^{iωt} dγ(ω)   (3)

where dγ(ω) = (1/2)[dα(ω) − i dδ(ω)] for ω ≥ 0 and dγ(ω) = dγ(−ω)* for ω < 0, where * denotes complex conjugation. This representation simplifies notation for some calculations.

The spectral representation Eqn. 3 is a generalization of Eqn. 2 where the 'increments' dγ(ω) have orthogonality properties like those of the α_j and δ_j. The variance of each component is frequency specific (like σ_j²), and is summarized by a density function S(ω). Specifically, the variance of dγ(ω) is E(dγ(ω) dγ(ω)*) = S(ω)dω. (The use of a density function to summarize the variance of dγ(ω) is not completely general and rules out processes with deterministic (or perfectly predictable) components. The references listed in the last section provide this more general result.)

You may wonder why the spectral representation doesn't use frequencies larger than π. The answer is that these are not needed to describe a process measured at discrete intervals. This discreteness means that periodic components associated with frequencies larger than π will look just like components associated with frequencies less than π. For example, a component with frequency ω = 2π will have a period of 1 and, because the series is measured only once every period, this component will appear to be constant: it will look just like (be 'aliased' with) a component that has a frequency of 0.

2.2 Relationship Between the Spectral Density and the Autocovariances

There is a one-to-one relationship between the spectral density function and the autocovariances of the process. The autocovariances follow directly from the spectral representation:

λ_k = E(Y_t Y_{t−k}) = E[ (∫ e^{iωt} dγ(ω)) (∫ e^{−iω(t−k)} dγ(ω)*) ] = ∫ e^{iωk} S(ω) dω   (4)

The spectrum can be determined by inverting this relation:

S(ω) = (2π)^{−1} Σ_{j=−∞}^{∞} λ_j e^{−iωj}   (5)

(To verify that Eqn. 5 is the inverse of Eqn. 4, use

∫_{−π}^{π} e^{iωk} dω = 2π for k = 0, and 0 for k ≠ 0   (6)

so that (2π)^{−1} ∫_{−π}^{π} Σ_{j=−∞}^{∞} λ_j e^{−iω(j−k)} dω = λ_k.)

A simpler formula for the spectrum is

S(ω) = (2π)^{−1} [λ_0 + 2 Σ_{j=1}^{∞} λ_j cos(jω)]   (7)

which follows from Eqn. 5, since λ_j = Cov(Y_t, Y_{t−j}) = Cov(Y_{t+j}, Y_t) = λ_{−j}.

2.3 A Summary of the Properties of the Spectrum

Summarizing the results presented in Sect. 2.2, the spectrum (or spectral density function) has four important properties.
(a) S(ω)dω can be interpreted as the variance of the cyclical component of Y corresponding to the frequency ω. The period of this component is 2π/ω.
(b) S(ω) ≥ 0. This follows because S(ω) is a variance function.
(c) S(ω) = S(−ω). This follows from the definition of γ(ω) in the spectral representation Eqn. 3, or from Eqn. 7, since cos(a) = cos(−a). Because of this symmetry, plots of the spectrum are presented for 0 ≤ ω ≤ π.
(d) Var(Y_t) = ∫_{−π}^{π} S(ω) dω.

2.4 The Spectrum of Building Permits

The time series plot of building permits (Fig. 1) shows that most of the variability in the series comes from two sources: seasonal variation over the year and business-cycle variability. This is evident in the estimated spectrum for the series, shown in Fig. 2. Most of the mass in the spectrum is concentrated around the seven peaks evident in the plot. (These peaks are sufficiently large that the spectrum is plotted on a log scale.) The first peak occurs at frequency ω = 0.07, corresponding to a period of 90 months. This represents the business cycle variability in the series. The other peaks occur at the seasonal frequency 2π/12 and its harmonics 4π/12, 6π/12, 8π/12, 10π/12, and π. These are the seasonal frequencies: the first corresponds to a period of 12 months, and the harmonics to periods of 6, 4, 3, 2.4, and 2 months.

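The moment calculations for the process in Eqn. 2 can be verified by Monte Carlo. The sketch below draws the α_j and δ_j coefficients many times for two components; the particular frequencies and variances are arbitrary illustrative choices, not values from the article. It checks the ensemble autocovariance against λ_k = Σ_j σ_j² cos(ω_j k) and the variance against Σ_j σ_j².

```python
import numpy as np

# Monte Carlo check of Eqn. 2: Y_t = sum_j [a_j cos(w_j t) + d_j sin(w_j t)]
# with independent zero-mean coefficients a_j, d_j of variance sig2_j.
rng = np.random.default_rng(1)
omegas = np.array([2 * np.pi / 12, 2 * np.pi / 90])  # a seasonal and a slower cycle
sig2 = np.array([1.0, 2.0])                          # variance of each component
T, reps = 50, 20000

t = np.arange(T)
Y = np.zeros((reps, T))
for w, s2 in zip(omegas, sig2):
    a = rng.normal(0, np.sqrt(s2), (reps, 1))        # random amplitude/phase draws
    d = rng.normal(0, np.sqrt(s2), (reps, 1))
    Y += a * np.cos(w * t) + d * np.sin(w * t)

# Ensemble autocovariance at lag k vs. lambda_k = sum_j sig2_j cos(w_j k)
k = 3
lam_k_mc = np.mean(Y[:, k:] * Y[:, :-k])
lam_k_theory = np.sum(sig2 * np.cos(omegas * k))
var_theory = np.sum(sig2)                            # var(Y_t) = sum_j sig2_j
```

Up to simulation error, the ensemble moments match the stated formulas for every t, confirming that the process is covariance stationary even though each realization is a sum of strictly periodic functions.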

Figure 2 Spectrum of building permits

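The properties listed in Sect. 2.3, and the frequency-to-period conversion used to read Fig. 2, can be illustrated numerically from Eqn. 7. The autocovariances below (λ_0 = 1.25, λ_1 = 0.5, zero beyond lag 1) are an assumed example, not the building permits series.

```python
import numpy as np

# Spectrum from autocovariances via Eqn. 7 for an illustrative process with
# lambda_0 = 1.25, lambda_1 = 0.5, and lambda_j = 0 for j > 1.
lam0, lam1 = 1.25, 0.5
w = np.linspace(-np.pi, np.pi, 4001)
dw = w[1] - w[0]
S = (lam0 + 2 * lam1 * np.cos(w)) / (2 * np.pi)      # Eqn. 7

# Property (d): Var(Y) = integral of S over (-pi, pi]  (simple Riemann sum)
var_back = np.sum(S) * dw                             # ~ lam0
# Eqn. 4 recovers lambda_1 from the spectrum
lam1_back = np.sum(np.cos(w) * S) * dw                # ~ lam1

# Reading a peak: a spectral peak at w = 2*pi/12 corresponds to a 12-month
# cycle, as with the seasonal peak in the building permits spectrum.
period = 2 * np.pi / (2 * np.pi / 12)
```

The computed spectrum is nonnegative and symmetric (properties (b) and (c)), and integrating it recovers the variance and the lag-1 autocovariance, illustrating the one-to-one map of Sect. 2.2.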
3. Spectral Properties of Moving Average Filters

3.1 Some General Results

Often, one time series (Y) is converted into another time series (X) through a moving average function such as:

X_t = Σ_{j=−r}^{s} c_j Y_{t−j}

For example, X_t might be the first difference of Y_t, i.e., X_t = Y_t − Y_{t−1}, or an annual moving average X_t = (1/12) Σ_{j=0}^{11} Y_{t−j}, or one of the more complicated moving averages that approximate official seasonal adjustment procedures (see Time Series: Seasonal Adjustment). Let the 'lag' operator L shift a time series back by one period (so that LY_t = Y_{t−1}, L²Y_t = L(LY_t) = Y_{t−2}, L^{−1}Y_t = Y_{t+1}, etc.); then this moving average can be represented as:

X_t = c(L)Y_t

where

c(L) = c_{−r}L^{−r} + … + c_{−1}L^{−1} + c_0 + c_1 L + … + c_s L^s

The operator c(L) is sometimes called a 'linear filter.'

How does c(L) change the spectral properties of Y? If the spectral representation of Y_t is written as:

Y_t = ∫_{−π}^{π} e^{iωt} dγ(ω)

then X_t can be written as

X_t = Σ_{j=−r}^{s} c_j Y_{t−j} = Σ_{j=−r}^{s} c_j ∫_{−π}^{π} e^{iω(t−j)} dγ(ω) = ∫_{−π}^{π} e^{iωt} [Σ_{j=−r}^{s} c_j e^{−iωj}] dγ(ω) = ∫_{−π}^{π} e^{iωt} c(e^{−iω}) dγ(ω) = ∫_{−π}^{π} e^{i(ωt−ρ(ω))} g(ω) dγ(ω)   (8)

where the last line uses the polar form of c(e^{−iω}): c(e^{−iω}) = g(ω)e^{−iρ(ω)}, where

g(ω) = |c(e^{−iω})|   (9)

and

ρ(ω) = tan^{−1}( −Im[c(e^{−iω})] / Re[c(e^{−iω})] )   (10)

This representation for X_t shows that the filter c(L) 'amplifies' each frequency component of Y_t by the factor g(ω) and shifts it back in time by ρ(ω)/ω time units. The function g(ω) is called the filter 'gain' (or sometimes the 'amplitude gain'). The function ρ(ω) is called the filter 'phase,' and g(ω)² = c(e^{−iω})c(e^{iω}) is called the 'power transfer function' of the filter.

3.2 Three Examples

3.2.1 Example one: lag filter. Suppose that c(L) = L². Then c(e^{−iω}) = e^{−2iω},

g(ω) = |c(e^{−iω})| = 1

and

ρ(ω) = tan^{−1}[ sin(2ω) / cos(2ω) ] = 2ω

Thus, this filter does not alter the amplitude of any of the components, g(ω) = 1, but shifts each component back in time by ρ(ω)/ω = 2 time periods.

3.2.2 Example two: first difference filter. Suppose that c(L) = (1 − L) so that X_t is the first difference of Y_t. Then c(e^{−iω}) = 1 − e^{−iω},

g(ω) = |1 − e^{−iω}| = √(2(1 − cos(ω)))

and

ρ(ω) = tan^{−1}[ −sin(ω) / (1 − cos(ω)) ]

Thus, both the gain and phase are frequency specific. For example, g(0) = 0, so that the first difference filter eliminates the lowest frequency component of the series (the level of the series is 'differenced out'), and g(π) = 2, so that the high frequency components are amplified (X_t is 'choppier' than Y_t).

3.2.3 Example three: X-12 seasonal adjustment filter. The official monthly seasonal adjustment procedure in the USA and several other countries (Census X-12) can be well approximated by a linear filter c(L) = Σ_{j=−84}^{84} c_{|j|} L^j, where the coefficients c_j are given in Wallis (1974). Since the filter is symmetric, c_j = c_{−j}, c(e^{iω}) is real and so ρ(ω) = 0. The function g(ω) is plotted in Fig. 3. The gain is nearly unity except near the seasonal frequencies, and so the filter can be interpreted as producing a new series that leaves the non-seasonal components unaltered, but eliminates the seasonal components. For a detailed discussion of seasonal adjustment procedures see Time Series: Seasonal Adjustment.

3.3 Band-pass Filters

The seasonal adjustment filter has the desirable properties of (essentially) zeroing out certain frequency components (the seasonals) and leaving the other components (the non-seasonals) unaltered. Filters with this characteristic are called 'band-pass filters,' since they 'pass' components associated with certain frequencies.

Band-pass filters for prespecified frequencies are easy to construct. For example, suppose that you want a filter with no phase shift that passes components between 0 ≤ ω ≤ ω̄. That is, you want to construct a filter, say c(L), with gain |g(ω)| = 1 for 0 ≤ ω ≤ ω̄ and |g(ω)| = 0 for ω̄ < ω ≤ π. (Assume that the corresponding negative frequencies are also passed.) Choosing a symmetric filter, c_j = c_{−j}, will make the phase 0 for all ω. In this case, c(e^{iω}) is real, and so the gain is given by:

g(ω) = Σ_{j=−∞}^{∞} c_{|j|} e^{−iωj}

Now, using Eqn. 6, it is straightforward to verify the identity

c_j = (2π)^{−1} ∫_{−π}^{π} e^{iωj} [Σ_{k=−∞}^{∞} c_{|k|} e^{−iωk}] dω = (2π)^{−1} ∫_{−π}^{π} e^{iωj} c(e^{iω}) dω

Replacing c(e^{iω}) = g(ω) with the target value of the gain for the band-pass filter and carrying out the integration yields

c_j = sin(ω̄j)/(jπ) for j ≠ 0, and c_j = ω̄/π for j = 0

The resulting filter c(L) passes all components with frequencies between 0 ≤ ω ≤ ω̄. This means that the filter 1 − c(L) passes everything except these frequencies. The filter c(L) is called a low-pass filter (it passes the low frequencies), and 1 − c(L) is called a high-pass filter (it passes the high frequencies). Combinations of high- and low-pass filters can be used to construct band-pass filters for any set of frequencies. For example, a filter that passes components with frequencies between ω₁ and ω₂ (with ω₂ > ω₁) can be constructed as the difference of the low-pass filter for ω₂ and the low-pass filter for ω₁.

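The gain calculations of Sects. 3.2 and 3.3 are easy to reproduce numerically. The sketch below evaluates the first difference gain both directly and via its closed form, and builds a truncated version of the low-pass filter c_j = sin(ω̄j)/(πj); the cutoff ω̄ = π/6 and truncation point K = 200 are arbitrary illustrative choices.

```python
import numpy as np

# Gains evaluated as |c(e^{-iw})| on a frequency grid over (0, pi]
w = np.linspace(1e-3, np.pi, 1000)

# First difference filter c(L) = 1 - L
gain_fd = np.abs(1 - np.exp(-1j * w))
gain_fd_formula = np.sqrt(2 * (1 - np.cos(w)))       # closed form from the text

# Truncated low-pass filter with cutoff wbar: c_0 = wbar/pi,
# c_j = sin(wbar*j)/(pi*j) for 1 <= j <= K (illustrative wbar and K)
wbar, K = np.pi / 6, 200
j = np.arange(1, K + 1)
c = np.concatenate(([wbar / np.pi], np.sin(wbar * j) / (np.pi * j)))
# symmetric filter (c_j = c_{-j}): real gain c_0 + 2*sum_j c_j cos(w j), zero phase
gain_lp = c[0] + 2 * np.sum(c[1:, None] * np.cos(j[:, None] * w), axis=0)
```

Because only finitely many coefficients are kept, the truncated gain shows small ripples near the cutoff, one symptom of the slowly decaying coefficients of the exact filter; away from the cutoff it is close to one inside the passband and close to zero outside.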
Figure 3 Gain of linear approximation to monthly X-12 seasonal adjustment filter

One practical problem with exact band-pass filters is that the coefficients c_j die out very slowly (at the rate 1/j). This introduces important 'endpoint' problems when applying these filters to finite realizations of data, say {Y_t}_{t=1}^{T}. One approach is simply to truncate the filter at some point, for example to use c_k(L) = Σ_{j=−k}^{k} c_j L^j and apply this filter to {Y_t}_{t=k+1}^{T−k}. An alternative procedure is to construct a minimum mean square error estimate of the infeasible band-pass values:

X_t = c(L)Y_t = Σ_{i=−∞}^{∞} c_i Y_{t−i}

E(X_t | {Y_j}_{j=1}^{T}) = Σ_{i=−∞}^{∞} c_i E(Y_{t−i} | {Y_j}_{j=1}^{T})

This can be accomplished by using backcasts and forecasts for the missing pre-sample and post-sample values of Y_t. The relative merits of these two approaches for band-pass filters are discussed in Baxter and King (1999); Geweke (1978) and Dagum (1980) discuss this problem in the context of seasonal adjustment.

4. Spectra of Commonly Used Stochastic Processes

Suppose that Y has spectrum S_Y(ω) and X_t = c(L)Y_t. What is the spectrum of X? As was shown in Eqn. 8, the frequency components of X are the frequency components of Y scaled by the factor g(ω)e^{−iρ(ω)}, where g(ω) is the gain and ρ(ω) is the phase of c(L). This means that the spectra of X and Y are related by:

S_X(ω) = g(ω)² S_Y(ω) = |c(e^{−iω})|² S_Y(ω)

which follows from |e^{−iρ(ω)}| = 1 and the definition of g(ω).

Now, suppose that ε_t is a 'white noise' process, defined by the properties E(ε_t) = 0, λ_0 = σ², and λ_k = 0 for k ≠ 0. The spectrum of ε_t is then easily calculated from Eqn. 7:

S_ε(ω) = (2π)^{−1}σ²

So, the spectrum of white noise is constant. (Which is why the process is called 'white' noise.) Now suppose that Y_t = c(L)ε_t. Then:

S_Y(ω) = |c(e^{−iω})|² S_ε(ω) = |c(e^{−iω})|² (2π)^{−1}σ²

This result can be used to determine the spectrum of any stationary ARMA process (see Time Series: ARIMA Methods).

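The filtered-white-noise result S_Y(ω) = |c(e^{−iω})|²(2π)^{−1}σ² can be checked by simulation: filter white noise, compute the periodogram-style quantity |Σ y_t e^{−iω_j t}|²/(2πT) for each realization, and average across realizations. The filter c(L) = 1 + 0.5L below is an arbitrary illustrative choice, not an example from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
T, reps, sigma = 512, 200, 1.0
theta = 0.5                      # illustrative filter c(L) = 1 + theta*L

# Theoretical spectrum S_Y(w) = |c(e^{-iw})|^2 * sigma^2 / (2*pi)
freqs = 2 * np.pi * np.arange(1, T // 2) / T
S_theory = np.abs(1 + theta * np.exp(-1j * freqs))**2 * sigma**2 / (2 * np.pi)

# Average |DFT|^2 / (2*pi*T) over independent realizations of Y = c(L)eps
I_sum = np.zeros_like(freqs)
for _ in range(reps):
    e = rng.normal(0, sigma, T + 1)
    y = e[1:] + theta * e[:-1]                   # white noise through the filter
    dft = np.fft.fft(y - y.mean())               # sum_t y_t e^{-i w_j t}
    I_sum += np.abs(dft[1:T // 2])**2 / (2 * np.pi * T)
I_bar = I_sum / reps
```

The average across realizations settles on the theoretical spectrum, while any single realization's estimate does not; this is the consistency issue taken up in Sect. 5.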
If Y_t follows an ARMA process, then it can be represented as:

φ(L)Y_t = θ(L)ε_t

The autoregressive operator φ(L) can be inverted to yield Y_t = c(L)ε_t with c(L) = φ(L)^{−1}θ(L). This means that:

S_Y(ω) = |c(e^{−iω})|²(2π)^{−1}σ² = (2π)^{−1}σ² |θ(e^{−iω})|² / |φ(e^{−iω})|²   (11)

As an example, consider the AR(1) model:

Y_t = φY_{t−1} + ε_t

equivalently written as

(1 − φL)Y_t = ε_t

Applying Eqn. 11 yields

S_Y(ω) = (σ²/2π) · 1/|1 − φe^{−iω}|² = (σ²/2π) · 1/(1 + φ² − 2φ cos(ω))

At ω = 0 this spectrum equals σ²[2π(1 − φ)²]^{−1}; when 0 < φ < 1, it falls steadily as ω increases from 0 to π. This means that, relative to white noise, the low frequency components of the AR(1) are more important than the high frequency components. Thus, realizations of the series appear smoother than white noise.

5. Spectral Estimation

There are two general approaches to estimating spectra. Perhaps the simplest is to estimate an ARMA model for the series (as explained in Time Series: ARIMA Methods) and then compute the implied spectrum for this estimated model. Asymptotic approximations can then be used for statistical inference. Since the estimators of the ARMA parameters are asymptotically normally distributed and the spectrum is a smooth function of these parameters (see Eqn. 11), the estimated spectrum at any point ω will be asymptotically normal with a variance that can be computed using the δ-method (a mean-value expansion).

An interesting extension of this procedure is discussed in Berk (1974). He shows that, quite generally, a suitably long autoregression can be specified and the spectrum estimated using the estimated AR coefficients. The precise definition of 'suitably long' depends on the size of the available sample, with more terms included in larger samples. These 'autoregressive spectral estimators' are shown by Berk to be consistent and approximately normally distributed in large samples.

An alternative set of estimators is based on nonparametric methods. These estimators can be motivated by Eqn. 7, which shows the spectrum as a weighted sum of the autocovariances. This suggests an estimator of the form:

Ŝ(ω) = (2π)^{−1} Σ_{k=−l_T}^{l_T} w_T(k) λ̂_k

where λ̂_k is an estimator of λ_k, l_T is a truncation value that depends on the sample size T, and w_T(k) is a weight function that also depends on T. These estimators can be studied using nonparametric methods (see Nonparametric Statistics: Asymptotics).

A closely related set of nonparametric estimators is based on the 'periodogram.' The periodogram provides a frequency decomposition of the sample variance of a partial realization of the process, say {Y_t}_{t=1}^{T}. The periodogram is computed at frequencies ω_j = 2πj/T, j = 1, 2, …, T/2 (assuming T is even), and the value of the periodogram at frequency ω_j is:

p_j = (2/T) |Σ_{t=1}^{T} (y_t − ȳ) e^{−iω_j t}|²

The periodogram has three interesting asymptotic properties: (a) E(p_j)/4π → S_Y(ω_j); (b) var(p_j/4π) → S_Y(ω_j)²; and (c) cov(p_j, p_k) → 0 for j ≠ k. Property (a) shows that a scaled version of the periodogram provides an asymptotically unbiased estimator of the spectrum, but since the variance doesn't approach zero (property (b)), the estimator is not consistent. The third property shows that averaging a set of periodogram ordinates around a given frequency reduces the variance of the estimator, although (from property (a)) it may introduce bias. By carefully selecting the averaging weights, consistent estimators with good finite sample mean square error properties can be constructed.

Some recent advances for nonparametric spectral estimators are developed in Andrews (1991).

See also: Nonparametric Statistics: Asymptotics; Time Series: ARIMA Methods; Time Series: Seasonal Adjustment

Bibliography

Andrews D W K 1991 Heteroskedasticity and autocorrelation consistent covariance matrix estimation. Econometrica 59: 817–58
Baxter M B, King R G 1999 Measuring business cycles: approximate band-pass filters for economic time series. Review of Economics and Statistics 81(4): 575–93
Berk K N 1974 Consistent autoregressive spectral estimates. Annals of Statistics 2(3): 489–502


Brockwell P J, Davis R A 1991 Time Series: Theory and Methods, 2nd edn. Springer, New York
Cramér H 1942 On harmonic analysis in certain functional spaces. Arkiv för Matematik, Astronomi och Fysik 28B(12): 1–7
Dagum E B 1980 The X-11 ARIMA Seasonal Adjustment Method. Research Paper, Statistics Canada
Geweke J F 1978 The revision of seasonally adjusted time series. Proceedings of the Business and Economic Statistics Section, American Statistical Association, pp. 320–25
Granger C W J, Newbold P 1977 Forecasting Economic Time Series. Academic Press, New York
Hamilton J 1994 Time Series Analysis. Princeton University Press, Princeton, NJ
Harvey A 1993 Time Series Models, 2nd edn. MIT Press, Cambridge, MA
Kolmogorov A N 1940 Kurven im Hilbertschen Raum die gegenüber einer einparametrigen Gruppe von Bewegungen invariant sind. C. R. (Doklady) de l'Académie des Sciences de l'URSS, New Series 26: 6–9
Nerlove M, Grether D M, Carvalho J L 1979 Analysis of Economic Time Series. Academic Press, New York
Priestley M B 1981 Spectral Analysis and Time Series. Academic Press, New York
Wallis K F 1974 Seasonal adjustment and relations between variables. Journal of the American Statistical Association 69: 18–31

M. W. Watson

Time Series: Economic Forecasting

Time-series forecasts are used in a wide range of economic activities, including setting monetary and fiscal policies, state and local budgeting, financial management, and financial engineering. Key elements of economic forecasting include selecting the forecasting model(s) appropriate for the problem at hand, assessing and communicating the uncertainty associated with a forecast, and guarding against model instability.

1. Time Series Models for Economic Forecasting

Broadly speaking, statistical approaches to economic forecasting fall into two categories: time-series methods and structural economic models. Time-series methods use economic theory mainly as a guide to variable selection, and rely on past patterns in the data to predict the future. In contrast, structural economic models take as a starting point formal economic theory and attempt to translate this theory into empirical relations, with parameter values either suggested by theory or estimated using historical data. In practice, time-series models tend to be small, with at most a handful of variables, while structural models tend to be large, simultaneous equation systems which sometimes incorporate hundreds of variables (see Economic Panel Data; Simultaneous Equation Estimation: Overview). Time-series models typically forecast the variable(s) of interest by implicitly extrapolating past policies into the future, while structural models, because they rely on economic theory, can evaluate hypothetical policy changes. In this light, perhaps it is not surprising that time-series models typically produce forecasts as good as, or better than, far more complicated structural models. Still, it was an intellectual watershed when several studies in the 1970s (reviewed in Granger and Newbold 1986) showed that simple univariate time-series models could outforecast the large structural models of the day, a result which continues to be true (see McNees 1990). This good forecasting performance, plus the relatively low cost of developing and maintaining time-series forecasting models, makes time-series modeling an attractive way to produce baseline economic forecasts.

At a general level, time-series forecasting models can be written

y_{t+h} = g(X_t, θ) + ε_{t+h}   (1)

where y_t denotes the variable or variables to be forecast, t denotes the date at which the forecast is made, h is the forecast horizon, X_t denotes the variables used at date t to make the forecast, θ is a vector of parameters of the function g, and ε_{t+h} denotes the forecast error. The variables in X_t usually include current and lagged values of y_t. It is useful to define the forecast error in (1) such that it has conditional mean zero, that is, E(ε_{t+h} | X_t) = 0. Thus, given the predictor variables X_t, under mean-squared error loss the optimal forecast of y_{t+h} is its conditional mean, g(X_t, θ). Of course, this forecast is infeasible because in practice neither g nor θ is known. The task of the time-series forecaster therefore is to select the predictors X_t, to approximate g, and to estimate θ in such a way that the resulting forecasts are reliable and have mean-squared forecast errors as close as possible to that of the optimal infeasible forecast.

Time-series models are usefully separated into univariate and multivariate models. In univariate models, X_t consists solely of current and past values of y_t. In multivariate models, this is augmented by data on other time series observed at date t. The next subsections provide a brief survey of some leading time-series models used in economic forecasting. For simplicity, attention is restricted to one-step ahead forecasts (h = 1). Here, the focus is on forecasting in a stationary environment; the issue of nonstationarity in the form of structural breaks or time-varying parameters is returned to below.

1.1 Univariate Models

Univariate models can be either linear, so that g is linear in X_t, or nonlinear. All linear time-series models

Copyright © 2001 Elsevier Science Ltd. All rights reserved.
International Encyclopedia of the Social & Behavioral Sciences ISBN: 0-08-043076-7