
NBER Summer Institute Minicourse – What’s New in Econometrics: Time Series

Lecture 1

July 14, 2008

Frequency Domain Descriptive Statistics


Outline

0. Introduction to Course
1. Time Series Basics
2. Spectral Representation of a Covariance Stationary Process
3. Spectral Properties of Filters
   (a) Band-Pass Filters
   (b) One-Sided Filters
4. Multivariate Spectra
5. Spectral Estimation (a few words – more in Lecture 9)


Introduction to Course

Some themes that have occupied time-series econometricians and empirical macroeconomists in the last decade or so:

1. Low-frequency variability: (i) unit roots, near unit roots, fractional models, and so forth; (ii) time-varying parameters; (iii) stochastic volatility; (iv) HAC covariance matrices; (v) long-run identification in SVARs

2. Identification: methods for dealing with “weak” identification in linear models (linear IV, SVAR) and nonlinear models (GMM).

3. Forecasting: (i) inference procedures for relative forecast performance of existing models; (ii) potential improvements in forecasts from using many predictors


Some tools used:

1. VARs, spectra, filters, GMM, asymptotic approximations from CLT and LLN.

2. Functional CLT

3. Simulation Methods (MCMC, Bootstrap)

We will talk about these themes and tools. This will not be a comprehensive literature review. Our goal is to present some key ideas as simply as possible.


July 14: Preliminaries and inference

1. Spectral preliminaries and applications, linear filtering theory (MW)

2. Functional central limit theory and structural breaks (testing) (MW)

3. Many instruments/weak identification in GMM I (JS)

4. Many instruments/weak identification in GMM II (JS)


July 15: Methods for macroeconometric modeling

5. The Kalman filter, nonlinear filtering, and Markov Chain Monte Carlo (MW)

6. Specification and estimation of models with stochastic time variation (MW)

7. Recent developments in structural VAR modeling (JS)

8. Econometrics of DSGE models (JS)


July 16: HAC, forecasting-related topics

9. Heteroskedasticity- and autocorrelation-consistent (HAC) standard errors (MW)

10. Forecast assessment (MW)

11. Dynamic factor models and forecasting with many predictors (JS)

12. Macro modeling with many predictors (JS)


Time Series Basics (and notation)

(References: Hayashi (2000), Hamilton (1994), … , lots of other books)

1. {Yt}: a sequence of random variables

2. Stochastic Process: The probability law governing {Yt}

3. Realization: One draw from the process, {yt}

4. Strict Stationarity: The process is strictly stationary if the probability distribution of $(Y_t, Y_{t+1}, \dots, Y_{t+k})$ is identical to the probability distribution of $(Y_\tau, Y_{\tau+1}, \dots, Y_{\tau+k})$ for all $t$, $\tau$, and $k$. (Thus, all joint distributions are time invariant.)

5. Autocovariances: $\gamma_{t,k} = \mathrm{cov}(Y_t, Y_{t+k})$


6. Autocorrelations: $\rho_{t,k} = \mathrm{cor}(Y_t, Y_{t+k})$

7. Covariance Stationarity: The process is covariance stationary if $\mu_t = E(Y_t) = \mu$ and $\gamma_{t,k} = \gamma_k$ for all $t$ and $k$.

8. White noise: A process is called white noise if it is covariance stationary with $\mu = 0$ and $\gamma_k = 0$ for $k \neq 0$.

9. Martingale: Yt follows a martingale process if E(Yt+1 | Ft) = Yt, where Ft ⊆ Ft+1 is the time t information set.

10. Martingale Difference Process: $Y_t$ follows a martingale difference process if $E(Y_{t+1} \mid F_t) = 0$. $\{Y_t\}$ is then called a martingale difference sequence or “mds.”


11. The Lag Operator: $L$ lags the elements of a sequence by one period: $Ly_t = y_{t-1}$, $L^2 y_t = y_{t-2}$, and so on. If $b$ denotes a constant, then $bLY_t = L(bY_t) = bY_{t-1}$.

12. Linear filter: Let $\{c_j\}$ denote a sequence of constants and

$c(L) = c_{-r}L^{-r} + c_{-r+1}L^{-r+1} + \dots + c_0 + c_1 L + \dots + c_s L^s$

denote a polynomial in $L$. Note that $X_t = c(L)Y_t = \sum_{j=-r}^{s} c_j Y_{t-j}$ is a moving average of $Y_t$. $c(L)$ is sometimes called a linear filter (for reasons discussed below), and $X$ is called a filtered version of $Y$.

13. AR(p) process: $\phi(L)Y_t = \varepsilon_t$, where $\phi(L) = 1 - \phi_1 L - \dots - \phi_p L^p$ and $\varepsilon_t$ is white noise.

14. MA(q) process: $Y_t = \theta(L)\varepsilon_t$, where $\theta(L) = 1 - \theta_1 L - \dots - \theta_q L^q$ and $\varepsilon_t$ is white noise.

15. ARMA(p,q): φ(L)Yt = θ(L)εt.

16. Wold decomposition theorem (e.g., Brockwell and Davis (1991))

Suppose Yt is generated by a linearly indeterministic covariance stationary process. Then Yt can be represented as

Yt = εt + c1εt−1 + c2εt−2 + … ,

where $\varepsilon_t$ is white noise with variance $\sigma_\varepsilon^2$, $\sum_{i=1}^{\infty} c_i^2 < \infty$, and

εt = Yt – Proj(Yt | lags of Yt) (so that εt is “fundamental”).


17. Spectral Representation Theorem (e.g., Brockwell and Davis (1991)):

Suppose $Y_t$ is a covariance stationary zero-mean process. Then there exists an orthogonal-increment process $Z(\omega)$ such that

(i) $\mathrm{Var}(Z(\omega)) = F(\omega)$, and

(ii) $Y_t = \int_{-\pi}^{\pi} e^{it\omega}\, dZ(\omega)$,

where $F$ is the spectral distribution of the process. (The spectrum, $S(\omega)$, is the density associated with $F$.) This is a useful and important decomposition, and we’ll spend some time discussing it.


[Figure: U.S. Residential Building Permits, 1959:1–2008:5]


Some questions

1. How important are the “seasonal” or “business cycle” components in $Y_t$?
2. Can we measure the variability at a particular frequency? Frequency 0 (the long run) will be particularly important, as that is what HAC covariance matrices are all about.
3. Can we isolate/eliminate the “seasonal” (“business-cycle”) component?
4. Can we estimate the business cycle or “gap” component in real time? If so, how accurate is our estimate?


1. Spectral representation of a covariance stationary stochastic process

Deterministic processes:

(a) $Y_t = \cos(\omega t)$: strictly periodic with period $2\pi/\omega$, $Y_0 = 1$, amplitude = 1.

(b) $Y_t = a\cos(\omega t) + b\sin(\omega t)$: strictly periodic with period $2\pi/\omega$, $Y_0 = a$, amplitude = $\sqrt{a^2 + b^2}$.


Stochastic Process:

$Y_t = a\cos(\omega t) + b\sin(\omega t)$, where $a$ and $b$ are zero-mean random variables, mutually uncorrelated, with common variance $\sigma^2$.

Second moments:

E(Yt) = 0

$\mathrm{Var}(Y_t) = \sigma^2\{\cos^2(\omega t) + \sin^2(\omega t)\} = \sigma^2$

$\mathrm{Cov}(Y_t, Y_{t-k}) = \sigma^2\{\cos(\omega t)\cos(\omega(t-k)) + \sin(\omega t)\sin(\omega(t-k))\} = \sigma^2\cos(\omega k)$


Stochastic Process with more components:

$Y_t = \sum_{j=1}^{n}\{a_j\cos(\omega_j t) + b_j\sin(\omega_j t)\}$, where the $\{a_j, b_j\}$ are uncorrelated zero-mean random variables with $\mathrm{Var}(a_j) = \mathrm{Var}(b_j) = \sigma_j^2$.

Second moments:

E(Yt) = 0

$\mathrm{Var}(Y_t) = \sum_{j=1}^{n}\sigma_j^2$ (decomposition of variance)

$\mathrm{Cov}(Y_t, Y_{t-k}) = \sum_{j=1}^{n}\sigma_j^2\cos(\omega_j k)$ (decomposition of autocovariance)


Stochastic Process with even more components:

$Y_t = \int_0^{\pi}\cos(\omega t)\, da(\omega) + \int_0^{\pi}\sin(\omega t)\, db(\omega)$

$da(\omega)$ and $db(\omega)$: random variables, zero-mean, mutually uncorrelated, uncorrelated across frequencies, with common variance that depends on the frequency. This variance function is called the spectrum.


A convenient change of notation:

$Y_t = a\cos(\omega t) + b\sin(\omega t)$
$\;\;= \tfrac{1}{2}e^{i\omega t}(a - ib) + \tfrac{1}{2}e^{-i\omega t}(a + ib)$
$\;\;= e^{i\omega t}g + e^{-i\omega t}\bar{g}$

where $i = \sqrt{-1}$, $e^{i\omega} = \cos(\omega) + i\sin(\omega)$, $g = \tfrac{1}{2}(a - ib)$, and $\bar{g}$ is the complex conjugate of $g$.


Similarly

$Y_t = \int_0^{\pi}\cos(\omega t)\, da(\omega) + \int_0^{\pi}\sin(\omega t)\, db(\omega)$
$\;\;= \tfrac{1}{2}\int_0^{\pi} e^{i\omega t}\left(da(\omega) - i\, db(\omega)\right) + \tfrac{1}{2}\int_0^{\pi} e^{-i\omega t}\left(da(\omega) + i\, db(\omega)\right)$
$\;\;= \int_{-\pi}^{\pi} e^{it\omega}\, dZ(\omega)$

where $dZ(\omega) = \tfrac{1}{2}(da(\omega) - i\, db(\omega))$ for $\omega \geq 0$ and $dZ(\omega) = \overline{dZ(-\omega)}$ for $\omega < 0$.

Because $da$ and $db$ have mean zero, so does $dZ$. Denote the variance of $dZ(\omega)$ by $\mathrm{Var}(dZ(\omega)) = E(dZ(\omega)\overline{dZ(\omega)}) = S(\omega)\, d\omega$; using the assumption that $da$ and $db$ are uncorrelated across frequencies, $E(dZ(\omega)\overline{dZ(\omega')}) = 0$ for $\omega \neq \omega'$.


Second moments of Y:

$E(Y_t) = E\left[\int_{-\pi}^{\pi} e^{it\omega}\, dZ(\omega)\right] = \int_{-\pi}^{\pi} e^{it\omega}\, E(dZ(\omega)) = 0$

$\gamma_k = E(Y_t Y_{t-k}) = E\left[\int_{-\pi}^{\pi} e^{it\omega}\, dZ(\omega)\int_{-\pi}^{\pi} e^{-i(t-k)\lambda}\,\overline{dZ(\lambda)}\right]$
$\;\;= \int_{-\pi}^{\pi} e^{it\omega}e^{-i(t-k)\omega}\, E(dZ(\omega)\overline{dZ(\omega)}) = \int_{-\pi}^{\pi} e^{ik\omega}S(\omega)\, d\omega$

Setting $k = 0$: $\gamma_0 = \mathrm{Var}(Y_t) = \int_{-\pi}^{\pi} S(\omega)\, d\omega$.


Summarizing

1. $S(\omega)d\omega$ can be interpreted as the variance of the cyclical component of $Y$ corresponding to the frequency $\omega$. The period of this component is $2\pi/\omega$.

2. S(ω) ≥ 0 (it is a variance)

3. $S(\omega) = S(-\omega)$. Because of this symmetry, plots of the spectrum are presented for $0 \leq \omega \leq \pi$.

4. $\gamma_k = \int_{-\pi}^{\pi} e^{ik\omega}S(\omega)\, d\omega$ can be inverted to yield

$S(\omega) = \frac{1}{2\pi}\sum_{k=-\infty}^{\infty}\gamma_k e^{-ik\omega} = \frac{1}{2\pi}\left\{\gamma_0 + 2\sum_{k=1}^{\infty}\gamma_k\cos(\omega k)\right\}$
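As a quick numerical check of this inversion, here is a minimal sketch assuming an AR(1) process with coefficient $\phi$ and unit innovation variance, for which $\gamma_k = \phi^{|k|}/(1-\phi^2)$ and $S(\omega) = (2\pi)^{-1}/(1 - 2\phi\cos\omega + \phi^2)$:

```python
# Numerical check: gamma_k = integral of e^{ikw} S(w) dw for an AR(1).
import numpy as np

phi = 0.8
omega = np.linspace(-np.pi, np.pi, 100001)
dw = omega[1] - omega[0]
S = (1.0 / (2 * np.pi)) / (1.0 - 2 * phi * np.cos(omega) + phi**2)

# k = 0: gamma_0 = Var(Y_t) = 1/(1 - phi^2) should equal the integral of S
print(1.0 / (1.0 - phi**2), np.sum(S) * dw)          # both approx 2.778

# k = 3: the imaginary part of e^{ikw} S(w) integrates to zero by symmetry
k = 3
print(phi**k / (1.0 - phi**2), np.sum(np.cos(k * omega) * S) * dw)  # approx 1.422
```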


The Spectrum of Building Permits

[Figure 2: Spectrum of Building Permits. Vertical axis: ln(spectral power), 0–9; horizontal axis: frequency, 0–π.]

Most of the mass in the spectrum is concentrated around the seven peaks evident in the plot. (These peaks are sufficiently large that the spectrum is plotted on a log scale.) The first peak occurs at frequency ω = 0.07, corresponding to a period of 90 months. The other peaks occur at 2π/12, 4π/12, 6π/12, 8π/12, 10π/12, and π. These are peaks at the seasonal frequencies: the first corresponds to a period of 12 months, and the others are the seasonal “harmonics” with periods of 6, 4, 3, 2.4, and 2 months. (These harmonics are necessary to reproduce an arbitrary − not necessarily sinusoidal − seasonal pattern.)


“Long-Run Variance”

The long-run variance is $S(0)$, the variance of the zero-frequency (or infinite-period) component. Since $S(\omega) = \frac{1}{2\pi}\sum_{k=-\infty}^{\infty}\gamma_k e^{-ik\omega}$, we have $S(0) = \frac{1}{2\pi}\sum_{k=-\infty}^{\infty}\gamma_k$. This plays an important role in econometrics. To see why, suppose $Y_t$ is covariance stationary with mean $\mu$, and let $w_t = Y_t - \mu$. Then

$\mathrm{var}(\sqrt{T}(\bar{Y} - \mu)) = \mathrm{var}\left(\frac{1}{\sqrt{T}}\sum_{t=1}^{T} w_t\right)$
$= \frac{1}{T}\left\{T\gamma_0 + (T-1)(\gamma_1 + \gamma_{-1}) + (T-2)(\gamma_2 + \gamma_{-2}) + \dots + 1\cdot(\gamma_{T-1} + \gamma_{-(T-1)})\right\}$
$= \sum_{j=-T+1}^{T-1}\gamma_j - \sum_{j=-T+1}^{T-1}\frac{|j|}{T}\gamma_j$

If the autocovariances are “1-summable,” so that $\sum_j |j|\,|\gamma_j| < \infty$, then

$\mathrm{var}\left(\frac{1}{\sqrt{T}}\sum_{t=1}^{T} w_t\right) \to \sum_{j=-\infty}^{\infty}\gamma_j = 2\pi S(0)$
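A small simulation sketch of this limit, assuming an AR(1) $Y_t = 0.5Y_{t-1} + \varepsilon_t$ with unit innovation variance (so $\sum_j \gamma_j = 1/(1-\phi)^2 = 4$); the sample size and replication count are illustrative:

```python
# Simulate var(sqrt(T)*(Ybar - mu)) and compare it with 2*pi*S(0) = sum_j gamma_j.
import numpy as np

rng = np.random.default_rng(0)
phi, T, burn, reps = 0.5, 1000, 200, 2000
eps = rng.standard_normal((T + burn, reps))
y = np.zeros((T + burn, reps))
for t in range(1, T + burn):          # AR(1) recursion, vectorized across reps
    y[t] = phi * y[t - 1] + eps[t]
means = y[burn:].mean(axis=0)         # one sample mean per replication

print(T * means.var())                # approx 4.0
print(1.0 / (1.0 - phi) ** 2)         # 2*pi*S(0) = sum_j gamma_j = 4.0
```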


2. Spectral Properties of Filters (Moving Averages)

Let $x_t = c(L)y_t$, where $c(L) = c_{-r}L^{-r} + \dots + c_s L^s$, so that $x$ is a moving average of $y$ with weights given by the $c$’s.

How does c(L) change the cyclical properties of y? To study this, suppose that y is strictly periodic

$y_t = 2\cos(\omega t) = e^{i\omega t} + e^{-i\omega t}$

with period $p = 2\pi/\omega$.

A simple representation for x follows:


$x_t = \sum_{j=-r}^{s} c_j y_{t-j} = \sum_{j=-r}^{s} c_j\left[e^{i\omega(t-j)} + e^{-i\omega(t-j)}\right]$
$\;\;= e^{i\omega t}\sum_{j=-r}^{s} c_j e^{-i\omega j} + e^{-i\omega t}\sum_{j=-r}^{s} c_j e^{i\omega j} = e^{i\omega t}c(e^{-i\omega}) + e^{-i\omega t}c(e^{i\omega})$

$c(e^{i\omega})$ is a complex number, say $c(e^{i\omega}) = a + ib$, where $a = \mathrm{Re}[c(e^{i\omega})]$ and $b = \mathrm{Im}[c(e^{i\omega})]$. Write this number in polar form as

$c(e^{i\omega}) = (a^2 + b^2)^{1/2}\left[\cos(\theta) + i\sin(\theta)\right] = ge^{i\theta}$

where $g = (a^2 + b^2)^{1/2} = \left[c(e^{i\omega})c(e^{-i\omega})\right]^{1/2}$ and $\theta = \tan^{-1}\left(\frac{b}{a}\right) = \tan^{-1}\left(\frac{\mathrm{Im}[c(e^{i\omega})]}{\mathrm{Re}[c(e^{i\omega})]}\right)$


Thus

$x_t = ge^{-i\theta}e^{i\omega t} + ge^{i\theta}e^{-i\omega t}$
$\;\;= g\left[e^{i\omega[t - \theta/\omega]} + e^{-i\omega[t - \theta/\omega]}\right]$
$\;\;= 2g\cos\left(\omega\left(t - \frac{\theta}{\omega}\right)\right)$

so that the filter $c(L)$ “amplifies” $y$ by the factor $g$ and shifts $y$ back in time by $\theta/\omega$ time units.

• Note that $g$ and $\theta$ depend on $\omega$, so it makes sense to write them as $g(\omega)$ and $\theta(\omega)$.
• $g(\omega)$ is called the filter gain (or sometimes the amplitude gain).
• $\theta(\omega)$ is called the filter phase.
• $g(\omega)^2 = c(e^{i\omega})c(e^{-i\omega})$ is called the power of the filter.
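These objects are easy to compute on a frequency grid. A minimal sketch (the helper name is illustrative):

```python
# Gain g(w) = |c(e^{iw})| and phase theta(w) of a filter c(L) = sum_j c_j L^j.
import numpy as np

def gain_phase(c, lags, omegas):
    """c: filter weights; lags: the corresponding powers of L (negative = leads)."""
    ce = np.exp(1j * np.outer(omegas, lags)) @ np.asarray(c)  # c(e^{iw}) on the grid
    return np.abs(ce), np.unwrap(np.angle(ce))

# Check against Example 1 below: c(L) = L^2 has g(w) = 1 and theta(w) = 2w
omegas = np.linspace(0.01, np.pi, 500)
g, theta = gain_phase([1.0], [2], omegas)
# g is identically 1, and theta/omegas is identically 2 (a two-period shift)
```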


Examples

1. $c(L) = L^2$

$c(e^{i\omega}) = e^{2i\omega} = \cos(2\omega) + i\sin(2\omega)$, so that

$\theta(\omega) = \tan^{-1}\left[\frac{\sin 2\omega}{\cos 2\omega}\right] = 2\omega$

and $\theta/\omega = 2$ time periods. Also, $g(\omega) = |c(e^{i\omega})| = 1$.


2. (Sargent (1979)) Kuznets Filter for annual data: Let

$a(L) = (1/5)(L^{-2} + L^{-1} + 1 + L + L^2)$

(which “smooths” the series) and

$b(L) = (L^{-5} - L^{5})$

(which forms centered ten-year differences). Then the Kuznets filter is

$c(L) = b(L)a(L)$

with gain $g(\omega) = |c(e^{i\omega})| = |b(e^{i\omega})|\,|a(e^{i\omega})|$, which is easily computed for a grid of $\omega$ values using Gauss, Matlab, Excel, etc.
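For example, a minimal sketch in Python (function and variable names are illustrative):

```python
# Gain of the Kuznets filter c(L) = b(L)a(L) on a frequency grid.
import numpy as np

def filter_gain(weights, lags, omegas):
    """|sum_j c_j e^{iwj}| evaluated on a grid of frequencies."""
    return np.abs(np.exp(1j * np.outer(omegas, lags)) @ np.asarray(weights))

a_w, a_lags = np.full(5, 1 / 5), np.arange(-2, 3)       # a(L): five-year smoother
b_w, b_lags = np.array([1.0, -1.0]), np.array([-5, 5])  # b(L) = L^-5 - L^5

omegas = np.linspace(0.01, np.pi, 2000)
g = filter_gain(a_w, a_lags, omegas) * filter_gain(b_w, b_lags, omegas)
print(omegas[np.argmax(g)])  # peak near w = 0.30 (period about 21 years)
```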


[Figure: Gain of the Kuznets filter. Peak at ω ≈ 0.30; period = 2π/0.30 ≈ 21 years.]



Example 3: X-11

The linear operations in X-11 can be summarized by the filter $x_t^{sa} = X11(L)x_t$, where $X11(L)$ is a two-sided filter constructed in 8 steps (Young (1968) and Wallis (1974)).


The 8 steps:

X11-1. Form an initial estimate of TC as $\widehat{TC}_t^1 = A_1(L)x_t$, where $A_1(L)$ is the centered 12-month moving average filter $A_1(L) = \sum_{j=-6}^{6} b_j L^j$, with $b_{|6|} = 1/24$ and $b_j = 1/12$ for $-5 \leq j \leq 5$.

X11-2. Form an initial estimate of S + I as $\widehat{SI}_t^1 = x_t - \widehat{TC}_t^1$.

X11-3. Form an initial estimate of S as $\hat{S}_t^1 = S_1(L)\widehat{SI}_t^1$, where $S_1(L) = \sum_{j=-2}^{2} c_j L^{12j}$, and where the $c_j$ are weights from a 3×3 moving average (i.e., 1/9, 2/9, 3/9, 2/9, 1/9).

X11-4. Adjust the estimates of S so that they add to zero (approximately) over any 12-month period: $\hat{S}_t^2 = S_2(L)\hat{S}_t^1$, where $S_2(L) = 1 - A_1(L)$ and $A_1(L)$ is defined in step 1.

X11-5. Form a second estimate of TC as $\widehat{TC}_t^2 = A_2(L)(x_t - \hat{S}_t^2)$, where $A_2(L)$ denotes a “Henderson” moving average filter. (The 13-term Henderson moving average filter is $A_2(L) = \sum_{i=-6}^{6} A_{2,i}L^i$, with $A_{2,0} = .2402$, $A_{2,|1|} = .2143$, $A_{2,|2|} = .1474$, $A_{2,|3|} = .0655$, $A_{2,|4|} = 0$, $A_{2,|5|} = -.0279$, $A_{2,|6|} = -.0194$.)

X11-6. Form a third estimate of S as $\hat{S}_t^3 = S_3(L)(x_t - \widehat{TC}_t^2)$, where $S_3(L) = \sum_{j=-3}^{3} d_j L^{12j}$, and where the $d_j$ are weights from a 3×5 moving average (i.e., 1/15, 2/15, 3/15, 3/15, 3/15, 2/15, 1/15).

X11-7. Adjust the estimates of S so that they add to zero (approximately) over any 12-month period: $\hat{S}_t^4 = S_2(L)\hat{S}_t^3$, where $S_2(L)$ is defined in step 4.

X11-8. Form the final seasonally adjusted value as $x_t^{sa} = x_t - \hat{S}_t^4$.
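As a quick check on step X11-1, in the spirit of the gain plot that follows: the centered 12-month moving average $A_1(L)$ has gain zero at each seasonal frequency $2\pi j/12$, which is why it serves as the initial trend-cycle estimator. A minimal sketch:

```python
# Gain of A1(L) (b_|6| = 1/24, b_j = 1/12 for |j| <= 5) at the seasonal frequencies.
import numpy as np

w = np.r_[1 / 24, np.full(11, 1 / 12), 1 / 24]
lags = np.arange(-6, 7)

seasonal = 2 * np.pi * np.arange(1, 7) / 12      # w = 2*pi*j/12, j = 1,...,6
gain = np.abs(np.exp(1j * np.outer(seasonal, lags)) @ w)
print(np.round(gain, 12))                        # zero at every seasonal frequency
```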


[Figure: Gain of the linear approximation to X-11 (monthly data). Vertical axis: gain, 0–1.2; horizontal axis: frequency, 0–π.]


[Figure: Gain of the HP filter (λ = 1600).]


Spectra of Commonly Used Stochastic Processes

Suppose that $y$ has spectrum $S_y(\omega)$ and $x_t = c(L)y_t$.

What is the spectrum of x?

Because the frequency components of $x$ are the frequency components of $y$ scaled by the factor $g(\omega)e^{i\theta(\omega)}$, the spectra of $x$ and $y$ are related by

$S_x(\omega) = g(\omega)^2 S_y(\omega) = c(e^{i\omega})c(e^{-i\omega})S_y(\omega)$.


Let $\varepsilon_t$ denote a serially uncorrelated (“white noise”) process. Its spectrum is $S_\varepsilon(\omega) = \frac{1}{2\pi}\left\{\gamma_0 + 2\sum_{k=1}^{\infty}\gamma_k\cos(\omega k)\right\} = \sigma_\varepsilon^2/2\pi$.

If yt follows an ARMA process, then it can be represented as

φ(L)yt = θ(L)εt, or yt = c(L)εt with c(L) = θ(L)/φ(L).

The spectrum of $y$ is then

$S_y(\omega) = c(e^{i\omega})c(e^{-i\omega})S_\varepsilon(\omega) = \frac{\theta(e^{i\omega})\theta(e^{-i\omega})}{\phi(e^{i\omega})\phi(e^{-i\omega})}\frac{\sigma_\varepsilon^2}{2\pi}$

Writing this out,

$S_y(\omega) = \frac{(1 - \theta_1 e^{i\omega} - \dots - \theta_q e^{iq\omega})(1 - \theta_1 e^{-i\omega} - \dots - \theta_q e^{-iq\omega})}{(1 - \phi_1 e^{i\omega} - \dots - \phi_p e^{ip\omega})(1 - \phi_1 e^{-i\omega} - \dots - \phi_p e^{-ip\omega})}\frac{\sigma_\varepsilon^2}{2\pi}$
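A minimal sketch of this formula in code, using the sign convention above for the lag polynomials (function names are illustrative):

```python
# ARMA(p,q) spectrum: S_y(w) = |theta(e^{-iw})|^2/|phi(e^{-iw})|^2 * sig2/(2*pi).
import numpy as np

def lagpoly(coefs, z):
    """Evaluate 1 - c_1*z - c_2*z^2 - ... at z (the sign convention above)."""
    out = np.ones_like(z)
    for j, c in enumerate(coefs, start=1):
        out -= c * z**j
    return out

def arma_spectrum(phi, theta, sig2, omegas):
    z = np.exp(-1j * omegas)
    return np.abs(lagpoly(theta, z))**2 / np.abs(lagpoly(phi, z))**2 * sig2 / (2 * np.pi)

# AR(1) check: S_y(0) = sig2/(2*pi*(1 - phi_1)^2)
print(arma_spectrum([0.5], [], 1.0, np.array([0.0])))  # approx 0.6366 = 2/pi
```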


A Classical Problem in Signal Processing: Constructing a Band-Pass Filter (see Baxter and King (1999) for discussion)

Let $c(L) = \sum_{j=-\infty}^{\infty} c_j L^j$. Suppose that we set the phase of $c(L)$ equal to 0

(and thus $c(L)$ is symmetric: $c_j = c_{-j}$) and we want

$\mathrm{gain}(c(L)) = |c(e^{i\omega})| = c(e^{i\omega}) = \begin{cases} 1 & \text{for } -\bar{\omega} \leq \omega \leq \bar{\omega} \\ 0 & \text{elsewhere} \end{cases}$

(where the second equality follows because c(L) is symmetric).

The calculation is straightforward

Lecture 1 ‐ 39, July 21, 2008

Because $c(e^{-i\omega}) = \sum_{j=-\infty}^{\infty} c_j e^{-i\omega j}$, then $c_j = (2\pi)^{-1}\int_{-\pi}^{\pi} e^{ij\omega}c(e^{-i\omega})\, d\omega$ (an identity). Setting the gain equal to unity over the desired frequencies and carrying out the integration yields

$c_j = (2\pi)^{-1}\int_{-\bar{\omega}}^{\bar{\omega}} e^{ij\omega}\, d\omega = \begin{cases} \frac{1}{\pi j}\sin(\bar{\omega}j) & \text{for } j \neq 0 \\ \frac{\bar{\omega}}{\pi} & \text{for } j = 0 \end{cases}$

Comments:
• The values of $c_j$ die out at the rate $j^{-1}$.
• $1 - c(L)$ passes everything except $-\bar{\omega} \leq \omega \leq \bar{\omega}$. Differences of low-pass filters can be used to pass any set of frequencies.
• Baxter and King (1999) show that $c_k(L) = \sum_{j=-k}^{k} c_j L^j$ is an optimal finite-order approximation to $c(L)$, in the sense that the gain of $c_k(L)$ is as close ($L_2$-norm) as possible to the gain of $c(L)$ for a $k$-order filter.
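A sketch of the truncated weights in code, assuming a pass band $[\omega_{lo}, \omega_{hi}]$ formed as the difference of two low-pass filters (names and the monthly band are illustrative):

```python
# Truncated ideal band-pass weights c_j = [sin(w_hi*j) - sin(w_lo*j)]/(pi*j).
import numpy as np

def bandpass_weights(w_lo, w_hi, k):
    j = np.arange(1, k + 1)
    c = np.empty(2 * k + 1)
    c[k] = (w_hi - w_lo) / np.pi                          # c_0
    c[k + 1:] = (np.sin(w_hi * j) - np.sin(w_lo * j)) / (np.pi * j)
    c[:k] = c[k + 1:][::-1]                               # symmetry: c_{-j} = c_j
    return c

# Monthly business-cycle band: periods between 18 and 96 months
c = bandpass_weights(2 * np.pi / 96, 2 * np.pi / 18, k=60)
y = np.random.default_rng(0).standard_normal(400)  # placeholder series
x_bp = np.convolve(y, c, mode="valid")             # loses k observations at each end
```

(Baxter and King also adjust the truncated weights so they sum to zero, making the gain exactly zero at frequency 0.)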

[Figure: Band-pass filter weights (monthly data), periods < 96 months (ω > 2π/96).]


[Figure: Log of real GDP decomposed by frequency. Panels: periods > 32 quarters; periods between 6 and 32 quarters; periods < 6 quarters.]


Minimum MSE One-Sided Filters

Problem: these filters are two-sided with weights that die out slowly. This introduces “endpoint” problems. Geweke (1978) provides a simple way to implement a minimum MSE estimator using data available in real time. (See Christiano and Fitzgerald (2003) for an alternative approach.)

Geweke’s calculation: if $x_t = c(L)y_t = \sum_{i=-\infty}^{\infty} c_i y_{t-i}$, the optimal estimate of $x_t$ given $\{y_j\}_{j=1}^{T}$ is

$E(x_t \mid \{y_j\}_{j=1}^{T}) = \sum_{i=-\infty}^{\infty} c_i E(y_{t-i} \mid \{y_j\}_{j=1}^{T})$
$\;\;= \sum_{i=-\infty}^{0} c_{t-i}\hat{y}_i + \sum_{i=1}^{T} c_{t-i}y_i + \sum_{i=T+1}^{\infty} c_{t-i}\hat{y}_i$

where the $\hat{y}_i$’s denote forecasts and backcasts of $y_i$ constructed from the data $\{y_j\}_{j=1}^{T}$.


These forecasts and backcasts can be constructed using AR, ARMA, VAR, or other models; see Findley et al. (1991) for a description of how this is implemented in X-12.
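A minimal sketch of the padding step, assuming least-squares AR($p$) forecasts, with the same coefficients run in reverse time for the backcasts; the order and horizon are illustrative:

```python
# Pad a sample with h AR(p) backcasts and forecasts, then apply the two-sided filter.
import numpy as np

def pad_with_ar_forecasts(y, p, h):
    """Fit an AR(p) by least squares; append h forecasts and h backcasts."""
    X = np.column_stack([y[p - j - 1:len(y) - j - 1] for j in range(p)])
    phi = np.linalg.lstsq(X, y[p:], rcond=None)[0]
    fwd = list(y)
    for _ in range(h):                    # iterate forecasts forward
        fwd.append(np.dot(phi, fwd[-1:-p - 1:-1]))
    bwd = list(y[::-1])                   # backcasts: run the AR in reverse time
    for _ in range(h):
        bwd.append(np.dot(phi, bwd[-1:-p - 1:-1]))
    return np.r_[bwd[len(y):][::-1], fwd]

y = np.random.default_rng(1).standard_normal(300)  # placeholder series
ypad = pad_with_ar_forecasts(y, p=6, h=60)
# x_one_sided = np.convolve(ypad, c, mode="valid"), with c the two-sided weights
# from the band-pass sketch above, aligned to the original dates
```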

The variance of the error associated with using $\{y_j\}_{j=1}^{T}$ is

$\mathrm{var}\left[x_t - E(x_t \mid \{y_j\}_{j=1}^{T})\right] = \mathrm{var}\left[\sum_{i=-\infty}^{\infty} c_i\left\{E(y_{t-i} \mid \{y_j\}_{j=1}^{T}) - y_{t-i}\right\}\right]$

This looks messy, but it isn’t.

Note: Geweke was concerned with the X-11 filter, but his result applies to any linear filter (band-pass, HP, X-11, …).


Two-Sided Band-Pass Estimates: Logarithm of the Index of Industrial Production (From Watson (2007))


Notes: These panels show the estimated values of the band-pass estimates of the trend (periods > 96 months), the gap (periods < 96 months), and the business cycle (periods between 18 and 96 months). Panel D shows the standard errors of the estimates relative to values constructed using a symmetric 100-year moving average. Values shown in panel A correspond to logarithms, while values shown in panels B–D are percentage points.


Two-Sided (Solid) and One-Sided (Dashed) Band-Pass Estimates: Index of Industrial Production

Notes: The solid (red) lines are the two-sided estimates shown in Figure 2. The dashed (green) lines are one-sided estimates that do not use data after the date shown on the horizontal axis.


Standard Errors of One-Sided Band-Pass Estimates: AR (Univariate) and VAR (Multivariate) Forecasts

                Gap (1-sided)        Business Cycle (1-sided)
Series          AR       VAR         AR       VAR
Ind. Prod.      2.01     1.88        1.88     1.80
Unemp. Rate     0.46     0.41        0.43     0.40
Employment      0.78     0.77        0.75     0.75
Real GDP        1.03     0.86        0.95     0.83

Notes: This table summarizes results for the four series shown in the first column. The entries under “AR” are the standard errors of one-sided band-pass estimates constructed using forecasts from univariate AR models with six lags. The entries under “VAR” are the standard errors of one-sided band-pass estimates constructed using forecasts from VAR models with six lags for monthly models and four lags for quarterly models. The VAR models included the series of interest, the first difference of inflation, the term spread, and building permits. Monthly models were estimated over 1960:9–2006:11, and quarterly models were estimated over 1961:III–2006:III.


Regressions Using Filtered Data

Suppose $y_t = x_t'\beta + u_t$, where $E(u_t x_t) = 0$.

Consider using the filtered data $y_t^{filtered} = c(L)y_t$ and $x_t^{filtered} = c(L)x_t$.

Write $y_t^{filtered} = x_t^{filtered\prime}\beta + u_t^{filtered}$.

Does $E(x_t^{filtered}u_t^{filtered}) = 0$?

This requires that $x$ be “strictly exogenous.” (The same reasoning argues for NOT doing GLS in time series regression.)
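A small simulation sketch of the point (the data-generating process is illustrative): here $x$ is predetermined but not strictly exogenous, so OLS on the raw data is consistent while OLS on the filtered data is not.

```python
# Filtering both sides of a regression requires strict exogeneity of x.
import numpy as np

rng = np.random.default_rng(2)
T, beta = 50_000, 1.0
u = rng.standard_normal(T)
x = np.r_[0.0, u[:-1]] + rng.standard_normal(T)  # x_t depends on u_{t-1}
y = beta * x + u

c = np.array([0.25, 0.5, 0.25])                  # a simple two-sided filter
xf, yf = np.convolve(x, c, "valid"), np.convolve(y, c, "valid")

print(np.dot(x, y) / np.dot(x, x))      # approx 1.0: E(u_t x_t) = 0 suffices here
print(np.dot(xf, yf) / np.dot(xf, xf))  # approx 1.33: filtering induces bias
```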


3. Multivariate spectra

If $y_t$ is a scalar, $S(\omega) = (2\pi)^{-1}\sum_{j=-\infty}^{\infty}\gamma_j e^{-ij\omega}$ is the spectrum and represents the variance of the complex-valued Cramér increment, $dZ(\omega)$, in $y_t = \int_{-\pi}^{\pi} e^{it\omega}\, dZ(\omega)$. This can be generalized.

Let $Y_t$ be an $n\times 1$ vector with $j$’th autocovariance matrix $\Gamma_j = \mathrm{cov}(Y_t, Y_{t-j})$. Let

$S(\omega) = (2\pi)^{-1}\sum_{j=-\infty}^{\infty}\Gamma_j e^{-ij\omega}$

so that $S(\omega)$ is an $n\times n$ matrix. The spectral representation is as above, but now $dZ(\omega)$ is an $n\times 1$ complex-valued random vector with spectral density matrix $S(\omega)$. $S(\omega)$ can be interpreted as a covariance matrix for the increments $dZ(\omega)$. The diagonal elements of $S(\omega)$ are the (univariate) spectra of the series. The off-diagonal elements are the “cross-spectra.”


The cross-spectra are complex valued, with $S_{ij}(\omega) = \overline{S_{ji}(\omega)}$. Cross-spectra are often summarized using the following: the real part of $S_{ij}(\omega)$ is sometimes called the co-spectrum, and the imaginary part is called the quadrature spectrum. Then consider the following definitions:

$\mathrm{Coherence}_{ij}(\omega) = \frac{S_{ij}(\omega)}{\sqrt{S_{ii}(\omega)S_{jj}(\omega)}}$

$\mathrm{Gain}_{ij}(\omega) = \frac{|S_{ij}(\omega)|}{S_{jj}(\omega)}$

$\mathrm{Phase}_{ij}(\omega) = \tan^{-1}\left(\frac{-\mathrm{Im}(S_{ij}(\omega))}{\mathrm{Re}(S_{ij}(\omega))}\right)$


To interpret these quantities, consider two scalars, $Y$ and $X$, with spectra $S_Y$, $S_X$ and cross-spectrum $S_{YX}$. Consider the regression of $Y_t$ onto leads and lags of $X_t$:

$Y_t = \sum_{j=-\infty}^{\infty} c_j X_{t-j} + u_t = c(L)X_t + u_t$

Because $u$ and $X$ are uncorrelated at all leads and lags: $S_Y(\omega) = |c(e^{i\omega})|^2 S_X(\omega) + S_u(\omega)$.

Moreover, $E(Y_t X_{t-k}) = \sum_{j=-\infty}^{\infty} c_j E(X_{t-j}X_{t-k}) = \sum_{j=-\infty}^{\infty} c_j\gamma_{k-j}$, where $\gamma$ denotes the autocovariances of $X$. Thus

$S_{YX}(\omega) = (2\pi)^{-1}\sum_{k=-\infty}^{\infty} e^{-ik\omega}\sum_{j=-\infty}^{\infty} c_j\gamma_{k-j} = (2\pi)^{-1}\sum_{j=-\infty}^{\infty} c_j e^{-ij\omega}\sum_{l=-\infty}^{\infty}\gamma_l e^{-il\omega} = c(e^{-i\omega})S_X(\omega)$

Thus the gain and phase of the cross-spectrum are recognized as the gain and phase of $c(L)$.
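A tiny numerical sketch of this relation, assuming $X$ is white noise and $c(L) = 0.5L$, so the cross-spectral gain is 0.5 and the phase is $\omega$ (a one-period lag):

```python
# S_YX(w) = c(e^{-iw}) S_X(w); recover the gain and phase of c(L) = 0.5*L.
import numpy as np

omegas = np.linspace(0.01, np.pi, 200)
SX = 1.0 / (2 * np.pi)                    # white-noise spectrum with sig2 = 1
SYX = 0.5 * np.exp(-1j * omegas) * SX     # c(e^{-iw}) * S_X(w)

gain = np.abs(SYX) / SX                   # identically 0.5
phase = np.arctan2(-SYX.imag, SYX.real)   # equals omegas: a one-period lag
```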


4. Spectral Estimation

AR/VAR/ARMA Parametric Estimators: Suppose $\Phi(L)Y_t = \Theta(L)\varepsilon_t$, where $Y$ may be a vector and $\varepsilon_t$ is white noise with covariance matrix $\Sigma_\varepsilon$. The spectral density matrix of $Y$ is

$S_Y(\omega) = (2\pi)^{-1}\Phi(e^{i\omega})^{-1}\Theta(e^{i\omega})\Sigma_\varepsilon\Theta(e^{-i\omega})'\Phi(e^{-i\omega})'^{-1}$

A parametric estimator uses estimated values of the AR and MA parameters and $\Sigma_\varepsilon$.

Example (VAR(1)): $(I - \Phi L)Y_t = \varepsilon_t$

$\hat{S}_Y(\omega) = (2\pi)^{-1}(I - \hat{\Phi}e^{i\omega})^{-1}\hat{\Sigma}_\varepsilon\left((I - \hat{\Phi}e^{-i\omega})^{-1}\right)'$
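A minimal sketch of this estimator with illustrative $\hat{\Phi}$ and $\hat{\Sigma}_\varepsilon$; the $(2\pi)^{-1}$ matches the definition $S(\omega) = (2\pi)^{-1}\sum_j \Gamma_j e^{-ij\omega}$ above:

```python
# Parametric VAR(1) spectral density matrix at frequency w.
import numpy as np

def var1_spectrum(Phi, Sigma, omega):
    n = Phi.shape[0]
    A = np.linalg.inv(np.eye(n) - Phi * np.exp(1j * omega))  # (I - Phi e^{iw})^{-1}
    return A @ (Sigma / (2 * np.pi)) @ A.conj().T

Phi = np.array([[0.5, 0.1],     # illustrative coefficient estimates
                [0.0, 0.8]])
Sigma = np.eye(2)               # illustrative innovation covariance
S0 = var1_spectrum(Phi, Sigma, 0.0)   # the long-run spectral density matrix
```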

Nonparametric Estimators: To be discussed in Lecture 9 in the context of HAC covariance matrix estimators.
