
Macroeconomics I
University of Tokyo

Lecture 1: Time Series (LS, Chapter 2)

Julen Esteban-Pretel
National Graduate Institute for Policy Studies

Outline of Lecture 1

§ Definitions.
  • Stochastic processes.
  • Stationary processes.
§ Markov Chains.
§ First-order stochastic linear difference equations.
  • Examples:
    - AR(2).
    - ARMA(1,1).
    - VAR.
  • First and second moments.
  • Impulse response functions.

Stochastic Processes

§ We will only focus on discrete-time processes.

§ Let xt be a vector of random variables.

§ Def: A stochastic process is a time-ordered sequence of random variables. We write it as {xt}; the starting date is t = 0 (some authors start at t = 1).

§ Def: A stochastic process is stationary if the joint distribution does not change over time.

Covariance-Stationary Processes

§ Def: A stochastic process {xt} is covariance (or weakly) stationary if:

i. E(xt) does not depend on time, and

ii. Cov(xt+j, xt) exists, is finite, and depends only on j but not on t, for all j, t.

§ Def: The j-th order autocovariance, Cx(j), is

Cx(j) ≡ Cov(xt+j, xt) for t = 0, 1, 2, ... (1.1)

§ Cx(j) does not depend on t because of covariance stationarity.

§ Also due to covariance stationarity, Cx(j) satisfies

Cx(j) = Cx(-j)'. (1.2)

§ The 0-th order autocovariance is the variance:

Cx(0) = Var(xt). (1.3)
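§ A minimal numpy sketch (not from LS; the i.i.d. Gaussian series below is a hypothetical example) of the sample analogue of Cx(j) for a scalar covariance-stationary series:

```python
import numpy as np

def sample_autocov(x, j):
    """Sample analogue of Cx(j) = Cov(xt+j, xt) for a scalar series."""
    x = np.asarray(x, dtype=float)
    xbar, T = x.mean(), len(x)
    # Average of (x_{t+j} - xbar)(x_t - xbar) over the available overlap.
    return np.mean((x[j:] - xbar) * (x[:T - j] - xbar))

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)        # i.i.d. N(0,1): covariance stationary
print(sample_autocov(x, 0))        # ~ 1 = Var(xt), eq. (1.3)
print(sample_autocov(x, 1))        # ~ 0: no serial correlation
```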

White Noise, MDS and i.i.d.

§ Def: A process {wt} is white noise if it is covariance stationary with zero mean and with no serial correlation (i.e., Cw(j) = 0 for j ≠ 0).

§ Def: A process {wt} is a martingale difference sequence (mds) if E(wt+1 | wt, wt-1, ...) = 0.

§ Def: A process {wt} is said to be i.i.d. if it is independent and identically distributed.

§ "{wt} is i.i.d. with mean zero and finite second moments" ⇒ "{wt} is a stationary mds with finite second moments" ⇒ "{wt} is white noise".

Markov Processes

§ Def: A stochastic process {xt} is said to have the Markov property (or to be a Markov process) if for all k ≥ 1 and all t,

Prob(xt+1 | xt, xt-1, ..., xt-k) = Prob(xt+1 | xt). (1.4)

§ Def: A state space is the space in which the possible values of each xt lie.

Markov Chains

§ Def: A Markov chain is a Markov process whose state space is a discrete set. We will focus on Markov chains defined over finite sets.

§ Def: A time-invariant Markov chain is described by a triple of objects:

i. An n-dimensional state space, ei, i = 1, 2, ..., n, where ei is an n x 1 unit vector whose i-th entry is 1 and all other entries are zero.

ii. An n x n transition matrix, P, which records the probabilities of moving from one state to another in one period. The elements of P are

Pij = Prob(xt+1 = ej | xt = ei). (1.5)

iii. An n x 1 vector, π0, of initial conditions, which specifies the probability of being in each state at date 0:

π0i = Prob(x0 = ei). (1.6)

§ We will assume that:

- For i = 1, 2, ..., n, the matrix P satisfies Σ_{j=1}^{n} Pij = 1. (1.7)
  A matrix P satisfying this property is called a stochastic matrix.

- The vector π0 satisfies Σ_{i=1}^{n} π0i = 1. (1.8)

A vector π0 satisfying this property is called a probability vector.
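§ As a numerical illustration, a minimal sketch encoding the triple (state space, P, π0) and simulating one sample path; the 3-state transition matrix and initial distribution below are hypothetical, not from the lecture:

```python
import numpy as np

# Hypothetical 3-state chain: P[i, j] = Prob(x_{t+1} = e_j | x_t = e_i), eq. (1.5)
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])
pi0 = np.array([1.0, 0.0, 0.0])       # initial distribution, eq. (1.6)

assert np.allclose(P.sum(axis=1), 1)  # stochastic matrix, eq. (1.7)
assert np.isclose(pi0.sum(), 1)       # probability vector, eq. (1.8)

rng = np.random.default_rng(1)
state = rng.choice(3, p=pi0)          # draw x_0 from pi0
path = [state]
for _ in range(20):
    state = rng.choice(3, p=P[state])  # draw x_{t+1} from row x_t of P
    path.append(state)
print(path)                            # indices i of the visited unit vectors e_i
```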

Markov Chains (Cont.)

§ A stochastic matrix, P, defines the probability of moving from each value of the state to any other in one period.

§ The probability of moving from one value of the state to any other in k periods is given by P^k:

Pij^(k) = Prob(xt+k = ej | xt = ei), (1.9)

where Pij^(k) is the (i, j) element of P^k.

§ The unconditional probability πti = Prob(xt = ei) is the i-th element of the n x 1 vector πt, which is given by

πt' = π0' P^t. (1.10)

§ To derive (1.10) we use the following formulas from probability theory:

p(x) = Σ_y p(x, y),   p(x, y) = p(x | y) p(y), (1.11)

where p(x) ≡ Prob(X = x) and p(x, y) ≡ Prob(X = x, Y = y).
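§ Continuing the hypothetical chain above, P^k and πt' = π0' P^t from (1.9)-(1.10) are one matrix power away:

```python
import numpy as np
from numpy.linalg import matrix_power

P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])
pi0 = np.array([1.0, 0.0, 0.0])

Pk = matrix_power(P, 5)                # k-step transition probs P^(k), eq. (1.9)
pit = pi0 @ matrix_power(P, 5)         # pi_t' = pi_0' P^t, eq. (1.10)
print(Pk[0], pit)                      # equal here, since x_0 = e_1 puts pi_5 in row 0 of P^5
```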

Stationary Distributions

§ According to (1.10), the unconditional distributions evolve as

πt+1' = πt' P. (1.12)

§ Def: An unconditional distribution is stationary or invariant if πt+1 = πt.

§ Hence, if π is a stationary distribution, it satisfies

π' = π' P, (1.13)

or π = P' π,

or (In - P') π = 0, (1.14)

which implies that π is an eigenvector associated with a unit eigenvalue of P'.

§ If π0 is a stationary distribution, then the Markov chain is a stationary Markov process.
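§ A sketch of how (1.13)-(1.14) can be computed: π is the eigenvector of P' associated with the unit eigenvalue, normalized to a probability vector (same hypothetical P as above):

```python
import numpy as np

P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])

vals, vecs = np.linalg.eig(P.T)        # (In - P') pi = 0, eq. (1.14)
i = np.argmin(np.abs(vals - 1.0))      # locate the unit eigenvalue
pi = np.real(vecs[:, i])
pi = pi / pi.sum()                     # normalize into a probability vector
print(pi, np.allclose(pi @ P, pi))     # pi' = pi' P, eq. (1.13)
```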

Asymptotic Stationarity

§ Def: For a given arbitrary π0, a Markov chain is asymptotically stationary if

lim_{t→∞} πt = π∞, where π∞ satisfies (1.14).

§ Def: A Markov chain is asymptotically stationary with a unique stationary distribution if the limit is independent of the initial distribution π0.

§ We call π∞ a stationary distribution or an invariant distribution of P.

§ Theorem 1: Let P be a stochastic matrix with Pij > 0 ∀ (i, j). Then P has a unique stationary distribution and the process is asymptotically stationary.

§ Theorem 2: Let P be a stochastic matrix for which Pij^(n) > 0 ∀ (i, j) for some value of n ≥ 1. Then P has a unique stationary distribution, and the process is asymptotically stationary.
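§ A quick check of Theorem 2 for the hypothetical P above: every entry of P^2 is strictly positive, and iterating (1.12) from two different initial distributions reaches the same limit:

```python
import numpy as np

P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])
# Theorem 2 applies with n = 2: every entry of P^2 is strictly positive.
assert (np.linalg.matrix_power(P, 2) > 0).all()

for pi0 in (np.array([1., 0., 0.]), np.array([0., 0., 1.])):
    pit = pi0
    for _ in range(500):               # iterate pi_{t+1}' = pi_t' P, eq. (1.12)
        pit = pit @ P
    print(pit)                         # same limit for both initial conditions
```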

Stochastic Linear Difference Equations

§ A first-order stochastic linear difference equation is an example of a continuous-state Markov process.

xt+1 = Ao xt + C wt+1, t = 0, 1, 2, ..., (1.15)

where xt+1 is n x 1, Ao is n x n, C is n x m, and wt+1 is m x 1.

§ {wt} is either white noise, a stationary mds, or i.i.d. with zero mean, with E(wt wt') = Im.

§ We can append an observation equation yt = Gxt and use the augmented system

xt+1 = Aoxt + Cwt+1 (1.16a)

yt = Gxt. (1.16b)

where yt is the vector of variables observed at time t.

§ The system (1.16) is called a linear state-space system.
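§ A minimal simulation sketch of the state-space system (1.16); the matrices Ao, C, G and the Gaussian i.i.d. shocks are hypothetical choices, not from the lecture:

```python
import numpy as np

Ao = np.array([[0.8, 0.1],
               [0.0, 0.5]])            # hypothetical stable transition matrix
C  = np.array([[1.0],
               [0.5]])                 # n = 2 states, m = 1 shock
G  = np.array([[1.0, 0.0]])            # observe the first state only

rng = np.random.default_rng(2)
T = 100
x = np.zeros((T + 1, 2))               # x_0 = 0
for t in range(T):
    w = rng.standard_normal(1)         # E(wt wt') = Im
    x[t + 1] = Ao @ x[t] + C @ w       # state equation (1.16a)
y = x @ G.T                            # observation equation (1.16b)
print(y[:5].ravel())
```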

Stochastic Linear Difference Equations (cont.)

§ Re-stating the first-order stochastic linear difference equation

xt+1 = Aoxt + Cwt+1 (1.15)

§ Iterating (1.15) forward from t = 0, we obtain

xt = Ao^t x0 + Σ_{j=0}^{t-1} Ao^j C wt-j. (1.17)

§ Hence, if x0 is uncorrelated with wt (t = 1,2,...), then xt is uncorrelated with wt+j ( j = 1,2,...).
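§ A numerical check (hypothetical matrices again) that the closed form (1.17) reproduces the recursion (1.15):

```python
import numpy as np
from numpy.linalg import matrix_power

rng = np.random.default_rng(3)
Ao = np.array([[0.8, 0.1],
               [0.0, 0.5]])
C  = np.array([[1.0],
               [0.5]])
x0 = np.array([1.0, -1.0])
w  = rng.standard_normal((10, 1))      # w_1, ..., w_10

x = x0.copy()
for t in range(10):                    # recursion (1.15)
    x = Ao @ x + C @ w[t]

# closed form (1.17): x_t = Ao^t x_0 + sum_{j=0}^{t-1} Ao^j C w_{t-j}
xt = matrix_power(Ao, 10) @ x0 + sum(
    matrix_power(Ao, j) @ C @ w[9 - j] for j in range(10))
print(np.allclose(x, xt))              # True
```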

Example 1: AR(2)

xt+1 = Aoxt + Cwt+1 (1.16a)

yt = Gxt. (1.16b)

Scalar second-order autoregression.

§ Assume that zt and wt are scalar processes and that

zt+1 = α + ρ1 zt + ρ2 zt-1 + wt+1. (1.18)

§ We can represent this relationship as a first-order system:

[ zt+1 ]   [ ρ1  ρ2  α ] [ zt   ]   [ 1 ]
[ zt   ] = [ 1   0   0 ] [ zt-1 ] + [ 0 ] wt+1,   (1.19a)
[ 1    ]   [ 0   0   1 ] [ 1    ]   [ 0 ]

zt = [1 0 0] [ zt  zt-1  1 ]'.   (1.19b)

§ Hence an AR(2) can be written as a first-order stoch. linear diff. eq. where

xt = [ zt  zt-1  1 ]', with Ao and C as in (1.19a), yt = zt, and G = [1 0 0].
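§ A sketch verifying that the companion form (1.19) reproduces the scalar recursion (1.18); the coefficients α, ρ1, ρ2 below are hypothetical:

```python
import numpy as np

alpha, rho1, rho2 = 0.5, 1.2, -0.3     # hypothetical AR(2) coefficients
Ao = np.array([[rho1, rho2, alpha],
               [1.0,  0.0,  0.0],
               [0.0,  0.0,  1.0]])
C = np.array([1.0, 0.0, 0.0])
G = np.array([1.0, 0.0, 0.0])

rng = np.random.default_rng(4)
z, z_lag = 0.0, 0.0
x = np.array([z, z_lag, 1.0])          # x_t = [zt, zt-1, 1]'
for _ in range(50):
    w = rng.standard_normal()
    z, z_lag = alpha + rho1 * z + rho2 * z_lag + w, z   # eq. (1.18)
    x = Ao @ x + C * w                                  # eq. (1.19a)
    assert np.isclose(G @ x, z)                         # yt = zt, eq. (1.19b)
print("companion form matches the AR(2) recursion")
```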

Example 2: ARMA(1,1)

xt+1 = Aoxt + Cwt+1 (1.16a)

yt = Gxt. (1.16b) First-order scalar mixed moving average and autoregression

§ Assume that zt and wt are scalar processes and that zt+1 = ⇥zt + wt+1 + wt. (1.20) § We can represent this relationship as a first-order system z ⇥ z 1 t+1 = t + w , w 0 0 w 1 t+1 (1.21a) t+1⇥ ⇥ t⇥ ⇥ z z = [1 0 ] t . t w (1.21b) t⇥ § Hence an ARMA(1,1) can be written as in first-order stoch. linear diff. eq.: z ⇥ 1 x = t , A = , C = , y = z , G = [1 0 ] w o 0 0 1 t t t⇥ ⇥ ⇥ Core Macro I - Spring 2013 Lecture 1: Time Series 14 Example 3: VAR

Example 3: VAR

xt+1 = Aoxt + Cwt+1 (1.16a)

yt = Gxt. (1.16b)

n-dimensional 4-th order vector autoregression.

§ Let zt be an n x 1 vector of random variables, wt+1 an mds with x0' = [ z0'  z-1'  z-2'  z-3' ], and Aj an n x n matrix for each j:

zt+1 = Σ_{j=1}^{4} Aj zt+1-j + Cy wt+1, (1.22)

where zt+1 is n x 1, each Aj is n x n, Cy is n x n, and wt+1 is n x 1.

§ We can represent this relationship as a first-order system:

[ zt+1 ]   [ A1  A2  A3  A4 ] [ zt   ]   [ Cy ]
[ zt   ]   [ In  0   0   0  ] [ zt-1 ]   [ 0  ]
[ zt-1 ] = [ 0   In  0   0  ] [ zt-2 ] + [ 0  ] wt+1.   (1.23)
[ zt-2 ]   [ 0   0   In  0  ] [ zt-3 ]   [ 0  ]
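§ A sketch of the block-companion construction in (1.23) for a generic lag order p (here p = 4 with hypothetical coefficient matrices):

```python
import numpy as np

def companion(A_list, Cy):
    """Stack an n-dim VAR(p), z_{t+1} = sum_j A_j z_{t+1-j} + Cy w_{t+1},
    into the first-order form x_{t+1} = Ao x_t + C w_{t+1} of eq. (1.23)."""
    p = len(A_list)
    n = A_list[0].shape[0]
    Ao = np.zeros((n * p, n * p))
    Ao[:n, :] = np.hstack(A_list)          # first block row: [A1 A2 ... Ap]
    Ao[n:, :-n] = np.eye(n * (p - 1))      # identity blocks shift the lags down
    C = np.zeros((n * p, Cy.shape[1]))
    C[:n, :] = Cy
    return Ao, C

n = 2
rng = np.random.default_rng(6)
A_list = [0.1 * rng.standard_normal((n, n)) for _ in range(4)]  # hypothetical A1..A4
Ao, C = companion(A_list, np.eye(n))
print(Ao.shape, C.shape)                   # (8, 8) (8, 2)
```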

First and Second Order Moments

xt+1 = Ao xt + C wt+1, t = 0, 1, 2, ..., (1.15)

where xt+1 is n x 1, Ao is n x n, C is n x m, and wt+1 is m x 1.

§ Let µt ≡ E(xt) (n x 1) and Σt ≡ Var(xt) (n x n).

§ Taking the unconditional expectation of both sides of (1.15),

µt+1 = Ao µt. (1.24)

§ Taking the variance of both sides of (1.15) and noting, as was shown before, that Cov(xt, wt+1) = 0,

Σt+1 = Ao Σt Ao' + CC'. (1.25)

§ If the process is covariance stationary, then µt = µ and Σt = Σ.

§ Def: Ao is said to be stable if all of its eigenvalues are strictly less than 1 in modulus.
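§ A sketch iterating the moment recursions (1.24)-(1.25) for a hypothetical stable Ao; the mean converges to 0 and the variance to a fixed point:

```python
import numpy as np

Ao = np.array([[0.8, 0.1],
               [0.0, 0.5]])
C  = np.array([[1.0],
               [0.5]])
assert np.all(np.abs(np.linalg.eigvals(Ao)) < 1)   # Ao is stable

mu = np.array([1.0, 1.0])              # arbitrary mu_0
Sigma = np.zeros((2, 2))               # arbitrary Sigma_0
for _ in range(200):
    mu = Ao @ mu                       # eq. (1.24)
    Sigma = Ao @ Sigma @ Ao.T + C @ C.T  # eq. (1.25)
print(mu)                              # -> 0 because Ao is stable
print(Sigma)                           # -> fixed point, eq. (1.30) below
```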

Mean of a Covariance-Stationary Process

Recall: µt ≡ E(xt), Σt ≡ Var(xt), µt+1 = Ao µt, Σt+1 = Ao Σt Ao' + CC'.

§ Assume that {xt} is covariance stationary and Ao is a stable matrix.

§ The 1st and 2nd moments for this process are:

§ Mean:

µ = Ao µ. (1.26)

§ If Ao is stable (so it has no unit eigenvalue), then µ = 0 is the only solution of (1.26).

§ A covariance-stationary process with non-zero µ can be represented as

xt+1 - µ = Ao (xt - µ) + C wt+1, (1.27)

or

xt+1 = c + Ao xt + C wt+1, (1.28)

with c ≡ (In - Ao) µ.

Variance of a Covariance-Stationary Process

Recall: µt ≡ E(xt), Σt ≡ Var(xt), µt+1 = Ao µt, Σt+1 = Ao Σt Ao' + CC'.

§ Variance:

§ Postmultiplying both sides of (1.27) by (xt+1 - µ)', taking expectations, and noting that Cov(xt, wt+1) = 0 and Var(wt+1) = Im,

E(xt+1 - µ)(xt+1 - µ)' = Ao E(xt - µ)(xt - µ)' Ao' + CC', (1.29)

Σ = Ao Σ Ao' + CC'. (1.30)

§ If Ao is stable, then the solution Σ of (1.30) exists and is unique.

§ If µ0 = 0 and Σ0 = Σ, then the process is covariance stationary. So Cx(0) = Σ.

Autocovariance of a Cov-Stationary Process

Recall: µt ≡ E(xt), Σt ≡ Var(xt), µt+1 = Ao µt, Σt+1 = Ao Σt Ao' + CC'.

§ Autocovariance:

§ Using (1.24) in (1.15), setting it at time t + j and iterating backwards, we get

xt+j - µ = Ao^j (xt - µ) + C wt+j + Ao C wt+j-1 + ... + Ao^{j-1} C wt+1. (1.31)

§ Postmultiplying both sides of (1.31) by (xt - µ)' and taking expectations, we obtain

Cx(j) = Ao^j Cx(0). (1.32)
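§ Continuing the same hypothetical system, the fixed point of (1.25) gives Cx(0) = Σ, and (1.32) then delivers the whole autocovariance sequence:

```python
import numpy as np
from numpy.linalg import matrix_power

Ao = np.array([[0.8, 0.1],
               [0.0, 0.5]])
C  = np.array([[1.0],
               [0.5]])

Sigma = np.zeros((2, 2))
for _ in range(200):                   # iterate (1.25) to the fixed point (1.30)
    Sigma = Ao @ Sigma @ Ao.T + C @ C.T

Cx = [matrix_power(Ao, j) @ Sigma for j in range(5)]   # eq. (1.32)
print(Cx[1])                           # Cov(x_{t+1}, x_t) at stationarity
```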

Asymptotic (Cov.) Stat. of the 1st Order System

§ Recall that xt = Ao^t x0 + Σ_{j=0}^{t-1} Ao^j C wt-j. (1.17)

§ If Ao is stable, then Ao^t → 0 as t → ∞. Hence xt converges to

xt = Σ_{j=0}^{∞} Ao^j C wt-j. (1.33)

§ This process is covariance stationary.

• Hence, if Ao is stable, then the 1st-order system is asymptotically covariance-stationary.

§ Furthermore, if {wt} is stationary, then (1.33) is stationary. • Hence, the 1st-order system is asymptotically stationary.

The Impulse Response Function

Recall: xt+1 = Ao xt + C wt+1, t = 0, 1, 2, .... (1.15)

§ Def: ∂xt+j/∂wt as a function of j is called the impulse response function (it does not depend on time because Ao is time invariant).

§ Iterating (1.15) forward from t - 1, we get

xt = Ao xt-1 + C wt,
xt+1 = Ao^2 xt-1 + Ao C wt + C wt+1,
...
xt+j = Ao^{j+1} xt-1 + Ao^j C wt + Ao^{j-1} C wt+1 + ... + Ao C wt+j-1 + C wt+j.

§ So the impulse response function is ∂xt+j/∂wt = Ao^j C.
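§ A minimal sketch computing the impulse response Ao^j C for the hypothetical system used above:

```python
import numpy as np
from numpy.linalg import matrix_power

Ao = np.array([[0.8, 0.1],
               [0.0, 0.5]])
C  = np.array([[1.0],
               [0.5]])

# Response of x_{t+j} to a unit impulse in w_t: Ao^j C, j = 0, 1, 2, ...
irf = np.array([matrix_power(Ao, j) @ C for j in range(20)])
print(irf[:3, :, 0])    # dies out geometrically because Ao is stable
```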
