
Solving a Dynamic Equilibrium Model

Jesús Fernández-Villaverde, University of Pennsylvania

1 Basic RBC

• Social planner's problem:

$$\max E \sum_{t=0}^{\infty} \beta^t \left\{ \log c_t + \psi \log (1 - l_t) \right\}$$

$$c_t + k_{t+1} = k_t^{\alpha} \left(e^{z_t} l_t\right)^{1-\alpha} + (1-\delta) k_t, \quad \forall t > 0$$
$$z_t = \rho z_{t-1} + \varepsilon_t, \quad \varepsilon_t \sim \mathcal{N}(0, \sigma)$$

• This is a dynamic optimization problem.

2 Computing the RBC

• The previous problem does not have a known "paper and pencil" solution.

• We will work with an approximation: Perturbation Theory.

• We will undertake a first-order perturbation of the model.

• How well will the approximation work?

3 Equilibrium Conditions

From the household problem + firms' problem + aggregate conditions:

$$\frac{1}{c_t} = \beta E_t \left\{ \frac{1}{c_{t+1}} \left( 1 + \alpha k_{t+1}^{\alpha-1} \left(e^{z_{t+1}} l_{t+1}\right)^{1-\alpha} - \delta \right) \right\}$$
$$\psi \frac{c_t}{1-l_t} = (1-\alpha) k_t^{\alpha} \left(e^{z_t} l_t\right)^{1-\alpha} l_t^{-1}$$
$$c_t + k_{t+1} = k_t^{\alpha} \left(e^{z_t} l_t\right)^{1-\alpha} + (1-\delta) k_t$$
$$z_t = \rho z_{t-1} + \varepsilon_t$$

4 Finding a Deterministic Solution

• We search for the first component of the solution.

• If σ = 0, the equilibrium conditions are:

$$\frac{1}{c_t} = \beta \frac{1}{c_{t+1}} \left( 1 + \alpha k_{t+1}^{\alpha-1} l_{t+1}^{1-\alpha} - \delta \right)$$
$$\psi \frac{c_t}{1-l_t} = (1-\alpha) k_t^{\alpha} l_t^{-\alpha}$$
$$c_t + k_{t+1} = k_t^{\alpha} l_t^{1-\alpha} + (1-\delta) k_t$$

5 Steady State

• The equilibrium conditions imply a steady state:

$$\frac{1}{c} = \beta \frac{1}{c} \left( 1 + \alpha k^{\alpha-1} l^{1-\alpha} - \delta \right)$$
$$\psi \frac{c}{1-l} = (1-\alpha) k^{\alpha} l^{-\alpha}$$
$$c + \delta k = k^{\alpha} l^{1-\alpha}$$

• The first can be written as:

$$\frac{1}{\beta} = 1 + \alpha k^{\alpha-1} l^{1-\alpha} - \delta$$

6 Solving the Steady State

Solution:

$$k = \frac{\mu}{\Omega + \varphi\mu} \qquad l = \varphi k \qquad c = \Omega k \qquad y = k^{\alpha} l^{1-\alpha}$$

where

$$\varphi = \left( \frac{1}{\alpha}\left( \frac{1}{\beta} - 1 + \delta \right) \right)^{\frac{1}{1-\alpha}}, \quad \Omega = \varphi^{1-\alpha} - \delta, \quad \mu = \frac{1-\alpha}{\psi} \varphi^{-\alpha}.$$
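The solution above can be verified numerically by checking that it satisfies the three steady-state equations. A minimal Python sketch; the calibration values are hypothetical, since this slide leaves α, β, δ, ψ free:

```python
alpha, beta, delta, psi = 0.33, 0.99, 0.025, 1.75   # hypothetical calibration

# Steady-state formulas from the slide.
phi = (1.0 / alpha * (1.0 / beta - 1.0 + delta)) ** (1.0 / (1.0 - alpha))
Omega = phi ** (1.0 - alpha) - delta
mu = (1.0 - alpha) / psi * phi ** (-alpha)

k = mu / (Omega + phi * mu)          # steady-state capital
l = phi * k                          # steady-state labor
c = Omega * k                        # steady-state consumption
y = k ** alpha * l ** (1.0 - alpha)  # steady-state output
```

Plugging the computed values back into the Euler equation, the labor condition, and the resource constraint confirms the derivation.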

7 Linearization I

• Loglinearization or linearization?

• Advantages and disadvantages.

• We can linearize and perform a change of variables later.

8 Linearization II

We linearize:

$$\frac{1}{c_t} = \beta E_t \left\{ \frac{1}{c_{t+1}} \left( 1 + \alpha k_{t+1}^{\alpha-1} \left(e^{z_{t+1}} l_{t+1}\right)^{1-\alpha} - \delta \right) \right\}$$
$$\psi \frac{c_t}{1-l_t} = (1-\alpha) k_t^{\alpha} \left(e^{z_t} l_t\right)^{1-\alpha} l_t^{-1}$$
$$c_t + k_{t+1} = k_t^{\alpha} \left(e^{z_t} l_t\right)^{1-\alpha} + (1-\delta) k_t$$
$$z_t = \rho z_{t-1} + \varepsilon_t$$

around l, k, and c with a first-order Taylor expansion.

9 Linearization III

We get:

$$-\frac{1}{c}(c_t - c) = E_t \left\{ -\frac{1}{c}(c_{t+1} - c) + \alpha(1-\alpha)\beta\frac{y}{k} z_{t+1} + \alpha(\alpha-1)\beta\frac{y}{k^2}(k_{t+1} - k) + \alpha(1-\alpha)\beta\frac{y}{kl}(l_{t+1} - l) \right\}$$
$$\frac{1}{c}(c_t - c) + \frac{1}{1-l}(l_t - l) = (1-\alpha) z_t + \frac{\alpha}{k}(k_t - k) - \frac{\alpha}{l}(l_t - l)$$
$$(c_t - c) + (k_{t+1} - k) = y\left( (1-\alpha) z_t + \frac{\alpha}{k}(k_t - k) + \frac{1-\alpha}{l}(l_t - l) \right) + (1-\delta)(k_t - k)$$
$$z_t = \rho z_{t-1} + \varepsilon_t$$

10 Rewriting the System I

Or:

$$\alpha_1 (c_t - c) = E_t \left\{ \alpha_1 (c_{t+1} - c) + \alpha_2 z_{t+1} + \alpha_3 (k_{t+1} - k) + \alpha_4 (l_{t+1} - l) \right\}$$

$$(c_t - c) = \alpha_5 z_t + \frac{\alpha c}{k}(k_t - k) + \alpha_6 (l_t - l)$$

$$(c_t - c) + (k_{t+1} - k) = \alpha_7 z_t + \alpha_8 (k_t - k) + \alpha_9 (l_t - l)$$

$$z_t = \rho z_{t-1} + \varepsilon_t$$

11 Rewriting the System II

where:

$$\alpha_1 = -\frac{1}{c} \qquad \alpha_2 = \alpha(1-\alpha)\beta\frac{y}{k} \qquad \alpha_3 = \alpha(\alpha-1)\beta\frac{y}{k^2} \qquad \alpha_4 = \alpha(1-\alpha)\beta\frac{y}{kl}$$
$$\alpha_5 = (1-\alpha)c \qquad \alpha_6 = -\left(\frac{\alpha}{l} + \frac{1}{1-l}\right)c$$
$$\alpha_7 = (1-\alpha)y \qquad \alpha_8 = \alpha\frac{y}{k} + (1-\delta) \qquad \alpha_9 = (1-\alpha)\frac{y}{l} \qquad y = k^{\alpha}l^{1-\alpha}$$

12 Rewriting the System III

After some algebra the system is reduced to:

$$A(k_{t+1} - k) + B(k_t - k) + C(l_t - l) + Dz_t = 0$$

$$E_t \left( G(k_{t+1} - k) + H(k_t - k) + J(l_{t+1} - l) + K(l_t - l) + Lz_{t+1} + Mz_t \right) = 0$$

$$E_t z_{t+1} = \rho z_t$$

13 Guess Policy Functions

We guess policy functions of the form $(k_{t+1} - k) = P(k_t - k) + Qz_t$ and $(l_t - l) = R(k_t - k) + Sz_t$, plug them in, and get:

$$A\left(P(k_t - k) + Qz_t\right) + B(k_t - k) + C\left(R(k_t - k) + Sz_t\right) + Dz_t = 0$$

$$G\left(P(k_t - k) + Qz_t\right) + H(k_t - k) + J\left(R\left(P(k_t - k) + Qz_t\right) + SNz_t\right) + K\left(R(k_t - k) + Sz_t\right) + (LN + M)z_t = 0$$

14 Solving the System I

Since these need to hold for any value of $(k_t - k)$ or $z_t$, we need to equate each coefficient to zero. On $(k_t - k)$:

$$AP + B + CR = 0$$
$$GP + H + JRP + KR = 0$$

and on $z_t$:

$$AQ + CS + D = 0$$
$$(G + JR)Q + JSN + KS + LN + M = 0$$

15 Solving the System II

• We have a system of four equations in four unknowns.

• To solve it, note that $R = -\frac{1}{C}(AP + B) = -\frac{A}{C}P - \frac{B}{C}$.

• Then:

$$P^2 + \left( \frac{B}{A} + \frac{K}{J} - \frac{GC}{JA} \right) P + \frac{KB - HC}{JA} = 0$$

a quadratic equation in P.

16 Solving the System III

• We have two solutions:

$$P = -\frac{1}{2}\left( \frac{B}{A} + \frac{K}{J} - \frac{GC}{JA} \pm \left( \left( \frac{B}{A} + \frac{K}{J} - \frac{GC}{JA} \right)^2 - 4\frac{KB - HC}{JA} \right)^{0.5} \right)$$

one stable and the other unstable.

• If we pick the stable root and find $R = -\frac{1}{C}(AP + B)$, we have a system of two linear equations in two unknowns, with solution:

$$Q = \frac{-D(JN + K) + CLN + CM}{AJN + AK - CG - CJR}$$
$$S = \frac{-ALN - AM + DG + DJR}{AJN + AK - CG - CJR}$$
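For the scalar case, the whole recipe (stable root of the quadratic, then R, Q, S) fits in a few lines. A minimal Python sketch; any coefficient values used with it are hypothetical, since the slides leave A through N symbolic:

```python
import math

def solve_pqrs(A, B, C, D, G, H, J, K, L, M, N):
    # Quadratic in P: P^2 + (B/A + K/J - GC/(JA)) P + (KB - HC)/(JA) = 0.
    b = B / A + K / J - G * C / (J * A)
    c = (K * B - H * C) / (J * A)
    disc = math.sqrt(b * b - 4.0 * c)
    # Pick the stable root (the slides discard the unstable one).
    P = next(r for r in ((-b - disc) / 2.0, (-b + disc) / 2.0) if abs(r) < 1.0)
    R = -(A * P + B) / C
    denom = A * J * N + A * K - C * G - C * J * R
    Q = (-D * (J * N + K) + C * L * N + C * M) / denom
    S = (-A * L * N - A * M + D * G + D * J * R) / denom
    return P, Q, R, S
```

A quick check: the returned P, Q, R, S should zero out the four coefficient equations on the previous slide.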

17 Practical Implementation

• How do we do this in practice?

• Solving quadratic equations: "A Toolkit for Analyzing Nonlinear Dynamic Stochastic Models Easily" by Harald Uhlig.

• Using Dynare.

18 General Structure of Linearized System

Given m states $x_t$, n controls $y_t$, and k exogenous stochastic processes $z_t$, we have:

$$Ax_t + Bx_{t-1} + Cy_t + Dz_t = 0$$
$$E_t \left( Fx_{t+1} + Gx_t + Hx_{t-1} + Jy_{t+1} + Ky_t + Lz_{t+1} + Mz_t \right) = 0$$
$$E_t z_{t+1} = Nz_t$$

where C is of size $l \times n$ with $l \geq n$ and rank n, F is of size $(m + n - l) \times n$, and N has only stable eigenvalues.

19 Policy Functions

We guess policy functions of the form:

$$x_t = Px_{t-1} + Qz_t$$
$$y_t = Rx_{t-1} + Sz_t$$

where P, Q, R, and S are matrices such that the computed equilibrium is stable.

20 Policy Functions

For simplicity, suppose l = n. See Uhlig for the general case (I have never been in a situation where l = n did not hold).

Then:

1. P satisfies the matrix quadratic equation:

$$\left( F - JC^{-1}A \right) P^2 - \left( JC^{-1}B - G + KC^{-1}A \right) P - KC^{-1}B + H = 0$$

The equilibrium is stable iff max(abs(eig(P))) < 1.

2. R is given by:

$$R = -C^{-1}(AP + B)$$

3. Q satisfies:

$$\left( N' \otimes \left( F - JC^{-1}A \right) + I_k \otimes \left( JR + FP + G - KC^{-1}A \right) \right) \mathrm{vec}(Q) = \mathrm{vec}\left( \left( JC^{-1}D - L \right) N + KC^{-1}D - M \right)$$

4. S satisfies:

$$S = -C^{-1}(AQ + D)$$

22 How to Solve Quadratic Equations

To solve

$$\Psi P^2 - \Gamma P - \Theta = 0$$

for the $m \times m$ matrix P:

1. Define the $2m \times 2m$ matrices:

$$\Xi = \begin{bmatrix} \Gamma & \Theta \\ I_m & 0_m \end{bmatrix}, \quad \Delta = \begin{bmatrix} \Psi & 0_m \\ 0_m & I_m \end{bmatrix}$$

2. Let s be a generalized eigenvector and λ the corresponding generalized eigenvalue of Ξ with respect to Δ. Then we can write $s' = \left[ \lambda x', x' \right]$ for some $x \in \mathbb{R}^m$.

3. If there are m generalized eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_m$ together with generalized eigenvectors $s_1, \ldots, s_m$ of Ξ with respect to Δ, written as $s_i' = \left[ \lambda_i x_i', x_i' \right]$ for some $x_i \in \mathbb{R}^m$, and if $(x_1, \ldots, x_m)$ is linearly independent, then

$$P = \Omega \Lambda \Omega^{-1}$$

is a solution to the matrix quadratic equation, where $\Omega = [x_1, \ldots, x_m]$ and $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_m)$. The solution P is stable if $\max_i |\lambda_i| < 1$. Conversely, any diagonalizable solution P can be written in this way.
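The recipe above can be sketched with NumPy. One simplifying assumption (mine, not the slides'): Ψ is invertible, so the generalized eigenproblem $\Xi s = \lambda \Delta s$ reduces to an ordinary one; for singular Ψ one would need a true generalized eigensolver such as scipy.linalg.eig(Xi, Delta).

```python
import numpy as np

def solve_matrix_quadratic(Psi, Gamma, Theta):
    """Solve Psi P^2 - Gamma P - Theta = 0 for the stable P, via the
    companion matrices Xi and Delta from the slides."""
    m = Psi.shape[0]
    Xi = np.block([[Gamma, Theta],
                   [np.eye(m), np.zeros((m, m))]])
    Delta = np.block([[Psi, np.zeros((m, m))],
                      [np.zeros((m, m)), np.eye(m)]])
    # Delta^{-1} Xi has eigenpairs (lambda, s) with s = [lambda * x; x].
    lam, s = np.linalg.eig(np.linalg.solve(Delta, Xi))
    stable = np.argsort(np.abs(lam))[:m]   # keep the m smallest |lambda|
    Omega = s[m:, stable]                  # lower blocks: the x_i's
    Lam = np.diag(lam[stable])
    return np.real(Omega @ Lam @ np.linalg.inv(Omega))
```

For a scalar test case such as $P^2 - 2.5P + 1 = 0$ (roots 2 and 0.5), the function returns the stable root 0.5.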

24 How to Implement This Solver

Available Code:

1. My own code: undeter1.m.

2. Uhlig’s web page: http://www.wiwi.hu-berlin.de/wpol/html/toolkit.htm

25 An Alternative: Dynare

• What is Dynare? A platform for the solution, simulation, and estimation of DSGE models in Matlab.

• Developed by Michel Juillard and collaborators.

• I am one of them :)

http://www.cepremap.cnrs.fr/dynare/ •

• Dynare takes a more "black box" approach.

• However, you can access the files...

• ...and it is very easy to use.

• Short tutorial.

27 Our Benchmark Model

• We are now ready to compute our benchmark model.

• We begin by finding the steady state.

• As before, a variable x with no time index represents the value of that variable in the steady state.

28 Steady State I

• From the first-order conditions of the household:

$$c^{-\sigma} = \beta c^{-\sigma} (r + 1 - \delta)$$
$$c^{-\sigma} = \beta c^{-\sigma} \frac{R}{\pi}$$
$$\psi l^{\gamma} = c^{-\sigma} w$$

• We forget the money condition because the central bank, through open market operations, will supply all the money needed to support the chosen interest rate.

• Also, we normalize the price level to one.

29 Steady State II

• From the problem of the intermediate good producer:

$$k = \frac{\alpha}{1-\alpha} \frac{w}{r} l$$

• Also:

$$mc = \left(\frac{1}{1-\alpha}\right)^{1-\alpha} \left(\frac{1}{\alpha}\right)^{\alpha} w^{1-\alpha} r^{\alpha}$$
$$\frac{p^*}{p} = \frac{\varepsilon}{\varepsilon - 1} mc$$

where A = 1.

30 Steady State III

• Now, since p* = p:

$$\left(\frac{1}{1-\alpha}\right)^{1-\alpha} \left(\frac{1}{\alpha}\right)^{\alpha} w^{1-\alpha} r^{\alpha} = \frac{\varepsilon - 1}{\varepsilon}$$

• By market clearing:

$$c + \delta k = y = k^{\alpha} l^{1-\alpha}$$

where we have used the fact that x = δk and that A/v = 1.

• The Taylor rule will be trivially satisfied and we can drop it from the computation.

31 Steady State IV

• Our steady-state equations, cancelling redundant constants, are:

$$r = \frac{1}{\beta} - 1 + \delta$$
$$R = \frac{\pi}{\beta}$$
$$\psi l^{\gamma} = c^{-\sigma} w$$
$$k = \frac{\alpha}{1-\alpha} \frac{w}{r} l$$
$$\left(\frac{1}{1-\alpha}\right)^{1-\alpha} \left(\frac{1}{\alpha}\right)^{\alpha} w^{1-\alpha} r^{\alpha} = \frac{\varepsilon - 1}{\varepsilon}$$
$$c + \delta k = k^{\alpha} l^{1-\alpha}$$

• A system of six equations in six unknowns.

32 Solving for the Steady State I

• Note first that:

$$w^{1-\alpha} = (1-\alpha)^{1-\alpha} \alpha^{\alpha} \frac{\varepsilon - 1}{\varepsilon} r^{-\alpha} \;\Rightarrow\; w = (1-\alpha)\, \alpha^{\frac{\alpha}{1-\alpha}} \left(\frac{\varepsilon - 1}{\varepsilon}\right)^{\frac{1}{1-\alpha}} \left(\frac{1}{\beta} - 1 + \delta\right)^{-\frac{\alpha}{1-\alpha}}$$

• Then:

$$\frac{k}{l} = \frac{\alpha}{1-\alpha} \frac{w}{r} = \Omega \;\Rightarrow\; k = \Omega l$$

33 Solving for the Steady State II

• We are left with a system of two equations in two unknowns:

$$\psi l^{\gamma} c^{\sigma} = w$$
$$c + \delta\Omega l = \Omega^{\alpha} l$$

• Substituting $c = (\Omega^{\alpha} - \delta\Omega) l$, we have:

$$\psi \left( \frac{c}{\Omega^{\alpha} - \delta\Omega} \right)^{\gamma} c^{\sigma} = w \;\Rightarrow\; c = \left( \left(\Omega^{\alpha} - \delta\Omega\right)^{\gamma} \frac{w}{\psi} \right)^{\frac{1}{\gamma+\sigma}}$$

34 Steady State

$$r = \frac{1}{\beta} - 1 + \delta \qquad mc = \frac{\varepsilon - 1}{\varepsilon}$$
$$R = \frac{\pi}{\beta} \qquad k = \Omega l$$
$$w = (1-\alpha)\,\alpha^{\frac{\alpha}{1-\alpha}}\left(\frac{\varepsilon - 1}{\varepsilon}\right)^{\frac{1}{1-\alpha}}\left(\frac{1}{\beta} - 1 + \delta\right)^{-\frac{\alpha}{1-\alpha}} \qquad x = \delta k$$
$$c = \left(\left(\Omega^{\alpha} - \delta\Omega\right)^{\gamma}\frac{w}{\psi}\right)^{\frac{1}{\gamma+\sigma}} \qquad y = k^{\alpha} l^{1-\alpha}$$
$$l = \frac{c}{\Omega^{\alpha} - \delta\Omega} \qquad \Omega = \frac{\alpha}{1-\alpha}\frac{w}{r}$$
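These formulas chain together, so the whole steady state can be computed in closed form. A Python sketch under a hypothetical calibration (none of the parameter values below come from the slides):

```python
beta, delta, alpha = 0.99, 0.025, 0.33        # hypothetical calibration
epsilon, gamma, sigma, psi, pi = 10.0, 1.0, 1.0, 8.0, 1.0

r = 1.0 / beta - 1.0 + delta
R = pi / beta
w = ((1.0 - alpha) * alpha ** (alpha / (1.0 - alpha))
     * ((epsilon - 1.0) / epsilon) ** (1.0 / (1.0 - alpha))
     * r ** (-alpha / (1.0 - alpha)))
Omega = alpha / (1.0 - alpha) * w / r
c = ((Omega ** alpha - delta * Omega) ** gamma * w / psi) ** (1.0 / (gamma + sigma))
l = c / (Omega ** alpha - delta * Omega)
k = Omega * l
x = delta * k
y = k ** alpha * l ** (1.0 - alpha)
```

The computed values satisfy the six steady-state equations listed on the previous slides.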

35 Log-Linearizing Equilibrium Conditions

• Take a variable $x_t$.

• Substitute it by $x e^{\hat{x}_t}$, where:

$$\hat{x}_t = \log\frac{x_t}{x}$$

• Notation: a variable $\hat{x}_t$ represents the log-deviation with respect to the steady state.

• Linearize with respect to $\hat{x}_t$.

36 Households Conditions I

• $\psi l_t^{\gamma} = c_t^{-\sigma} w_t$, or $\psi l^{\gamma} e^{\gamma\hat{l}_t} = c^{-\sigma} e^{-\sigma\hat{c}_t} w e^{\hat{w}_t}$, gets loglinearized to:

$$\gamma\hat{l}_t = -\sigma\hat{c}_t + \hat{w}_t$$

• Then:

$$c_t^{-\sigma} = \beta E_t \left\{ c_{t+1}^{-\sigma} (r_{t+1} + 1 - \delta) \right\}$$

or:

$$c^{-\sigma} e^{-\sigma\hat{c}_t} = \beta E_t \left\{ c^{-\sigma} e^{-\sigma\hat{c}_{t+1}} \left( r e^{\hat{r}_{t+1}} + 1 - \delta \right) \right\}$$

that gets loglinearized to:

$$-\sigma\hat{c}_t = -\sigma E_t\hat{c}_{t+1} + \beta r E_t\hat{r}_{t+1}$$

37 Households Conditions II

• Also:

$$c_t^{-\sigma} = \beta E_t \left\{ c_{t+1}^{-\sigma} \frac{R_{t+1}}{\pi_{t+1}} \right\}$$

or:

$$c^{-\sigma} e^{-\sigma\hat{c}_t} = \beta E_t \left\{ c^{-\sigma} e^{-\sigma\hat{c}_{t+1}} \frac{R}{\pi} e^{\hat{R}_{t+1} - \hat{\pi}_{t+1}} \right\}$$

that gets loglinearized to:

$$-\sigma\hat{c}_t = -\sigma E_t\hat{c}_{t+1} + E_t\left( \hat{R}_{t+1} - \hat{\pi}_{t+1} \right)$$

• We do not loglinearize the money condition because the central bank, through open market operations, will supply all the money needed to support the chosen interest rate.

38 Marginal Cost

• We know:

$$mc_t = \left(\frac{1}{1-\alpha}\right)^{1-\alpha}\left(\frac{1}{\alpha}\right)^{\alpha} \frac{w_t^{1-\alpha} r_t^{\alpha}}{A_t}$$

or:

$$mc\, e^{\widehat{mc}_t} = \left(\frac{1}{1-\alpha}\right)^{1-\alpha}\left(\frac{1}{\alpha}\right)^{\alpha} \frac{w^{1-\alpha} r^{\alpha}}{A} e^{-\hat{A}_t + (1-\alpha)\hat{w}_t + \alpha\hat{r}_t}$$

• Loglinearizes to:

$$\widehat{mc}_t = -\hat{A}_t + (1-\alpha)\hat{w}_t + \alpha\hat{r}_t$$

39 Pricing Condition I

• We have:

$$E_t \sum_{\tau=0}^{\infty} (\beta\theta_p)^{\tau} v_{t+\tau} \left( \frac{p^*_{it}}{p_{t+\tau}} - \frac{\varepsilon}{\varepsilon-1} mc_{t+\tau} \right) y^*_{it+\tau} = 0,$$

where

$$y^*_{it+\tau} = \left( \frac{p^*_{it}}{p_{t+\tau}} \right)^{-\varepsilon} y_{t+\tau}$$

40 Pricing Condition II

• Also:

$$E_t \sum_{\tau=0}^{\infty} (\beta\theta_p)^{\tau} v_{t+\tau} \left[ \left(\frac{p^*_{it}}{p_{t+\tau}}\right)^{1-\varepsilon} - \frac{\varepsilon}{\varepsilon-1} mc_{t+\tau} \left(\frac{p^*_{it}}{p_{t+\tau}}\right)^{-\varepsilon} \right] y_{t+\tau} = 0 \;\Rightarrow$$

$$E_t \sum_{\tau=0}^{\infty} (\beta\theta_p)^{\tau} v_{t+\tau} \left(\frac{p^*_{it}}{p_{t+\tau}}\right)^{1-\varepsilon} y_{t+\tau} = \frac{\varepsilon}{\varepsilon-1} E_t \sum_{\tau=0}^{\infty} (\beta\theta_p)^{\tau} v_{t+\tau}\, mc_{t+\tau} \left(\frac{p^*_{it}}{p_{t+\tau}}\right)^{-\varepsilon} y_{t+\tau}$$

41 Working on the Expression I

• If we prepare the expression for loglinearization (eliminating the index i because of the symmetric equilibrium assumption):

$$E_t \sum_{\tau=0}^{\infty} (\beta\theta_p)^{\tau} v \left(\frac{p^*}{p}\right)^{1-\varepsilon} y\, e^{\hat{v}_{t+\tau} + (1-\varepsilon)\hat{p}^*_t - (1-\varepsilon)\hat{p}_{t+\tau} + \hat{y}_{t+\tau}} = \frac{\varepsilon}{\varepsilon-1} E_t \sum_{\tau=0}^{\infty} (\beta\theta_p)^{\tau} v\, mc \left(\frac{p^*}{p}\right)^{-\varepsilon} y\, e^{\hat{v}_{t+\tau} - \varepsilon\hat{p}^*_t + \varepsilon\hat{p}_{t+\tau} + \widehat{mc}_{t+\tau} + \hat{y}_{t+\tau}}$$

42 Working on the Expression II

• Note that $\frac{\varepsilon}{\varepsilon-1} mc = 1$ and $\frac{p^*}{p} = 1$.

• Dropping redundant constants, we get:

$$E_t \sum_{\tau=0}^{\infty} (\beta\theta_p)^{\tau} e^{\hat{v}_{t+\tau} + (1-\varepsilon)\hat{p}^*_t - (1-\varepsilon)\hat{p}_{t+\tau} + \hat{y}_{t+\tau}} = E_t \sum_{\tau=0}^{\infty} (\beta\theta_p)^{\tau} e^{\hat{v}_{t+\tau} - \varepsilon\hat{p}^*_t + \varepsilon\hat{p}_{t+\tau} + \widehat{mc}_{t+\tau} + \hat{y}_{t+\tau}}$$

43 Working on the Expression III

Then:

$$E_t \sum_{\tau=0}^{\infty} (\beta\theta_p)^{\tau} \left( \hat{v}_{t+\tau} + (1-\varepsilon)\hat{p}^*_t - (1-\varepsilon)\hat{p}_{t+\tau} + \hat{y}_{t+\tau} \right) = E_t \sum_{\tau=0}^{\infty} (\beta\theta_p)^{\tau} \left( \hat{v}_{t+\tau} - \varepsilon\hat{p}^*_t + \varepsilon\hat{p}_{t+\tau} + \widehat{mc}_{t+\tau} + \hat{y}_{t+\tau} \right)$$

44 Working on the Expression IV:

$$\Rightarrow\; E_t \sum_{\tau=0}^{\infty} (\beta\theta_p)^{\tau} \left( \hat{p}^*_t - \hat{p}_{t+\tau} \right) = E_t \sum_{\tau=0}^{\infty} (\beta\theta_p)^{\tau} \widehat{mc}_{t+\tau}$$
$$\Rightarrow\; \frac{1}{1-\beta\theta_p}\hat{p}^*_t = E_t \sum_{\tau=0}^{\infty} (\beta\theta_p)^{\tau} \hat{p}_{t+\tau} + E_t \sum_{\tau=0}^{\infty} (\beta\theta_p)^{\tau} \widehat{mc}_{t+\tau}$$
$$\Rightarrow\; \hat{p}^*_t = (1-\beta\theta_p) E_t \sum_{\tau=0}^{\infty} (\beta\theta_p)^{\tau} \hat{p}_{t+\tau} + (1-\beta\theta_p) E_t \sum_{\tau=0}^{\infty} (\beta\theta_p)^{\tau} \widehat{mc}_{t+\tau}$$

45 Working on the Expression V

• Note that:

$$(1-\beta\theta_p) E_t \sum_{\tau=0}^{\infty} (\beta\theta_p)^{\tau} \hat{p}_{t+\tau} = (1-\beta\theta_p)\hat{p}_t + (1-\beta\theta_p)\beta\theta_p E_t\hat{p}_{t+1} + \ldots$$
$$= \hat{p}_t + \beta\theta_p E_t\left(\hat{p}_{t+1} - \hat{p}_t\right) + \ldots$$
$$= \hat{p}_{t-1} + \left(\hat{p}_t - \hat{p}_{t-1}\right) + \beta\theta_p E_t\hat{\pi}_{t+1} + \ldots$$
$$= \hat{p}_{t-1} + E_t \sum_{\tau=0}^{\infty} (\beta\theta_p)^{\tau} \hat{\pi}_{t+\tau}$$

• Then:

$$\hat{p}^*_t = \hat{p}_{t-1} + E_t \sum_{\tau=0}^{\infty} (\beta\theta_p)^{\tau} \hat{\pi}_{t+\tau} + (1-\beta\theta_p) E_t \sum_{\tau=0}^{\infty} (\beta\theta_p)^{\tau} \widehat{mc}_{t+\tau}$$

46 Working on the Expression VI

• Now:

$$\hat{p}^*_t = \hat{p}_{t-1} + \hat{\pi}_t + (1-\beta\theta_p)\widehat{mc}_t + E_t \sum_{\tau=1}^{\infty} (\beta\theta_p)^{\tau} \hat{\pi}_{t+\tau} + (1-\beta\theta_p) E_t \sum_{\tau=1}^{\infty} (\beta\theta_p)^{\tau} \widehat{mc}_{t+\tau}$$

• If we forward the equation one period:

$$E_t\hat{p}^*_{t+1} = \hat{p}_t + E_t \sum_{\tau=0}^{\infty} (\beta\theta_p)^{\tau} \hat{\pi}_{t+1+\tau} + (1-\beta\theta_p) E_t \sum_{\tau=0}^{\infty} (\beta\theta_p)^{\tau} \widehat{mc}_{t+1+\tau}$$

47 Working on the Expression VII

• We multiply it by $\beta\theta_p$:

$$\beta\theta_p E_t\hat{p}^*_{t+1} = \beta\theta_p\hat{p}_t + E_t \sum_{\tau=0}^{\infty} (\beta\theta_p)^{\tau+1} \hat{\pi}_{t+1+\tau} + (1-\beta\theta_p) E_t \sum_{\tau=0}^{\infty} (\beta\theta_p)^{\tau+1} \widehat{mc}_{t+1+\tau}$$

• Then:

$$\hat{p}^*_t - \hat{p}_{t-1} = \beta\theta_p E_t\left( \hat{p}^*_{t+1} - \hat{p}_t \right) + \hat{\pi}_t + (1-\beta\theta_p)\widehat{mc}_t$$

48 Price Index

• Since the price index is equal to:

$$p_t = \left[ \theta_p p_{t-1}^{1-\varepsilon} + (1-\theta_p)\left(p^*_t\right)^{1-\varepsilon} \right]^{\frac{1}{1-\varepsilon}}$$

• we can write:

$$p e^{\hat{p}_t} = \left[ \theta_p p^{1-\varepsilon} e^{(1-\varepsilon)\hat{p}_{t-1}} + (1-\theta_p) p^{1-\varepsilon} e^{(1-\varepsilon)\hat{p}^*_t} \right]^{\frac{1}{1-\varepsilon}} \;\Rightarrow\; e^{(1-\varepsilon)\hat{p}_t} = \theta_p e^{(1-\varepsilon)\hat{p}_{t-1}} + (1-\theta_p) e^{(1-\varepsilon)\hat{p}^*_t}$$

• Loglinearizes to:

$$\hat{p}_t = \theta_p\hat{p}_{t-1} + (1-\theta_p)\hat{p}^*_t \;\Rightarrow\; \hat{\pi}_t = (1-\theta_p)\left( \hat{p}^*_t - \hat{p}_{t-1} \right)$$

49 Evolution of Inflation

• We can put together the price index and the pricing condition:

$$\frac{\hat{\pi}_t}{1-\theta_p} = \beta\theta_p E_t\frac{\hat{\pi}_{t+1}}{1-\theta_p} + \hat{\pi}_t + (1-\beta\theta_p)\widehat{mc}_t$$

or:

$$\hat{\pi}_t = \beta\theta_p E_t\hat{\pi}_{t+1} + (1-\theta_p)\hat{\pi}_t + (1-\theta_p)(1-\beta\theta_p)\widehat{mc}_t$$

• Simplifies to:

$$\hat{\pi}_t = \beta E_t\hat{\pi}_{t+1} + \lambda\left( -\hat{A}_t + (1-\alpha)\hat{w}_t + \alpha\hat{r}_t \right)$$

where $\lambda = \frac{(1-\theta_p)(1-\beta\theta_p)}{\theta_p}$ and $\widehat{mc}_t = -\hat{A}_t + (1-\alpha)\hat{w}_t + \alpha\hat{r}_t$.

50 New Keynesian Phillips Curve

• The expression:

$$\hat{\pi}_t = \beta E_t\hat{\pi}_{t+1} + \lambda\widehat{mc}_t$$

is known as the New Keynesian Phillips Curve.
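A quick check of the curve's forward-looking structure: if marginal cost follows an AR(1), $\widehat{mc}_t = \rho_{mc}\widehat{mc}_{t-1} + e_t$ (ρ_mc hypothetical, as is the rest of this parametrization), the guess $\hat{\pi}_t = \phi\,\widehat{mc}_t$ gives $\phi = \lambda/(1 - \beta\rho_{mc})$, which should agree with the truncated present-value solution $\hat{\pi}_t = \lambda \sum_j (\beta\rho_{mc})^j \widehat{mc}_t$:

```python
beta, theta_p, rho_mc = 0.99, 0.75, 0.9   # hypothetical values
lam = (1.0 - theta_p) * (1.0 - beta * theta_p) / theta_p

# Method of undetermined coefficients vs. truncated forward solution.
phi_closed = lam / (1.0 - beta * rho_mc)
phi_sum = lam * sum((beta * rho_mc) ** j for j in range(5000))
```

The two computations of φ coincide up to the (tiny) truncation error of the geometric sum.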

• Empirical performance?

• Large literature:

1. Lagged inflation versus expected inflation.

2. Measures of marginal cost.

51 Production I

• Now:

$$y e^{\hat{y}_t} = \frac{A e^{\hat{A}_t} k^{\alpha} l^{1-\alpha} e^{\alpha\hat{k}_t + (1-\alpha)\hat{l}_t}}{j^{-\varepsilon} p^{\varepsilon} e^{-\varepsilon\hat{j}_t + \varepsilon\hat{p}_t}}$$

• Cancelling constants:

$$e^{\hat{y}_t} = \frac{e^{\hat{A}_t + \alpha\hat{k}_t + (1-\alpha)\hat{l}_t}}{e^{-\varepsilon\hat{j}_t + \varepsilon\hat{p}_t}}$$

• Then:

$$\hat{y}_t = \hat{A}_t + \alpha\hat{k}_t + (1-\alpha)\hat{l}_t + \varepsilon\left( \hat{j}_t - \hat{p}_t \right)$$

52 Production Function II

• Now we find expressions for the loglinearized values of $\hat{j}_t$ and $\hat{p}_t$:

$$\hat{j}_t = \log j_t - \log j = -\frac{1}{\varepsilon}\log\left( \int_0^1 p_{it}^{-\varepsilon}\, di \right) - \log p$$
$$\hat{p}_t = \log p_t - \log p = \frac{1}{1-\varepsilon}\log\left( \int_0^1 p_{it}^{1-\varepsilon}\, di \right) - \log p$$

• Then, to first order:

$$\hat{j}_t = \frac{1}{p}\int_0^1 \left( p_{it} - p \right) di$$
$$\hat{p}_t = \frac{1}{p}\int_0^1 \left( p_{it} - p \right) di$$

53 Production Function III

• Clearly $\hat{j}_t = \hat{p}_t$.

• Then:

$$\hat{y}_t = \hat{A}_t + \alpha\hat{k}_t + (1-\alpha)\hat{l}_t$$

• No first-order loss of efficiency!

54 Aggregate Conditions I

• We know $c_t + x_t = y_t$, or $c e^{\hat{c}_t} + x e^{\hat{x}_t} = y e^{\hat{y}_t}$, which loglinearizes to:

$$c\hat{c}_t + x\hat{x}_t = y\hat{y}_t$$

• Also $k_{t+1} = (1-\delta)k_t + x_t$, or $k e^{\hat{k}_{t+1}} = (1-\delta) k e^{\hat{k}_t} + x e^{\hat{x}_t}$, which loglinearizes to:

$$k\hat{k}_{t+1} = (1-\delta)k\hat{k}_t + x\hat{x}_t$$

55 Aggregate Conditions II

• Finally:

$$k_t = \frac{\alpha}{1-\alpha}\frac{w_t}{r_t} l_t$$

or:

$$k e^{\hat{k}_t} = \frac{\alpha}{1-\alpha}\frac{w}{r} l\, e^{\hat{w}_t + \hat{l}_t - \hat{r}_t}$$

• Loglinearizes to:

$$\hat{k}_t = \hat{w}_t + \hat{l}_t - \hat{r}_t$$

56 Government

• We have that:

$$\frac{R_{t+1}}{R} = \left(\frac{R_t}{R}\right)^{\gamma_R} \left(\frac{\pi_t}{\pi}\right)^{\gamma_\pi} \left(\frac{y_t}{y}\right)^{\gamma_y} e^{\varphi_t}$$

or:

$$e^{\hat{R}_{t+1}} = e^{\gamma_R\hat{R}_t + \gamma_\pi\hat{\pi}_t + \gamma_y\hat{y}_t + \varphi_t}$$

• Loglinearizes to:

$$\hat{R}_{t+1} = \gamma_R\hat{R}_t + \gamma_\pi\hat{\pi}_t + \gamma_y\hat{y}_t + \varphi_t$$

57 Loglinear System

$$-\sigma\hat{c}_t = E_t\left( -\sigma\hat{c}_{t+1} + \beta r\hat{r}_{t+1} \right)$$
$$-\sigma\hat{c}_t = E_t\left( -\sigma\hat{c}_{t+1} + \hat{R}_{t+1} - \hat{\pi}_{t+1} \right)$$
$$\gamma\hat{l}_t = -\sigma\hat{c}_t + \hat{w}_t$$
$$\hat{y}_t = \hat{A}_t + \alpha\hat{k}_t + (1-\alpha)\hat{l}_t$$
$$y\hat{y}_t = c\hat{c}_t + x\hat{x}_t$$
$$k\hat{k}_{t+1} = (1-\delta)k\hat{k}_t + x\hat{x}_t$$
$$\hat{k}_t = \hat{w}_t + \hat{l}_t - \hat{r}_t$$
$$\hat{R}_{t+1} = \gamma_R\hat{R}_t + \gamma_\pi\hat{\pi}_t + \gamma_y\hat{y}_t + \varphi_t$$
$$\hat{\pi}_t = \beta E_t\hat{\pi}_{t+1} + \lambda\left( -\hat{A}_t + (1-\alpha)\hat{w}_t + \alpha\hat{r}_t \right)$$
$$\hat{A}_t = \rho\hat{A}_{t-1} + z_t$$

a system of 10 equations in 10 variables: $\hat{c}_t, \hat{l}_t, \hat{x}_t, \hat{y}_t, \hat{k}_{t+1}, \hat{w}_t, \hat{r}_t, \hat{R}_{t+1}, \hat{\pi}_t, \hat{A}_t$.

58 Solving the System

• We can put the system in Uhlig's form.

• To do so, we redefine $\hat{R}_{t+1}$ and $\hat{\pi}_t$ as (pseudo) state variables in order to have at most as many control variables as deterministic equations.

• States, controls, and shocks:

$$X_t = \left( \hat{k}_{t+1}, \hat{R}_{t+1}, \hat{\pi}_t \right)'$$
$$Y_t = \left( \hat{c}_t, \hat{l}_t, \hat{x}_t, \hat{y}_t, \hat{w}_t, \hat{r}_t \right)'$$
$$Z_t = \left( z_t, \varphi_t \right)'$$

59 Deterministic Block

$$\gamma\hat{l}_t + \sigma\hat{c}_t - \hat{w}_t = 0$$
$$\hat{y}_t - \hat{A}_t - \alpha\hat{k}_t - (1-\alpha)\hat{l}_t = 0$$
$$y\hat{y}_t - c\hat{c}_t - x\hat{x}_t = 0$$
$$k\hat{k}_{t+1} - (1-\delta)k\hat{k}_t - x\hat{x}_t = 0$$
$$\hat{k}_t - \hat{w}_t - \hat{l}_t + \hat{r}_t = 0$$
$$\hat{R}_{t+1} - \gamma_R\hat{R}_t - \gamma_\pi\hat{\pi}_t - \gamma_y\hat{y}_t - \varphi_t = 0$$

60 Expectational Block

$$\sigma\hat{c}_t + E_t\left( -\sigma\hat{c}_{t+1} + \beta r\hat{r}_{t+1} \right) = 0$$
$$\sigma\hat{c}_t + \hat{R}_{t+1} + E_t\left( -\sigma\hat{c}_{t+1} - \hat{\pi}_{t+1} \right) = 0$$
$$\hat{\pi}_t - \beta E_t\hat{\pi}_{t+1} - \lambda\left( -\hat{A}_t + (1-\alpha)\hat{w}_t + \alpha\hat{r}_t \right) = 0$$

Two stochastic processes:

$$\hat{A}_t = \rho\hat{A}_{t-1} + z_t$$
$$\varphi_t$$

61 Matrices of the Deterministic Block

$$A = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ k & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 1 & -\gamma_\pi \end{bmatrix}, \quad B = \begin{bmatrix} 0 & 0 & 0 \\ -\alpha & 0 & 0 \\ 0 & 0 & 0 \\ -(1-\delta)k & 0 & 0 \\ 1 & 0 & 0 \\ 0 & -\gamma_R & 0 \end{bmatrix}$$

$$C = \begin{bmatrix} \sigma & \gamma & 0 & 0 & -1 & 0 \\ 0 & -(1-\alpha) & 0 & 1 & 0 & 0 \\ -c & 0 & -x & y & 0 & 0 \\ 0 & 0 & -x & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 & -1 & 1 \\ 0 & 0 & 0 & -\gamma_y & 0 & 0 \end{bmatrix}, \quad D = \begin{bmatrix} 0 & 0 \\ -1 & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & -1 \end{bmatrix}$$

62 Matrices of the Expectational Block

$$F = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 0 & -\beta \end{bmatrix}, \quad G = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad H = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$$

$$J = \begin{bmatrix} -\sigma & 0 & 0 & 0 & 0 & \beta r \\ -\sigma & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}, \quad K = \begin{bmatrix} \sigma & 0 & 0 & 0 & 0 & 0 \\ \sigma & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -\lambda(1-\alpha) & -\lambda\alpha \end{bmatrix}$$

$$L = \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ 0 & 0 \end{bmatrix}, \quad M = \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ \lambda & 0 \end{bmatrix}$$

63 Matrices of the Stochastic Process

$$N = \begin{bmatrix} \rho & 0 \\ 0 & 0 \end{bmatrix}$$

64 Solution of the Problem

$$X_t = PX_{t-1} + QZ_t$$
$$Y_t = RX_{t-1} + SZ_t$$

65 Beyond Linearization

• We solved the model using one particular approach.

• How different are the computational answers provided by alternative solution methods for dynamic equilibrium economies?

• Why do we care?

• The stochastic neoclassical growth model is nearly linear for the benchmark calibration.

— Linear methods may be good enough.

• Unsatisfactory answer for many economic questions: we want to use highly nonlinear models.

— Linear methods are not enough.

67 Solution Methods

1. Linearization: levels and logs.

2. Perturbation: levels and logs, different orders.

3. Projection methods: spectral and Finite Elements.

4. Value Function Iteration.

5. Other?

68 Evaluation Criteria

• Accuracy.

• Computational cost.

• Programming cost.

69 What Do We Know about Other Methods?

• Perturbation methods deliver an interesting compromise between accuracy, speed, and programming burden (problem: analytical derivatives).

• Second-order perturbations are much better than linear, with trivial additional computational cost.

• The Finite Elements method is the best for estimation purposes.

• Linear methods can deliver misleading answers.

• Linearization in levels can be better than in logs.

70 A Quick Overview

• Numerous problems in macroeconomics involve functional equations of the form:

$$\mathcal{H}(d) = 0$$

• Examples: Value Function, Euler Equations.

• Regular equations are particular examples of functional equations.

• How do we solve functional equations?

71 Two Main Approaches

1. Projection Methods:

$$d^n(x, \theta) = \sum_{i=0}^{n} \theta_i \Psi_i(x)$$

We pick a basis $\{\Psi_i(x)\}_{i=0}^{\infty}$ and "project" $\mathcal{H}(\cdot)$ against that basis.

2. Perturbation Methods:

$$d^n(x, \theta) = \sum_{i=0}^{n} \theta_i (x - x_0)^i$$

We use implicit-function theorems to find the coefficients $\theta_i$.

72 Solution Methods I: Projection (Spectral)

• Standard reference: Judd (1992).

• Choose a basis for the policy functions.

• Restrict the policy function to be a linear combination of the elements of the basis.

• Plug the policy function in the equilibrium conditions and find the unknown coefficients.

• Use Chebyshev polynomials.

• Pseudospectral (collocation) weighting.

74 Solution Methods II: Projection (Finite Elements)

• Standard reference: McGrattan (1999).

• Bound the domain of the state variables.

• Partition this domain in nonintersecting elements.

• Choose a basis for the policy functions in each element.

• Plug the policy function in the equilibrium conditions and find the unknown coefficients.

• Use a linear basis.

• Galerkin weighting.

• We can be smart in picking our grid.

76 Solution Methods III: Perturbation Methods

• Most complicated problems have particular cases that are easy to solve.

• Often, we can use the solution to the particular case as a building block of the general solution.

• Very successful in physics.

• Judd and Guu (1993) showed how to apply it to economic problems.

77 A Simple Example

• Imagine we want to find the (possibly more than one) roots of:

$$x^3 - 4.1x + 0.2 = 0$$

such that x < 0.

• This is a tricky cubic equation.

• How do we do it?

78 Main Idea

• Transform the problem, rewriting it in terms of a small perturbation parameter.

• Solve the new problem for a particular choice of the perturbation parameter.

• Use the previous solution to approximate the solution of the original problem.

79 Step 1: Transform the Problem

• Write the problem as a perturbation problem indexed by a small parameter ε.

• This step is usually ambiguous since there are different ways to do so.

• A natural, and convenient, choice for our case is to rewrite the equation as:

$$x^3 - (4 + \varepsilon)x + 2\varepsilon = 0$$

where ε ≡ 0.1.

80 Step 2: Solve the New Problem

• Index the solutions as a function of the perturbation parameter, x = g(ε):

$$g(\varepsilon)^3 - (4 + \varepsilon)g(\varepsilon) + 2\varepsilon = 0$$

and assume each of these solutions is smooth (this can be shown to be the case for our particular example).

• Note that ε = 0 is easy to solve:

$$x^3 - 4x = 0$$

which has roots g(0) = −2, 0, 2. Since we require x < 0, we take g(0) = −2.

81 Step 3: Build the Approximated Solution

• By Taylor's theorem:

$$x = g(\varepsilon) = g(0) + \sum_{n=1}^{\infty} \frac{g^{(n)}(0)}{n!}\varepsilon^n$$

• Substitute the solution into the problem and recover the coefficients g(0) and $\frac{g^{(n)}(0)}{n!}$ for n = 1, ... in an iterative way.

• Let's do it!

82 Zeroth -Order Approximation

We just take ε =0. •

Before we found that g (0) = 2. • −

Is this a good approximation? • x3 4.1x +0.2=0 − ⇒ 8+8.2+0.2=0.4 −

It depends! • 83 First -Order Approximation

• Take the derivative of $g(\varepsilon)^3 - (4+\varepsilon)g(\varepsilon) + 2\varepsilon = 0$ with respect to ε:

$$3g(\varepsilon)^2 g'(\varepsilon) - g(\varepsilon) - (4+\varepsilon)g'(\varepsilon) + 2 = 0$$

• Set ε = 0:

$$3g(0)^2 g'(0) - g(0) - 4g'(0) + 2 = 0$$

• But we just found that g(0) = −2, so:

$$8g'(0) + 4 = 0$$

which implies $g'(0) = -\frac{1}{2}$.

84 First-Order Approximation

• By Taylor: $x = g(\varepsilon) \simeq g(0) + \frac{1}{1!}g'(0)\varepsilon$, or:

$$x \simeq -2 - \frac{1}{2}\varepsilon$$

• For our case ε ≡ 0.1:

$$x = -2 - \frac{1}{2}(0.1) = -2.05$$

• Is this a good approximation?

$$x^3 - 4.1x + 0.2 = -8.615125 + 8.405 + 0.2 = -0.010125$$

85 Second-Order Approximation

• Take the derivative of $3g(\varepsilon)^2 g'(\varepsilon) - g(\varepsilon) - (4+\varepsilon)g'(\varepsilon) + 2 = 0$ with respect to ε:

$$6g(\varepsilon)\left(g'(\varepsilon)\right)^2 + 3g(\varepsilon)^2 g''(\varepsilon) - g'(\varepsilon) - g'(\varepsilon) - (4+\varepsilon)g''(\varepsilon) = 0$$

• Set ε = 0:

$$6g(0)\left(g'(0)\right)^2 + 3g(0)^2 g''(0) - 2g'(0) - 4g''(0) = 0$$

• Since g(0) = −2 and $g'(0) = -\frac{1}{2}$, we get:

$$8g''(0) - 2 = 0$$

which implies $g''(0) = \frac{1}{4}$.

86 Second-Order Approximation

• By Taylor: $x = g(\varepsilon) \simeq g(0) + \frac{1}{1!}g'(0)\varepsilon + \frac{1}{2!}g''(0)\varepsilon^2$, or:

$$x \simeq -2 - \frac{1}{2}\varepsilon + \frac{1}{8}\varepsilon^2$$

• For our case ε ≡ 0.1:

$$x = -2 - \frac{1}{2}(0.1) + \frac{1}{8}(0.01) = -2.04875$$

• Is this a good approximation?

$$x^3 - 4.1x + 0.2 = -8.59937523242188 + 8.399875 + 0.2 = 4.997675781240329 \times 10^{-4}$$

87 Some Remarks

• The exact solution (up to machine precision of 14 decimal places) is x = −2.04880884817015.

• A second-order approximation delivers x = −2.04875.

• Relative error: 0.00002872393906.

• Yes, this was a rigged, but suggestive, example.
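The whole worked example is easy to reproduce numerically; the derivative values g(0), g'(0), g''(0) are the ones derived on the slides:

```python
# x^3 - 4.1 x + 0.2 = 0, perturbed as x^3 - (4 + eps) x + 2 eps = 0.
eps = 0.1
g0, g1, g2 = -2.0, -0.5, 0.25           # g(0), g'(0), g''(0) from the slides

x0 = g0                                  # zeroth order
x1 = g0 + g1 * eps                       # first order:  -2.05
x2 = g0 + g1 * eps + 0.5 * g2 * eps**2   # second order: -2.04875

def residual(x):
    """Residual of the original cubic; smaller is better."""
    return x**3 - 4.1 * x + 0.2
```

Each extra order shrinks the residual of the original cubic by roughly two orders of magnitude, matching the numbers on the slides.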

88 A Couple of Points to Remember

1. We transformed the original problem into a perturbation problem in such a way that the zeroth-order approximation has an analytical solution.

2. Solving for the first iteration involves a nonlinear (although trivial in our case) equation. All further iterations only require solving a linear equation in one unknown.

89 An Application in Macroeconomics: Basic RBC

$$\max E_0 \sum_{t=0}^{\infty} \beta^t \log c_t$$
$$c_t + k_{t+1} = e^{z_t} k_t^{\alpha} + (1-\delta)k_t, \quad \forall t > 0$$
$$z_t = \rho z_{t-1} + \sigma\varepsilon_t, \quad \varepsilon_t \sim \mathcal{N}(0, 1)$$

Equilibrium conditions:

$$\frac{1}{c_t} = \beta E_t \left\{ \frac{1}{c_{t+1}} \left( 1 + \alpha e^{z_{t+1}} k_{t+1}^{\alpha-1} - \delta \right) \right\}$$
$$c_t + k_{t+1} = e^{z_t} k_t^{\alpha} + (1-\delta)k_t$$
$$z_t = \rho z_{t-1} + \sigma\varepsilon_t$$

90 Computing the RBC

• We already discussed that the previous problem does not have a known "paper and pencil" solution.

• In one particular case the model has a closed-form solution: δ = 1.

• Why? Because the income and the substitution effect from a productivity shock cancel each other.

• Not very realistic, but we are trying to learn here.

91 Solution

• By "Guess and Verify":

$$c_t = (1 - \alpha\beta) e^{z_t} k_t^{\alpha}$$
$$k_{t+1} = \alpha\beta e^{z_t} k_t^{\alpha}$$

• How can you check? Plug the solution into the equilibrium conditions.

92 Another Way to Solve the Problem

• Now let us suppose that you missed the lecture where "Guess and Verify" was explained.

• You need to compute the RBC.

• What are you searching for? A policy function for consumption:

$$c_t = c(k_t, z_t)$$

and another one for capital:

$$k_{t+1} = k(k_t, z_t)$$

93 Equilibrium Conditions

• We substitute the budget constraint and the law of motion for technology into the equilibrium conditions.

• Then, we have the equilibrium conditions:

$$\frac{1}{c(k_t, z_t)} = \beta E_t \left[ \frac{\alpha e^{\rho z_t + \sigma\varepsilon_{t+1}} k(k_t, z_t)^{\alpha-1}}{c\left(k(k_t, z_t), \rho z_t + \sigma\varepsilon_{t+1}\right)} \right]$$
$$c(k_t, z_t) + k(k_t, z_t) = e^{z_t} k_t^{\alpha}$$

• The Euler equation is the equivalent of $x^3 - 4.1x + 0.2 = 0$ in our simple example, and $c(k_t, z_t)$ and $k(k_t, z_t)$ are the equivalents of x.

94 A Perturbation Approach

• You want to transform the problem.

• Which perturbation parameter? The standard deviation σ.

• Why σ?

• Set σ = 0 ⇒ deterministic model, $z_t = 0$ and $e^{z_t} = 1$.

95 Taylor’s Theorem

• We search for policy functions $c_t = c(k_t, z_t; \sigma)$ and $k_{t+1} = k(k_t, z_t; \sigma)$.

• Equilibrium conditions:

$$E_t \left( \frac{1}{c(k_t, z_t; \sigma)} - \beta \frac{\alpha e^{\rho z_t + \sigma\varepsilon_{t+1}} k(k_t, z_t; \sigma)^{\alpha-1}}{c\left(k(k_t, z_t; \sigma), \rho z_t + \sigma\varepsilon_{t+1}; \sigma\right)} \right) = 0$$
$$c(k_t, z_t; \sigma) + k(k_t, z_t; \sigma) - e^{z_t} k_t^{\alpha} = 0$$

• We will take derivatives with respect to $k_t$, $z_t$, and σ.

96 Asymptotic Expansion of $c_t = c(k_t, z_t; \sigma)$ around (k, 0; 0)

$$c_t = c(k,0;0) + c_k(k,0;0)(k_t - k) + c_z(k,0;0)z_t + c_\sigma(k,0;0)\sigma$$
$$+ \frac{1}{2}c_{kk}(k,0;0)(k_t - k)^2 + \frac{1}{2}c_{kz}(k,0;0)(k_t - k)z_t + \frac{1}{2}c_{k\sigma}(k,0;0)(k_t - k)\sigma$$
$$+ \frac{1}{2}c_{zk}(k,0;0)z_t(k_t - k) + \frac{1}{2}c_{zz}(k,0;0)z_t^2 + \frac{1}{2}c_{z\sigma}(k,0;0)z_t\sigma$$
$$+ \frac{1}{2}c_{\sigma k}(k,0;0)\sigma(k_t - k) + \frac{1}{2}c_{\sigma z}(k,0;0)\sigma z_t + \frac{1}{2}c_{\sigma^2}(k,0;0)\sigma^2 + \ldots$$

97 Asymptotic Expansion of $k_{t+1} = k(k_t, z_t; \sigma)$ around (k, 0; 0)

$$k_{t+1} = k(k,0;0) + k_k(k,0;0)(k_t - k) + k_z(k,0;0)z_t + k_\sigma(k,0;0)\sigma$$
$$+ \frac{1}{2}k_{kk}(k,0;0)(k_t - k)^2 + \frac{1}{2}k_{kz}(k,0;0)(k_t - k)z_t + \frac{1}{2}k_{k\sigma}(k,0;0)(k_t - k)\sigma$$
$$+ \frac{1}{2}k_{zk}(k,0;0)z_t(k_t - k) + \frac{1}{2}k_{zz}(k,0;0)z_t^2 + \frac{1}{2}k_{z\sigma}(k,0;0)z_t\sigma$$
$$+ \frac{1}{2}k_{\sigma k}(k,0;0)\sigma(k_t - k) + \frac{1}{2}k_{\sigma z}(k,0;0)\sigma z_t + \frac{1}{2}k_{\sigma^2}(k,0;0)\sigma^2 + \ldots$$

98 Comment on Notation

• From now on, to save on notation, I will just write:

$$F(k_t, z_t; \sigma) = E_t \begin{bmatrix} \frac{1}{c(k_t,z_t;\sigma)} - \beta \frac{\alpha e^{\rho z_t + \sigma\varepsilon_{t+1}} k(k_t,z_t;\sigma)^{\alpha-1}}{c\left(k(k_t,z_t;\sigma), \rho z_t + \sigma\varepsilon_{t+1}; \sigma\right)} \\ c(k_t,z_t;\sigma) + k(k_t,z_t;\sigma) - e^{z_t}k_t^{\alpha} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$

• Note that:

$$F(k_t, z_t; \sigma) = H\left( c(k_t,z_t;\sigma),\, c\left(k(k_t,z_t;\sigma), z_{t+1}; \sigma\right),\, k(k_t,z_t;\sigma),\, k_t,\, z_t;\, \sigma \right)$$

• I will use $H_i$ to represent the partial derivative of H with respect to the i-th component, and drop the evaluation at the steady state of the functions when we do not need it.

99 Zeroth-Order Approximation

• First, we evaluate σ = 0:

$$F(k_t, 0; 0) = 0$$

• Steady state:

$$\frac{1}{c} = \beta\frac{\alpha k^{\alpha-1}}{c}$$

or:

$$1 = \alpha\beta k^{\alpha-1}$$

100 Steady State

• Then:

$$c = c(k, 0; 0) = (\alpha\beta)^{\frac{\alpha}{1-\alpha}} - (\alpha\beta)^{\frac{1}{1-\alpha}}$$
$$k = k(k, 0; 0) = (\alpha\beta)^{\frac{1}{1-\alpha}}$$

• How good is this approximation?

101 First-Order Approximation

• We take derivatives of $F(k_t, z_t; \sigma)$ around k, 0, and 0.

• With respect to $k_t$:

$$F_k(k, 0; 0) = 0$$

• With respect to $z_t$:

$$F_z(k, 0; 0) = 0$$

• With respect to σ:

$$F_\sigma(k, 0; 0) = 0$$

102 Solving the System I

Remember that:

$$F(k_t, z_t; \sigma) = H\left( c(k_t,z_t;\sigma),\, c\left(k(k_t,z_t;\sigma), z_{t+1}; \sigma\right),\, k(k_t,z_t;\sigma),\, k_t,\, z_t;\, \sigma \right)$$

Then:

$$F_k(k,0;0) = H_1 c_k + H_2 c_k k_k + H_3 k_k + H_4 = 0$$

$$F_z(k,0;0) = H_1 c_z + H_2 (c_k k_z + c_z \rho) + H_3 k_z + H_5 = 0$$

$$F_\sigma(k,0;0) = H_1 c_\sigma + H_2 (c_k k_\sigma + c_\sigma) + H_3 k_\sigma + H_6 = 0$$

103 Solving the System II

• Note that:

$$F_k(k,0;0) = H_1 c_k + H_2 c_k k_k + H_3 k_k + H_4 = 0$$
$$F_z(k,0;0) = H_1 c_z + H_2 (c_k k_z + c_z \rho) + H_3 k_z + H_5 = 0$$

is a quadratic system of four equations in four unknowns: $c_k$, $c_z$, $k_k$, and $k_z$.

• Procedures to solve quadratic systems: Uhlig (1999).

• Why quadratic? Stable and unstable manifold.

104 Solving the System III

• Note that:

$$F_\sigma(k,0;0) = H_1 c_\sigma + H_2 (c_k k_\sigma + c_\sigma) + H_3 k_\sigma + H_6 = 0$$

is a linear and homogeneous system in $c_\sigma$ and $k_\sigma$.

• Hence:

$$c_\sigma = k_\sigma = 0$$

105 Comparison with Linearization

• After Kydland and Prescott (1982), a popular method to solve economic models has been the use of a LQ approximation.

• Close relative: linearization of the equilibrium conditions.

• When properly implemented, linearization, LQ, and first-order perturbation are equivalent.

• Advantages of linearization:

1. Theorems.

2. Higher-order terms.

106 Second-Order Approximation

• We take second-order derivatives of $F(k_t, z_t; \sigma)$ around k, 0, and 0:

$$F_{kk}(k,0;0) = 0 \qquad F_{kz}(k,0;0) = 0 \qquad F_{k\sigma}(k,0;0) = 0$$
$$F_{zz}(k,0;0) = 0 \qquad F_{z\sigma}(k,0;0) = 0 \qquad F_{\sigma\sigma}(k,0;0) = 0$$

• Remember Young's theorem!

107 Solving the System

• We substitute the coefficients that we already know.

• A linear system of 12 equations in 12 unknowns. Why linear?

• Cross-terms in kσ and zσ are zero.

• Conjecture: all the terms with odd powers of σ are zero.

108 Correction for Risk

• We have a term in σ².

• It captures precautionary behavior.

• We do not have certainty equivalence any more!

• Important advantage of the second-order approximation.

109 Higher Order Terms

• We can continue the iteration for as long as we want.

• Often, a few iterations will be enough.

• The level of accuracy depends on the goal of the exercise: Fernández-Villaverde, Rubio-Ramírez, and Santos (2005).

110 A Computer

• In practice you do all these approximations with a computer.

• Burden: analytical derivatives.

• Why are numerical derivatives a bad idea?

• More theoretical point: do the derivatives exist? (Santos, 1992).

111 Code

• First and second order: Matlab and Dynare.

• Higher order: Mathematica, Fortran code by Jin and Judd.

112 An Example

• Let me run a second-order approximation.

• Our choices:

Calibrated Parameters

Parameter   β      α      ρ      σ
Value       0.99   0.33   0.95   0.01

113 Computation

• Steady state:

$$c = (\alpha\beta)^{\frac{\alpha}{1-\alpha}} - (\alpha\beta)^{\frac{1}{1-\alpha}} = 0.388069$$
$$k = (\alpha\beta)^{\frac{1}{1-\alpha}} = 0.1883$$

• First-order components:

$$c_k(k,0;0) = 0.680101 \qquad k_k(k,0;0) = 0.33$$
$$c_z(k,0;0) = 0.388069 \qquad k_z(k,0;0) = 0.1883$$
$$c_\sigma(k,0;0) = 0 \qquad k_\sigma(k,0;0) = 0$$

114 Comparison

$$c_t = 0.6733\, e^{z_t} k_t^{0.33} \qquad c_t \simeq 0.388069 + 0.680101(k_t - k) + 0.388069 z_t$$

and:

$$k_{t+1} = 0.3267\, e^{z_t} k_t^{0.33} \qquad k_{t+1} \simeq 0.1883 + 0.33(k_t - k) + 0.1883 z_t$$
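Because the δ = 1 model has the closed form $c_t = (1-\alpha\beta)e^{z_t}k_t^{\alpha}$ and $k_{t+1} = \alpha\beta e^{z_t}k_t^{\alpha}$, the first-order coefficients reported above can be recovered by differentiating it at the steady state:

```python
alpha, beta = 0.33, 0.99    # the slides' calibration

k_ss = (alpha * beta) ** (1.0 / (1.0 - alpha))
c_ss = (alpha * beta) ** (alpha / (1.0 - alpha)) - k_ss

# Derivatives of the closed-form policies at (k_ss, z = 0).
c_k = (1.0 - alpha * beta) * alpha * k_ss ** (alpha - 1.0)  # consumption wrt k
k_k = alpha * beta * alpha * k_ss ** (alpha - 1.0)          # capital wrt k (= alpha)
c_z = (1.0 - alpha * beta) * k_ss ** alpha                  # consumption wrt z (= c_ss)
k_z = alpha * beta * k_ss ** alpha                          # capital wrt z (= k_ss)
```

Using $\alpha\beta k_{ss}^{\alpha-1} = 1$, these reduce to $k_k = \alpha$, $c_z = c_{ss}$, and $k_z = k_{ss}$, matching the slide's numbers.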

115 Second-Order Terms

$$c_{kk}(k,0;0) = -2.41990 \qquad k_{kk}(k,0;0) = -1.1742$$
$$c_{kz}(k,0;0) = 0.680099 \qquad k_{kz}(k,0;0) = 0.330003$$
$$c_{k\sigma}(k,0;0) = 0 \qquad k_{k\sigma}(k,0;0) = 0$$
$$c_{zz}(k,0;0) = 0.388064 \qquad k_{zz}(k,0;0) = 0.188304$$
$$c_{z\sigma}(k,0;0) = 0 \qquad k_{z\sigma}(k,0;0) = 0$$
$$c_{\sigma^2}(k,0;0) = 0 \qquad k_{\sigma^2}(k,0;0) = 0$$

116 Non-Local Accuracy Test (Judd, 1992, and Judd and Guu, 1997)

Given the Euler equation:

$$\frac{1}{c^i(k_t, z_t)} = \beta E_t \left[ \frac{\alpha e^{z_{t+1}} k^i(k_t, z_t)^{\alpha-1}}{c^i\left(k^i(k_t, z_t), z_{t+1}\right)} \right]$$

we can define:

$$EE^i(k_t, z_t) \equiv 1 - c^i(k_t, z_t)\, \beta E_t \left[ \frac{\alpha e^{z_{t+1}} k^i(k_t, z_t)^{\alpha-1}}{c^i\left(k^i(k_t, z_t), z_{t+1}\right)} \right]$$
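For the closed-form δ = 1 solution, the Euler error should vanish identically, which makes it a convenient sanity check for an EE implementation. In this special case $e^{z_{t+1}}$ cancels between the marginal product and $c(k', z')$, so the conditional expectation needs no quadrature:

```python
alpha, beta = 0.33, 0.99    # the slides' calibration

def c(k, z):
    """Closed-form consumption policy (delta = 1)."""
    import math
    return (1.0 - alpha * beta) * math.exp(z) * k ** alpha

def k_next(k, z):
    """Closed-form capital policy (delta = 1)."""
    import math
    return alpha * beta * math.exp(z) * k ** alpha

def euler_error(k, z):
    kp = k_next(k, z)
    # e^{z'} cancels between alpha*e^{z'}*kp^(alpha-1) and c(kp, z'), so the
    # conditional expectation is available in closed form.
    expectation = alpha * kp ** (alpha - 1.0) / ((1.0 - alpha * beta) * kp ** alpha)
    return 1.0 - c(k, z) * beta * expectation
```

Evaluating the error on any (k, z) pair returns zero up to floating point, confirming the closed form is exact.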

117 Changes of Variables

• We approximated our solution in levels.

• We could have done it in logs.

• Why stop there? Why not in powers of the state variables?

• Judd (2002) has provided methods for changes of variables.

• We apply and extend his ideas to the stochastic neoclassical growth model.

118 A General Transformation

• We look at solutions of the form:

$$k_{t+1}^{\gamma} - k_0^{\gamma} = a\left( k_t^{\zeta} - k_0^{\zeta} \right) + bz_t$$
$$l_t^{\mu} - l_0^{\mu} = c\left( k_t^{\zeta} - k_0^{\zeta} \right) + dz_t$$

• Note that:

1. If γ, ζ, µ, and ϕ are 1 we get the linear representation.

2. As γ, ζ, and µ tend to zero and ϕ is equal to 1, we get the loglinear approximation.

119 Theory

• The first-order solution can be written as:

$$f(x) \simeq f(a) + (x - a) f'(a)$$

• Expand $g(y) = h(f(X(y)))$ around $b = Y(a)$, where $X(y)$ is the inverse of $Y(x)$.

• Then:

$$g(y) = h(f(X(y))) = g(b) + g_{\alpha}(b)\left( Y^{\alpha}(x) - b^{\alpha} \right)$$

where $g_{\alpha} = h_A f_i^A X_{\alpha}^i$ comes from the application of the chain rule.

• From this expression it is easy to see that if we have computed the values of $f_i^A$, then it is straightforward to find the value of $g_{\alpha}$.

120 Coefficients Relation

• Remember that the linear solution is:

$$k_{t+1} - k_0 = a_1(k_t - k_0) + b_1 z_t$$
$$l_t - l_0 = c_1(k_t - k_0) + d_1 z_t$$

• Then we show that:

$$a_3 = \frac{\gamma}{\zeta} k_0^{\gamma-\zeta} a_1 \qquad b_3 = \gamma k_0^{\gamma-1} b_1$$
$$c_3 = \frac{\mu}{\zeta} l_0^{\mu-1} k_0^{1-\zeta} c_1 \qquad d_3 = \mu l_0^{\mu-1} d_1$$
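The chain-rule mapping for a₃ can be verified with a finite difference; every number below (exponents, a₁, c₁, k₀, l₀) is hypothetical:

```python
gamma, zeta, mu = 0.9866, 0.9917, 2.4786     # hypothetical exponents
a1, c1, k0, l0 = 0.95, 0.35, 9.4, 0.31       # hypothetical linear solution

a3 = gamma / zeta * k0 ** (gamma - zeta) * a1
c3 = mu / zeta * l0 ** (mu - 1.0) * k0 ** (1.0 - zeta) * c1

# Finite-difference check of a3: with k' = k0 + a1 * (k - k0), compare
# d(k'^gamma) / d(k^zeta) at k = k0 against the closed-form coefficient.
h = 1e-6
num = ((k0 + a1 * h) ** gamma - k0 ** gamma) / ((k0 + h) ** zeta - k0 ** zeta)
```

The ratio of differentials agrees with the closed-form a₃ up to the O(h) truncation error of the finite difference.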

121 Finding the Parameters γ, ζ, µ and ϕ

• Minimize the Euler equation error over a grid.

• Some optimal results:

Table 6.2.2: Euler Equation Errors

γ          ζ          µ          SEE
1          1          1          0.0856279
0.986534   0.991673   2.47856    0.0279944

122 Sensitivity Analysis

• Different parameter values.

• The most interesting finding is when we change σ:

Table 6.3.3: Optimal Parameters for Different σ's

σ       γ         ζ         µ
0.014   0.98140   0.98766   2.47753
0.028   1.04804   1.05265   1.73209
0.056   1.23753   1.22394   0.77869

• A first-order approximation corrects for changes in variance!

123 A Quasi-Optimal Approximation I

• Sensitivity analysis reveals that for different parametrizations:

$$\gamma \simeq \zeta$$

• This suggests the quasi-optimal approximation:

$$k_{t+1}^{\gamma} - k_0^{\gamma} = a_3\left( k_t^{\gamma} - k_0^{\gamma} \right) + b_3 z_t$$
$$l_t^{\mu} - l_0^{\mu} = c_3\left( k_t^{\gamma} - k_0^{\gamma} \right) + d_3 z_t$$

124 A Quasi-Optimal Approximation II

• Note that if we define $\tilde{k}_t = k_t^{\gamma} - k_0^{\gamma}$ and $\tilde{l}_t = l_t^{\mu} - l_0^{\mu}$ we get:

$$\tilde{k}_{t+1} = a_3\tilde{k}_t + b_3 z_t$$
$$\tilde{l}_t = c_3\tilde{k}_t + d_3 z_t$$

• A linear system:

1. Use for analytical study (Campbell, 1994, and Woodford, 2003).

2. Use for estimation with a Kalman filter.

125 References

General Perturbation theory: Advanced Mathematical Methods for • Scientists and Engineers: Asymptotic Methods and Perturbation The- ory by Carl M. Bender, Steven A. Orszag.

Perturbation in economics: •
1. "Perturbation Methods for General Dynamic Stochastic Models," by Hehui Jin and Kenneth Judd.

2. “Perturbation Methods with Nonlinear Changes of Variables” by Kenneth Judd.

126

A gentle introduction: "Solving Dynamic General Equilibrium Models Using a Second-Order Approximation to the Policy Function," by Stephanie Schmitt-Grohé and Martín Uribe. •

127

[Figures 5.1.1–5.4.20 are plots comparing the solution methods (Linear, Log-Linear, FEM, Chebyshev, Perturbation 2, Perturbation 5, Perturbation 2 (log), Value Function); only the captions are recoverable:]

Figure 5.1.1: Labor Supply at z = 0, τ = 2 / σ = 0.007
Figure 5.1.2: Investment at z = 0, τ = 2 / σ = 0.007
Figure 5.1.3: Labor Supply at z = 0, τ = 50 / σ = 0.035
Figure 5.1.4: Investment at z = 0, τ = 50 / σ = 0.035
Figure 5.2.1: Density of Output, τ = 2 / σ = 0.007
Figure 5.2.2: Density of Capital, τ = 2 / σ = 0.007
Figure 5.2.3: Density of Labor, τ = 2 / σ = 0.007
Figure 5.2.4: Density of Consumption, τ = 2 / σ = 0.007
Figure 5.2.5: Time Series for Output, τ = 2 / σ = 0.007
Figure 5.2.6: Time Series for Capital, τ = 2 / σ = 0.007
Figure 5.2.7: Density of Output, τ = 50 / σ = 0.035
Figure 5.2.8: Density of Capital, τ = 50 / σ = 0.035
Figure 5.2.9: Density of Labor, τ = 50 / σ = 0.035
Figure 5.2.10: Density of Consumption, τ = 50 / σ = 0.035
Figure 5.2.11: Time Series for Output, τ = 50 / σ = 0.035
Figure 5.2.12: Time Series for Capital, τ = 50 / σ = 0.035
Figure 5.3.1: Empirical CDF of den Haan–Marcet Tests, τ = 50 / σ = 0.035
Figure 5.3.2: Empirical CDF of den Haan–Marcet Tests, τ = 50 / σ = 0.035
Figure 5.4.8: Euler Equation Errors at z = 0, τ = 2 / σ = 0.007
Figure 5.4.9: Euler Equation Errors at z = 0, τ = 2 / σ = 0.007
Figure 5.4.10: Euler Equation Errors at z = 0, τ = 2 / σ = 0.007
Figure 5.4.11: Marginal Density of Capital versus Euler Errors at z = 0, τ = 2 / σ = 0.007
Figure 5.4.20: Euler Equation Errors at z = 0, τ = 50 / σ = 0.035

Table 5.4.1: Integral of the Euler Errors (×10⁻⁴)

Linear           0.2291
Log-Linear       0.6306
Finite Elements  0.0537
Chebyshev        0.0369
Perturbation 2   0.0481
Perturbation 5   0.0369
Value Function   0.0224

128

Table 5.4.2: Integral of the Euler Errors (×10⁻⁴)

Linear                7.12
Log-Linear            24.37
Finite Elements       0.34
Chebyshev             0.22
Perturbation 2        7.76
Perturbation 5        8.91
Perturbation 2 (log)  6.47
Value Function        0.32

129

[Figures 6.2.1–6.2.3 plot Euler equation errors; only the captions and legends are recoverable:]

Figure 6.2.1: Euler Equation Errors at z = 0, τ = 2 / σ = 0.007 (Linear vs. Optimal Change)
Figure 6.2.2: Euler Equation Errors at z = 0, τ = 2 / σ = 0.007 (all methods plus Optimal Change)
Figure 6.2.3: Euler Equation Errors at z = 0, τ = 2 / σ = 0.007 (Linear, Optimal Change, Restricted Optimal Change)

Computing Time and Reproducibility

How do the methods compare? •

Web page: •

www.econ.upenn.edu/~jesusfv/companion.htm

130