Dynamic Economics Quantitative Methods and Applications to Macro and Micro
Dynamic Economics: Quantitative Methods and Applications to Macro and Micro
Jérôme Adda and Nicola Pavoni
MACT1 2003-2004

Overview

- Dynamic programming theory
  - Contraction mapping theorem
  - Euler equation
- Numerical methods
- Econometric methods
- Applications

Numerical Methods: Cake Eating Examples

- Deterministic cake eating:

    V(K) = max_{c} u(c) + β V(K − c)

  with K ≥ 0 the size of the cake and c ≥ 0 the amount of cake consumed.

- Stochastic cake eating:

    V(K, y) = max_{c} u(c) + β E_{y'} V(K', y'),   K' = K − c + y

- Discrete cake eating:

    V(K, ε) = max[ u(K, ε), β E_{ε'} V(ρK, ε') ],   ρ ∈ [0, 1]

How Do We Solve These Models?

- There is not necessarily a closed-form solution for V(·).
- We therefore rely on numerical approximations.

Solution Methods

- Value function iteration (contraction mapping theorem)
- Policy function iteration (contraction mapping theorem)
- Projection methods (Euler equation)

Value Function Iteration

- Iterate on

    V_n(S) = max_{action} u(action, S) + β E V_{n−1}(S'),   i.e.   V_n(·) = T V_{n−1}(·)

- Take advantage of the contraction mapping theorem. If T is the contraction operator, we use the fact that

    d(V_n, V_{n−1}) ≤ β d(V_{n−1}, V_{n−2})   and   V_n(·) = T^n V_0(·)

  This guarantees that:
  1. successive iterations converge to the (unique) fixed point;
  2. the starting guess V_0 can be arbitrary.

- Successive iterations:
  - Start with a given V_0(·); usually V_0(·) = 0.
  - Compute V_1(·) = T V_0(·).
  - Iterate V_n(·) = T V_{n−1}(·).
  - Stop when d(V_n, V_{n−1}) < ε.

Value Function Iteration: Deterministic Cake Eating

- Model:

    V(K) = max_{c} u(c) + β V(K − c)

- This can be rewritten as:

    V(K) = max_{K'} u(K − K') + β V(K')

- The iterations are on:

    V_n(K) = max_{K'} u(K − K') + β V_{n−1}(K')

- Example: take u(c) = ln(c).
  - We need a grid for the cake size, {K_1, …, K_N}.
  - We need to store the successive values of V_n: an N × 1 vector.
  - For each K, we search on the grid {K} for the value K' that gives the highest utility flow.
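The contraction bound can be checked numerically. Below is a minimal Python/NumPy sketch of value function iteration for the deterministic cake eating problem (the grid, β = 0.9, and the iteration count are illustrative; u(c) = √c is used instead of ln c so that values stay finite when the whole cake is eaten). It verifies that successive sup-norm distances d(V_n, V_{n−1}) shrink by at least the factor β:

```python
import numpy as np

beta = 0.9
K = np.linspace(0.0, 1.0, 201)           # grid over cake size
u = np.sqrt                              # u(c) = sqrt(c): finite at c = 0, unlike ln(c)

cons = K[:, None] - K[None, :]           # consumption implied by each (K, K') pair
U = np.where(cons >= 0, u(np.maximum(cons, 0.0)), -np.inf)  # infeasible pairs -> -inf

V = np.zeros(len(K))                     # arbitrary starting guess V0 = 0
gaps = []
for _ in range(60):
    V_new = np.max(U + beta * V[None, :], axis=1)   # Vn = T V_{n-1}
    gaps.append(np.max(np.abs(V_new - V)))          # d(Vn, V_{n-1}) in the sup norm
    V = V_new

# contraction property: d(Vn, V_{n-1}) <= beta * d(V_{n-1}, V_{n-2})
ratios = [gaps[t] / gaps[t - 1] for t in range(1, len(gaps)) if gaps[t - 1] > 0]
print(all(r <= beta + 1e-12 for r in ratios))
```

The distance between successive iterates falls geometrically at rate β or faster, which is exactly the convergence guarantee the theorem provides.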
[Figure: V_{n−1}(K) plotted against the grid points K_1, K_2, K_3, K_4]

Computer Code for the Deterministic Cake Eating Problem

clear                      % clear workspace memory
dimIter=30;                % number of iterations
beta=0.9;                  % discount factor
K=0:0.005:1;               % grid over cake size, from 0 to 1
dimK=length(K);            % number of grid points
V=zeros(dimK,dimIter+1);   % initialize matrix for the value function
for iter=1:dimIter         % start iteration loop
  aux=zeros(dimK,dimK)+NaN;
  for ik=1:dimK            % loop over all current sizes of cake
    for ik2=1:(ik-1)       % loop over all future sizes of cake
      aux(ik,ik2)=log(K(ik)-K(ik2))+beta*V(ik2,iter);
    end
  end
  V(:,iter+1)=max(aux')';  % maximum over all future sizes
end
plot(K,V);                 % plot the successive value functions against cake size

Discrete Cake Eating Model

- Model:

    V(K, ε) = max[ u(K, ε), β E_{ε'} V(ρK, ε') ],   ρ ∈ [0, 1]

- Grid for the size of the cake: {K_0, ρK_0, ρ²K_0, …, ρ^N K_0}.
- Markov process for the taste shock ε ∈ {ε_L, ε_H}, with transition matrix

    π = [ π_LL  π_LH
          π_HL  π_HH ]

- We construct V as an N × 2 matrix containing the value function.
- Let i_K denote an index for the cake size, i_K ∈ {1, …, N}, and i_ε an index for the taste shock.
  - For a given i_K and i_ε, we compute:
    - the value of eating the cake now: u(K[i_K], ε[i_ε]);
    - the (discounted) value of waiting: β Σ_{i=1}^{2} π_{i_ε,i} V(ρK[i_K], ε[i]).
  - We then take the maximum of these two values.

Code for the Discrete Cake Eating Problem

%%% Initialisation of parameters %%%
itermax=60;           % number of iterations
dimK=100;             % size of grid for cake size
dimEps=2;             % size of grid for taste shocks
K0=2;                 % initial cake size
ro=0.95;              % shrink factor
beta=0.95;            % discount factor
K=0:1:(dimK-1);
K=K0*ro.^K';          % grid for cake size: K0*[1 ro ro^2 ...]
eps=[.8,1.2];         % taste shocks
pi=[.6 .4;.2 .8];     % transition matrix for taste shocks
V=zeros(dimK,dimEps); % stores the value function;
                      % rows are cake sizes, columns are shocks
auxV=zeros(dimK,dimEps);
%%% End initialisation of parameters %%%

%%% Start of iterations %%%
for iter=1:itermax        % loop over iterations
  for ik=1:dimK-1         % loop over size of cake; rho*K(ik) = K(ik+1) on this grid
    for ieps=1:dimEps     % loop over taste shocks
      Vnow=sqrt(K(ik))*eps(ieps);   % utility of eating the cake now
      Vwait=pi(ieps,1)*V(ik+1,1)+pi(ieps,2)*V(ik+1,2);
      auxV(ik,ieps)=max(Vnow,beta*Vwait);
    end                   % end loop over taste shocks
  end                     % end loop over size of cake
  V=auxV;
end
plot(K,V)                 % graph the value function as a function of cake size

Continuous Cake Eating Problem

- Program of the agent:

    V(W, y) = max_{0 ≤ c ≤ W+y} u(c) + β E_{y'|y} V(W', y')   for all (W, y)     (1)

  with W' = R(W − c + y) and y iid.

- We can rewrite this Bellman equation by defining X = W + y, the total amount of cake available at the beginning of the period:

    V(X) = max_{0 ≤ c ≤ X} u(c) + β E_{y'} V(X')   for all X     (2)

  with X' = R(X − c) + y'.

- The operator T is defined as:

    T(V(X)) = max_{c ∈ [0,X]} u(c) + β E_{y'} V(X')     (3)

Value Function Iteration

- First, we discretize the state variable X: {X^1, …, X^{n_S}}.
- Second, we discretize the choice variable c: {c^1, …, c^{n_c}}.
- Suppose we know V_{n−1}(X^i), i ∈ {1, …, n_S}.
- For any values X^i and c^j on the grids, we evaluate:

    v_ij = u(c^j) + β Σ_{k=1}^{K} π_k V_{n−1}(R(X^i − c^j) + y^k)

  and then set V_n(X^i) = max_j v_ij.
- We stop when |V_n(X^i) − V_{n−1}(X^i)| < ε for all X^i.
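The iteration above can be sketched in Python/NumPy. The grids, β = 0.95, R = 1.02, the two iid income states, and u(c) = √c are all illustrative assumptions; V_{n−1} at off-grid points X' is obtained by linear interpolation with np.interp:

```python
import numpy as np

beta, R = 0.95, 1.02                    # discount factor, gross return (assumed values)
y = np.array([0.8, 1.2])                # iid income states y^k ...
piy = np.array([0.5, 0.5])              # ... and their probabilities pi_k
X = np.linspace(0.1, 4.0, 120)          # grid over the state X
C = np.linspace(0.01, 4.0, 100)         # grid over the choice c
u = np.sqrt                             # illustrative utility

V = np.zeros(len(X))                    # starting guess V0 = 0
for it in range(400):
    V_new = np.empty(len(X))
    for i, Xi in enumerate(X):
        c = C[C <= Xi]                              # feasible choices 0 <= c <= X
        Xp = R * (Xi - c)[:, None] + y[None, :]     # X' = R(X - c) + y'
        EV = np.interp(Xp, X, V) @ piy              # E_{y'} V_{n-1}(X'), interpolated
        V_new[i] = np.max(u(c) + beta * EV)         # v_ij maximized over j
    gap = np.max(np.abs(V_new - V))                 # sup-norm distance d(Vn, V_{n-1})
    V = V_new
    if gap < 1e-6:                                  # stopping rule
        break
```

Note that np.interp clamps points beyond the grid to the boundary values, a crude but common treatment of extrapolation; with β = 0.95 the stopping rule binds after roughly 280 iterations.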
Approximating the Value in the Next Period

- Evaluating

    v_ij = u(c^j) + β Σ_{k=1}^{K} π_k V_{n−1}(R(X^i − c^j) + y^k)

  requires V_{n−1} at points that need not lie on the grid.

[Figure: V_{n−1}(X) plotted against the grid points X^1, X^2, X^3, X^4]

- To calculate V_{n−1}(R(X^i − c^j) + y^k), there are several options:
  - Find the i0 such that X^{i0} is closest to R(X^i − c^j) + y^k and set

      V_{n−1}(R(X^i − c^j) + y^k) ≈ V_{n−1}(X^{i0})

  - Find the i0 such that X^{i0} < R(X^i − c^j) + y^k < X^{i0+1}, then perform a linear interpolation:

      V_{n−1}(R(X^i − c^j) + y^k) ≈ λ V_{n−1}(X^{i0}) + (1 − λ) V_{n−1}(X^{i0+1})

Policy Function Iteration

- An improvement over value function iteration: a faster method for small problems.
- Implementation:
  - Guess c_0(X).
  - Evaluate:

      V_0(X) = u(c_0(X)) + β Σ_{i=L,H} π_i V_0(R(X − c_0(X)) + y_i)

    This requires solving a system of linear equations.
  - Policy improvement step:

      c_1(X) = argmax_c [ u(c) + β Σ_{i=L,H} π_i V_0(R(X − c) + y_i) ]

  - Continue until convergence.

Projection Methods

- Example: continuous cake eating. The Euler equation is:

    u'(c_t) = β E_t u'(c_{t+1})   if c_t < X_t
    c_t = X_t                     at a corner solution

- This can be rewritten as:

    u'(c_t) = max[ u'(X_t), β E_t u'(c_{t+1}) ],   X_{t+1} = R(X_t − c(X_t)) + y_{t+1}

- The solution to this equation is a function c(X_t) satisfying:

    u'(c(X_t)) − max[ u'(X_t), β E_{y'} u'(c(R(X_t − c(X_t)) + y')) ] = 0,   i.e.   F(c(X_t)) = 0

- Goal: find a function ĉ(X) that satisfies the above equation, i.e. find the zero of the functional equation.

Approximating the Policy Function

- Define ĉ(X; Ψ) to be an approximation to the true c(X):

    ĉ(X; Ψ) = Σ_{i=1}^{n} ψ_i p_i(X)

  where {p_i(X)} is a basis of the space of continuous functions.
- Examples:
  - {1, X, X², …}
  - Chebyshev polynomials: p_i(X) = cos(i arccos(X)), X ∈ [−1, 1], i = 0, 1, 2, …, or equivalently via the recurrence

      p_i(X) = 2X p_{i−1}(X) − p_{i−2}(X) for i ≥ 2, with p_0(X) = 1, p_1(X) = X

  - Legendre or Hermite polynomials.
- For instance, the policy function can be approximated by:

    ĉ(X; Ψ) = ψ_0 + ψ_1 X + ψ_2 X²
    ĉ(X; Ψ) = ψ_0 + ψ_1 X + ψ_2 (2X² − 1) + …
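As a sketch, the Chebyshev basis can be built with the recurrence above and checked against the closed form cos(i arccos X); the degree and evaluation points below are arbitrary. For a state X ∈ [a, b] one would first map it into [−1, 1] via 2(X − a)/(b − a) − 1:

```python
import numpy as np

def chebyshev_basis(x, n):
    """First n+1 Chebyshev polynomials p_0..p_n at points x, via the recurrence
    p_i(x) = 2x p_{i-1}(x) - p_{i-2}(x), with p_0(x) = 1 and p_1(x) = x."""
    x = np.asarray(x, dtype=float)
    P = np.empty((n + 1,) + x.shape)
    P[0] = 1.0
    if n >= 1:
        P[1] = x
    for i in range(2, n + 1):
        P[i] = 2 * x * P[i - 1] - P[i - 2]
    return P

x = np.linspace(-1, 1, 7)
P = chebyshev_basis(x, 5)
# closed form on [-1, 1]: p_i(x) = cos(i * arccos(x))
closed = np.array([np.cos(i * np.arccos(x)) for i in range(6)])
print(np.allclose(P, closed))
```

The recurrence form avoids the trigonometric evaluations, which matters when the basis is re-evaluated at every step of the solver.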
Defining a Metric

- We want to bring F(ĉ(X; Ψ)) as "close as possible" to zero. How do we define "close to zero"?
- For any weighting function g(x), the inner product of two integrable functions f_1 and f_2 on a space A is defined as:

    ⟨f_1, f_2⟩ = ∫_A f_1(x) f_2(x) g(x) dx     (4)

- Two functions f_1 and f_2 are said to be orthogonal, conditional on a weighting function g(x), if ⟨f_1, f_2⟩ = 0. The weighting function indicates where the researcher wants the approximation to be good.
- In our problem, we want

    ⟨F(ĉ(X; Ψ)), f(X)⟩ = 0

  where f(X) is a given function. The choice of the f function gives the different projection methods.

Different Projection Methods

- Least squares method:

    min_Ψ ⟨F(ĉ(X; Ψ)), F(ĉ(X; Ψ))⟩

- Collocation method:

    min_Ψ ⟨F(ĉ(X; Ψ)), δ(X − X_i)⟩,   i = 1, …, n

  where δ(X − X_i) is the mass point function at point X_i:

    δ(X) = 1 if X = X_i
    δ(X) = 0 elsewhere

- Galerkin method:

    min_Ψ ⟨F(ĉ(X; Ψ)), p_i(X)⟩,   i = 1, …, n

  where {p_i(X)} is a basis of the function space.
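To make the collocation idea concrete, here is a toy Python/NumPy sketch. The functional equation F(c)(X) = c(X)² − X = 0 (exact solution c(X) = √X) stands in for the Euler residual; the monomial basis, the collocation points, and the Newton solver are all illustrative choices. Imposing F(ĉ)(X_i) = 0 at exactly n points pins down the n coefficients ψ:

```python
import numpy as np

# Toy functional equation: F(c)(X) = c(X)**2 - X = 0, exact solution c(X) = sqrt(X).
# Approximate c by chat(X; psi) = sum_j psi_j X**j and impose
# <F(chat), delta(X - X_i)> = 0, i.e. F(chat)(X_i) = 0 at n collocation points.

n = 5
Xi = np.linspace(0.1, 2.0, n)             # collocation points
B = Xi[:, None] ** np.arange(n)           # basis matrix B[i, j] = p_j(X_i) = X_i**j

psi = np.zeros(n)
psi[0] = 1.0                              # starting guess: chat(X) = 1

for _ in range(50):                       # Newton's method on the n residual equations
    chat = B @ psi
    r = chat ** 2 - Xi                    # residuals F(chat)(X_i)
    J = 2 * chat[:, None] * B             # Jacobian of r with respect to psi
    psi -= np.linalg.solve(J, r)

print(np.max(np.abs((B @ psi) ** 2 - Xi)) < 1e-10)
```

At the collocation points the residual is driven to machine zero; between them it is not, which is where the choice of basis and of points (e.g. Chebyshev nodes) matters.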