Estimating Discrete Choice Dynamic Programming Models
Katsumi Shimotsu (Department of Economics, Hitotsubashi University)
Ken Yamada (School of Economics, Singapore Management University)
Japanese Economic Association Spring Meeting, Tutorial Session (May 2011)

Introduction to Part II

• The second part focuses on the econometric implementation of discrete-choice dynamic-programming models.
• A simple machine replacement model is estimated using the nested fixed point algorithm (Rust, 1987).
• For more applications, see Aguirregabiria and Mira (2010) and Keane, Todd, and Wolpin (2010).
• For an audience with different backgrounds, I will briefly review
  1. discrete choice models, and
  2. numerical methods for (i) maximum likelihood estimation and (ii) dynamic programming.

Outline

Discrete Choice Models
  The Random Utility Model
  Maximum Likelihood Estimation
Dynamic Programming Models
  Machine Replacement
  Numerical Dynamic Programming
Discrete-Choice Dynamic-Programming Models
  The Nested Fixed Point Algorithm
  The Nested Pseudo Likelihood Algorithm
References

The Random Utility Model

• Consider a static problem in which the agent chooses among $J$ alternatives, such as transportation modes or brands.
• Assume that the choice-specific utility can be expressed as
  $V_{ij} = u_{ij} + \varepsilon_{ij}$, $\quad i = 1,\ldots,N$, $\; j = 1,\ldots,J$,
  where $u_{ij}$ is a deterministic component and $\varepsilon_{ij}$ is a stochastic component.
• A simple example is $u_{ij} = x_{ij}'\theta$, where $x_{ij}$ is a vector of observed characteristics such as price and income.
• Let $A = \{1,\ldots,J\}$ denote the choice set. The choice probability is
  $P(a) = \int 1\big(a = \arg\max_{j \in A}(u_{ij} + \varepsilon_{ij})\big)\, f(\varepsilon_i)\, d\varepsilon_i$,
  where $f(\varepsilon_i)$ is the joint density of $\varepsilon_i = (\varepsilon_{i1},\ldots,\varepsilon_{iJ})'$.

The Conditional Logit Model

• Assume that each $\varepsilon_j$ is independently and identically distributed extreme value, with density $f(\varepsilon_j) = e^{-\varepsilon_j}\exp(-e^{-\varepsilon_j})$. The choice probability then follows the logit model (McFadden, 1981):
  $p_{ij} = P(a = j) = \dfrac{\exp(u_{ij})}{\sum_{j=1}^{J}\exp(u_{ij})} = \dfrac{\exp(u_{ij} - u_{i1})}{1 + \sum_{j=2}^{J}\exp(u_{ij} - u_{i1})}$.
• Note that this is a system of $J - 1$ equations. For $J = 2$,
  $p_{i2} = \dfrac{\exp(u_{i2} - u_{i1})}{1 + \exp(u_{i2} - u_{i1})} = \Lambda(u_{i2} - u_{i1})$.
  There exists a mapping between the utility differences and the choice probabilities:
  $u_{i2} - u_{i1} = \Lambda^{-1}(p_{i2})$, or equivalently $u_{i2} - u_{i1} = \ln\dfrac{p_{i2}}{1 - p_{i2}}$.
  See Hotz and Miller (1993) for details on invertibility.

Maximum Likelihood Estimation

• The choice probability is
  $P(a = j) = \dfrac{\exp(u_{ij})}{\sum_{j=1}^{J}\exp(u_{ij})}$.
• Suppose $V_{ij} = x_{ij}'\theta + \varepsilon_{ij}$. The log-likelihood function is
  $l(\theta) = \sum_{i=1}^{N} l_i(\theta) = \sum_{i=1}^{N}\sum_{j=1}^{J} 1(a_i = j)\ln P(a_i = j \mid x_{ij})$.
• If the model is correctly specified,
  $\hat\theta \overset{d}{\to} N\big(\theta, \{E[-\partial^2 l/\partial\theta\,\partial\theta']\}^{-1}\big)$.

[Two figures on numerical maximization, from Train (2009), are omitted here.]

Numerical Methods for MLE

• The Newton-Raphson procedure uses the updating formula
  $\theta_{k+1} = \theta_k + (-H_k)^{-1} g_k$,
  where $g_k = \partial l/\partial\theta\,\big|_{\theta_k}$ and $H_k = \partial^2 l/\partial\theta\,\partial\theta'\,\big|_{\theta_k}$. A sketch of this update follows this list.
• Quasi-Newton methods, e.g., csminwel by Christopher Sims:
  [fval,theta,g,H] = csminwel('fun',theta0,H0,g,crit,nit)
• Non-gradient-based algorithms (simplex methods), e.g.:
  [theta,fval] = fminsearch('fun',theta0,options)
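To make the Newton-Raphson update concrete, here is a minimal MATLAB sketch for the binary logit case ($J = 2$) with $u_{i2} - u_{i1} = x_i'\theta$. The simulated data and all parameter values are illustrative assumptions, not objects from the tutorial.

    % Newton-Raphson MLE for the binary logit model (a sketch).
    % Hypothetical data: N observations, K covariates, choices a in {0,1}.
    rng(0);
    N = 500; K = 2;
    X = [ones(N,1) randn(N,K-1)];        % covariates (first column: constant)
    theta_true = [0.5; -1.0];            % assumed "true" parameter
    a = double(rand(N,1) < 1./(1 + exp(-X*theta_true)));  % simulated choices

    theta = zeros(K,1);                  % starting value theta_0
    crit = 1e-8;
    for k = 1:100
        p = 1 ./ (1 + exp(-X*theta));    % p_i2 = Lambda(x_i'theta)
        g = X' * (a - p);                % gradient g_k
        H = -X' * (X .* (p .* (1-p)));   % Hessian H_k (negative definite)
        step = (-H) \ g;                 % Newton step (-H_k)^{-1} g_k
        theta = theta + step;
        if norm(step) < crit, break; end
    end
    se = sqrt(diag(inv(-H)));            % standard errors from inv(-H)
    disp([theta se]);

The standard errors in the last line correspond to the asymptotic variance formula above, with the expectation replaced by its sample analogue.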
Extension to Dynamic Models

• We may be concerned with dynamics in some applications, such as a firm's adjustment of labor and capital, or educational and occupational choices.
• The value function $V$ is the unique solution to the Bellman equation
  $V(x_t,\varepsilon_t) = \max_{j \in A}\{V_{jt} \equiv V(x_t,\varepsilon_t,a_t = j)\}$,
  where the choice-specific value function (including $\varepsilon$) is
  $V(x_t,\varepsilon_t,a_t) = u(x_t,a_t) + \varepsilon_t(a_t) + \beta E[V(x_{t+1},\varepsilon_{t+1}) \mid x_t,\varepsilon_t,a_t]$
  $\quad = \{u(x_t,a_t) + \beta E[V(x_{t+1},\varepsilon_{t+1}) \mid x_t,\varepsilon_t,a_t]\} + \varepsilon_t(a_t)$
  $\quad = v(x_t,a_t) + \varepsilon_t(a_t)$
  for $\beta \in (0,1)$. The model considered here is a natural extension of the conditional logit model (or the random utility model) to a dynamic setting.

Plant Investment Decisions

• Here we consider the plant investment decision problem set out by Kasahara at UBC. The tradeoff is that investment (machine replacement), $a \in \{0,1\}$,
  1. requires large fixed costs now, but
  2. increases profits and lowers replacement costs in the future.
• The profit function is
  $u(z_t, a_{t-1}, a_t) = R(z_t, a_t) - FC(a_t, a_{t-1})$,
  where $R$ is revenue net of variable input costs, $z_t$ is the plant's productivity, and $FC$ is fixed costs.
• Assume that $z_t = \rho z_{t-1} + \eta_t$, where $\eta_t \sim N(0, \sigma_\eta^2)$. Specify the profit function as
  $u(z_t, a_{t-1}, a_t) = u(z_{t-1}, a_{t-1}, a_t) = \exp(\theta_0 + \theta_1 z_{t-1} + \theta_2 a_t) - [\theta_3 a_t + \theta_4(1 - a_{t-1})a_t]$.

Transition Probability

• Let $Z = \{z_1,\ldots,z_g,\ldots,z_G\}$.
• Denote the transition probability by
  $q_{gh} = q(z_t = z_h \mid z_{t-1} = z_g) = \Pr(z_t \in [\bar z_{h-1}, \bar z_h] \mid z_{t-1} \in [\bar z_{g-1}, \bar z_g])$.
  By the Tauchen method, this can be approximated by
  $q_{gh} = \begin{cases} \Phi((\bar z_1 - \rho z_g)/\sigma_\eta) & \text{if } h = 1,\\ \Phi((\bar z_h - \rho z_g)/\sigma_\eta) - \Phi((\bar z_{h-1} - \rho z_g)/\sigma_\eta) & \text{if } 1 < h < G,\\ 1 - \Phi((\bar z_{G-1} - \rho z_g)/\sigma_\eta) & \text{if } h = G, \end{cases}$
  where $\bar z_g = (z_g + z_{g+1})/2$ is the midpoint between grid points and $\Phi$ is the standard normal cdf. A sketch of this construction follows.
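The following is a minimal MATLAB sketch of the Tauchen approximation above. The parameter values, the grid width of plus or minus m unconditional standard deviations, and the grid size are illustrative assumptions.

    % Tauchen discretization of z_t = rho*z_{t-1} + eta_t (a sketch).
    rho = 0.8; sigma_eta = 0.2;             % illustrative parameter values
    G = 11; m = 3;                          % grid size; span of +/- m std devs
    Phi = @(x) 0.5*erfc(-x/sqrt(2));        % standard normal cdf
    sigma_z = sigma_eta / sqrt(1 - rho^2);  % unconditional std dev of z
    z = linspace(-m*sigma_z, m*sigma_z, G)';  % grid z_1 < ... < z_G
    zbar = (z(1:G-1) + z(2:G)) / 2;         % bin midpoints zbar_g

    Q = zeros(G,G);                         % Q(g,h) = q_gh
    for g = 1:G
        cdf = Phi((zbar - rho*z(g)) / sigma_eta);
        Q(g,1)     = cdf(1);                % h = 1
        Q(g,2:G-1) = diff(cdf)';            % 1 < h < G
        Q(g,G)     = 1 - cdf(G-1);          % h = G
    end
    assert(max(abs(sum(Q,2) - 1)) < 1e-12); % each row sums to one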
The Choice-Specific Value Function

• Define the integrated value function as
  $\bar V(z_{t-1}, a_{t-1}) = \int V(z_{t-1}, a_{t-1}, \varepsilon_t)\, dF(\varepsilon_t)$.
• Assume that each $\varepsilon_t(a_t)$ is iid extreme value. Then
  $\bar V(z_{t-1}, a_{t-1}) = \gamma + \ln\big(\sum_{j=1}^{J} \exp(v(z_{t-1}, a_{t-1}, a_t = j))\big)$,
  where $\gamma$ is the Euler constant.
• The choice-specific value function is
  $v(z_{t-1}, a_{t-1}, a_t) = u(z_{t-1}, a_{t-1}, a_t) + \beta \sum_{z_t \in Z} q(z_t \mid z_{t-1})\, \bar V(z_t, a_t)$.

Infinite Horizon Problem

• In the infinite-horizon problem with a finite number of states, the Bellman equation is a system of nonlinear equations:
  $V_g = \max_{a \in A}\big[u(x_g, a) + \beta \sum_{h=1}^{G} q_{gh}(a) V_h\big]$,
  where $a \in \{1,\ldots,J\}$ is a choice variable, $x \in \{x_1,\ldots,x_G\}$ is a state variable, and $q_{gh}(a)$ is the transition probability from state $g$ to state $h$.
• Let $a = (a_1,\ldots,a_G)'$ denote the policy function and $u^a = (u(x_1,a_1),\ldots,u(x_G,a_G))'$ the return. Then, in vector notation,
  $V^a = u^a + \beta Q^a V^a$,
  which leads to the solution
  $V^a = (I - \beta Q^a)^{-1} u^a$,
  where the $(g,h)$ element of $Q^a$ is $q_{gh}(a_g)$. For details, see Judd (1998) and Adda and Cooper (2003).

Value Iteration Algorithm

1. Choose a grid $X = \{x_1,\ldots,x_G\}$, where $x_g < x_h$ for $g < h$, and specify $u(x,a)$ and $q_{gh}(a)$.
2. Make an initial guess $V^0$ and choose a stopping criterion $c > 0$.
3. For $g = 1,\ldots,G$, compute
   $V_g^{k+1} = \max_{a \in A}\big\{u(x_g, a) + \beta \sum_{h=1}^{G} q_{gh}(a) V_h^k\big\}$.
4. If $\|V^{k+1} - V^k\| < c$, stop; otherwise go to step 3.

Policy Iteration Algorithm

1. Choose a grid $X = \{x_1,\ldots,x_G\}$, where $x_g < x_h$ for $g < h$, and specify $u(x,a)$ and $q_{gh}(a)$.
2. Make an initial guess $V^0$ and choose a stopping criterion $c > 0$.
3. Compute $a^{k+1} = (a_1^{k+1},\ldots,a_G^{k+1})'$, where
   $a_g^{k+1} = \arg\max_{a \in A}\big\{u(x_g, a) + \beta \sum_{h=1}^{G} q_{gh}(a) V_h^k\big\}$.
4. Compute $V^{k+1} = (I - \beta Q^{a^{k+1}})^{-1} u^{a^{k+1}}$, using the matrix formula above.
5. If $a^{k+1} = a^k$, stop; otherwise go to step 3.

MATLAB sketches of both algorithms follow.
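First, a minimal MATLAB sketch of the value iteration algorithm for a generic finite problem. The payoff matrix u and transition array Q are random placeholders; in the machine replacement model they would be built from the profit function and the Tauchen matrix above.

    % Value iteration (a sketch): V_g = max_a { u(x_g,a) + beta*sum_h q_gh(a)*V_h }.
    rng(0);
    G = 11; J = 2; beta = 0.95; c = 1e-10;
    u = randn(G,J);                        % placeholder payoffs u(x_g, a=j)
    Q = zeros(G,G,J);                      % placeholder transitions q_gh(a=j)
    for j = 1:J
        M = rand(G,G); Q(:,:,j) = M ./ sum(M,2);   % each row sums to one
    end

    V = zeros(G,1);                        % initial guess V^0
    for k = 1:100000
        W = zeros(G,J);
        for j = 1:J
            W(:,j) = u(:,j) + beta * Q(:,:,j) * V;  % choice-specific values
        end
        [Vnew, policy] = max(W, [], 2);    % maximize over choices
        % In the logit case, replace the max by the log-sum:
        % Vnew = euler_gamma + log(sum(exp(W), 2)).
        if max(abs(Vnew - V)) < c, V = Vnew; break; end
        V = Vnew;
    end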
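Policy iteration can be sketched the same way, with the policy evaluation step solving the linear system $V^a = (I - \beta Q^a)^{-1} u^a$ from above. The setup (u, Q) is again a random placeholder.

    % Policy iteration (a sketch).
    rng(0);
    G = 11; J = 2; beta = 0.95;
    u = randn(G,J);
    Q = zeros(G,G,J);
    for j = 1:J
        M = rand(G,G); Q(:,:,j) = M ./ sum(M,2);
    end

    policy = ones(G,1);                    % initial guess for the policy
    for k = 1:1000
        % Policy evaluation: V = (I - beta*Q^a) \ u^a.
        ua = u(sub2ind([G J], (1:G)', policy));
        Qa = zeros(G,G);
        for g = 1:G, Qa(g,:) = Q(g,:,policy(g)); end
        V = (eye(G) - beta*Qa) \ ua;
        % Policy improvement: one maximization step of the Bellman equation.
        W = zeros(G,J);
        for j = 1:J
            W(:,j) = u(:,j) + beta * Q(:,:,j) * V;
        end
        [~, newpolicy] = max(W, [], 2);
        if isequal(newpolicy, policy), break; end   % policy is stable: done
        policy = newpolicy;
    end

Policy iteration typically converges in far fewer iterations than value iteration, at the cost of solving a G-by-G linear system at each step.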