Dynamic Discrete Choice Models Overview: Goals of This Lecture


Single Agent Dynamics: Dynamic Discrete Choice Models
Part I: Theory and Solution Methods
Günter J. Hitsch, The University of Chicago Booth School of Business, 2013

Overview: Goals of this lecture

1. Discuss the structure and theory of dynamic discrete choice models
2. Survey numerical solution methods
3. Discuss the goals of statistical inference: reduced vs. structural form of the model
4. Model estimation
   - Nested fixed point (NFP) estimators
   - MPEC approach
   - Heterogeneity and unobserved state variables

Overview

Review: Infinite Horizon, Discounted Markov Decision Processes
The Storable Goods Demand Problem
Dynamic Discrete Choice Models
The Storable Goods Demand Problem (Cont.)
The Durable Goods Adoption Problem
Numerical Solution of a Dynamic Decision Process
Approximation and Interpolation
Numerical Integration
The Durable Goods Adoption Problem (Cont.)
References

Elements of a Markov Decision Process

- Time: t = 0, 1, …
  (infinite horizon)
- A set of states, s ∈ S; S is the state space
- A set of actions, a ∈ A(s)
  - The set of feasible actions may depend on the current state, although this is more general than what we typically need in dynamic discrete choice
- A decision maker chooses some action a ∈ A(s) in period t, given the current state s, and then receives the utility U(s, a)
- The state evolves according to a probability distribution p(·|s, a)
  - If S is discrete, then Pr{s_{t+1} = j | s_t, a_t} = p(j | s_t, a_t) for j ∈ S
  - If S is continuous, then p(·|s_t, a_t) is the conditional density of s_{t+1}
- The decision process is called "Markov" because the utility function and transition probabilities depend only on the current state and action, not on the whole history of the process
- Utilities, U(s, a), and transition probabilities, p(·|s, a), are stationary, i.e., they do not depend on the time period t when an action is taken
  - Note: this does not imply that the time series process {s_t}_{t=0}^∞ is stationary; conversely, stationarity of {s_t}_{t=0}^∞ is not required in this class of models
- Example: A decision maker gets tired of a product as time progresses. Could this be captured by a Markov decision process? Answer: yes
  - U_t(s, a) = −t is not allowed
  - Instead let S = {0, 1, 2, …}, and for any s ∈ S let p(s + 1 | s, a) = 1, i.e., s_{t+1} = s_t + 1. Then define U(s, a) = −s

Actions, decision rules, and policies

- We examine decision problems where the decision maker uses current (and possibly past) information to choose actions in order to maximize some overall objective function
- Decision rules:
  d_t : S → ∪_{s∈S} A(s), with d_t(s) ∈ A(s)
- Decision rules associate a current state with a feasible action
- Policies, strategies, or plans: π = (d_0, d_1, …) = (d_t)_{t=0}^∞
- A policy is a sequence of decision rules: a plan for how to choose actions at each point in time, given the state at that point in time, s_t

Expected total discounted reward

- A given policy π = (d_t)_{t=0}^∞ induces a probability distribution over the sequence of states and actions, (s_t, a_t)_{t=0}^∞
- For a given policy π, the expected total discounted reward (utility) is given by

    v^π(s_0) = E^π[ Σ_{t=0}^∞ β^t U(s_t, d_t(s_t)) | s_0 ]

- Future utilities are discounted using the discount factor 0 ≤ β < 1 (in finite horizon problems we could allow for β = 1)
- We will assume that v^π is well-defined
  - One necessary condition if the state space is discrete: the series inside the expectation converges
  - This is ensured if U is bounded, because 0 ≤ β < 1
  - Note, however, that boundedness is not a necessary condition for convergence
  - If the state space is continuous, certain technical conditions need to hold for the expectation to exist (see Stokey and Lucas 1989)

Optimal policies

- Let Π be the set of all policies.
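For a finite state space and a fixed stationary policy, the expected total discounted reward v^π defined above satisfies the linear system v^π = U_π + β P_π v^π and can be computed exactly. A minimal sketch; the two-state utilities and transition matrix below are made-up illustrations, not from the lecture:

```python
import numpy as np

def policy_value(U_pi, P_pi, beta):
    """Solve v = U_pi + beta * P_pi @ v for a fixed stationary policy.

    U_pi[s]    : per-period utility U(s, d(s)) under the policy
    P_pi[s, j] : transition probability p(j | s, d(s))
    """
    n = len(U_pi)
    return np.linalg.solve(np.eye(n) - beta * P_pi, U_pi)

# Illustrative 2-state example (numbers are made up)
U_pi = np.array([1.0, 0.0])
P_pi = np.array([[0.9, 0.1],
                 [0.5, 0.5]])
v = policy_value(U_pi, P_pi, beta=0.95)
```

Because rewards are discounted geometrically, the infinite sum collapses into this finite linear system, which is why the assumption 0 ≤ β < 1 matters computationally as well as theoretically.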
- π* ∈ Π is optimal if

    v^{π*}(s) ≥ v^π(s) for all s ∈ S

  for all π ∈ Π
- The value of the Markov decision problem is given by

    v*(s) = sup_{π∈Π} v^π(s)

- We call v* the (optimal) value function
- An optimal policy π* is a policy such that

    v^{π*}(s) = v*(s) = max_{π∈Π} v^π(s)

Bellman Equation

- Intuition:
  - Due to the stationarity of the utilities and transition probabilities, and given that there is always an infinite decision horizon, the (optimal) value function should depend only on the current state, not on t
  - Hence, for any current state s, the expected total discounted reward under optimal decision making should satisfy the relationship

    v*(s) = sup_{a∈A(s)} { U(s, a) + β ∫ p(s′|s, a) v*(s′) ds′ }

- The equation above is called a Bellman equation
- Note that the Bellman equation is a functional equation with potential solutions of the form v : S → ℝ
- More formally, let B = { f : S → ℝ : ‖f‖ < ∞ } be the metric space of real-valued, bounded functions on S, where ‖f‖ = sup_{s∈S} |f(s)| is the supremum norm. Define the operator L : B → B,

    Lv(s) ≡ sup_{a∈A(s)} { U(s, a) + β ∫ p(s′|s, a) v(s′) ds′ }

- A solution of the Bellman equation is a fixed point of L, v = Lv

Questions to be addressed

1. Is a solution of the Bellman equation the optimal value function?
2. Is there a solution of the Bellman equation, and how many solutions exist in general?

The main theorem

Theorem. If v is a solution of the Bellman equation, v = Lv, then v = v*. Furthermore, under the model assumptions stated above a (unique) solution of the Bellman equation always exists.

- The proof of this theorem is based on the property that L is a contraction mapping. According to the Banach fixed-point theorem, a contraction mapping (defined on a complete metric space) has a unique fixed point.
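Because L is a contraction, iterating v ← Lv from any starting point converges to its unique fixed point v*. A minimal value-iteration sketch for a finite-state, finite-action problem; the small arrays at the bottom are illustrative placeholders, not from the lecture:

```python
import numpy as np

def value_iteration(U, P, beta, tol=1e-10):
    """Iterate the Bellman operator Lv until it reaches its fixed point v*.

    U[s, a]    : per-period utility U(s, a)
    P[a, s, j] : transition probability p(j | s, a)
    Returns (v_star, d_star), where d_star[s] is a maximizing action.
    """
    n_states, n_actions = U.shape
    v = np.zeros(n_states)
    while True:
        # Q[s, a] = U(s, a) + beta * sum_j p(j | s, a) * v(j)
        Q = U + beta * np.einsum('asj,j->sa', P, v)
        v_new = Q.max(axis=1)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new, Q.argmax(axis=1)
        v = v_new

# Illustrative 2-state, 2-action example (numbers are made up)
U = np.array([[1.0, 0.0],
              [0.0, 2.0]])
P = np.array([[[1.0, 0.0],   # action 0: stay in the current state
               [0.0, 1.0]],
              [[0.0, 1.0],   # action 1: switch states
               [1.0, 0.0]]])
v_star, d_star = value_iteration(U, P, beta=0.9)
```

The returned d_star is exactly the stationary decision rule d*(s) from the theorem below the Bellman equation: the argmax of the right-hand side evaluated at v*.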
- The contraction mapping property crucially depends on the additive separability of the total discounted reward function, and in particular on discounting, β < 1

Optimal policies

- Assume that the supremum in the Bellman equation can be attained, and define the decision rule

    d*(s) ∈ argmax_{a∈A(s)} { U(s, a) + β ∫ p(s′|s, a) v*(s′) ds′ }

- Let π* = (d*, d*, …) be a corresponding policy

Theorem. Let π* be as defined above. Then π* is an optimal policy, such that v^{π*} = v*.

- Note that this theorem shows us how to find an optimal policy
- In particular, note that the theorem shows that if the supremum in the Bellman equation can be attained, then there is a stationary optimal policy

Optimal Policies: Notes

- A sufficient condition for the existence of an optimal policy is that A(s) is finite
  - Satisfied (by definition) in dynamic discrete choice problems
- Decision rules that depend on the past history of states and actions do not improve the total expected reward
  - Hence, basing decisions only on current, not past, information is not restrictive in Markov decision processes
[Figure: Kellogg's Raisin Bran, 20 oz pack size. Top panel: weekly price, roughly $2.00–$4.00, 2008–2012, with a high base price and frequent promotional dips. Bottom panel: log (base 10) units sold, 2008–2012.]

Overview: Sales promotions

- Many CPG products are sold both at regular (base) prices and at promotional prices
- We often observe sales spikes during a promotion
  - Could be due to brand switching or increased consumption
  - Could be due to stockpiling if the good is storable
- Develop a simple model of stockpiling: storable goods demand
- Goal: predict how demand is affected by the distribution of prices at which the storable good can be bought

Model outline

- At the beginning of each period a consumer goes to a store and chooses among K different pack sizes of a storable good
- Pack size k contains n_k units of the product
  (n_k ∈ {1, 2, …})
- No purchase: n_0 = 0
- The choice in period t is a_t ∈ A = {0, 1, …, K}
- i_t is the number of units in the consumer's inventory at the beginning of period t
- After the shopping trip, if a_t = k the consumer has i_t + n_k units of the product at her disposal
- The consumer has a consumption need of one unit in each period
  - She will consume one unit if i_t + n_k ≥ 1, and will not consume otherwise

Evolution of inventory

- The consumer can store at most I units
- Conditional on i_t and a_t = k, inventory evolves as 0 if i_t
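The piecewise inventory-transition formula is cut off in the text above, so the rule below is an assumed completion consistent with the stated setup (consume one unit whenever i_t + n_k ≥ 1, cap storage at I); it is a sketch of our reading, not the slide's exact formula:

```python
def next_inventory(i_t, n_k, I_max):
    """Assumed inventory transition: consume one unit if available, cap at I_max."""
    stock = i_t + n_k
    if stock == 0:
        return 0                      # nothing on hand, nothing consumed
    return min(I_max, stock - 1)      # consume one unit, store the rest up to I_max

# Illustrative path: buy a 4-unit pack once, then buy nothing (I_max = 3)
I_max = 3
path = [0]
for n_k in [4, 0, 0, 0, 0]:
    path.append(next_inventory(path[-1], n_k, I_max))
# path -> [0, 3, 2, 1, 0, 0]
```

Note how the storage cap I_max bites in the first period: of the four units bought, one is consumed and only three can be stored, which is the mechanism that limits how much a consumer can stockpile during a promotion.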