Statistical Decision Theory: Concepts, Methods and Applications
Total pages: 16
File type: PDF, size: 1020 KB
Recommended publications
-
Implications of Rational Inattention
IMPLICATIONS OF RATIONAL INATTENTION
CHRISTOPHER A. SIMS
Abstract. A constraint that actions can depend on observations only through a communication channel with finite Shannon capacity is shown to be able to play a role very similar to that of a signal extraction problem or an adjustment cost in standard control problems. The resulting theory looks enough like familiar dynamic rational expectations theories to suggest that it might be useful and practical, while the implications for policy are different enough to be interesting.
I. Introduction
Keynes's seminal idea was to trace out the equilibrium implications of the hypothesis that markets did not function the way a seamless model of continuously optimizing agents, interacting in continuously clearing markets, would suggest. His formal device, price "stickiness", is still controversial, but those critics of it who fault it for being inconsistent with the assumption of continuously optimizing agents interacting in continuously clearing markets miss the point. This is its appeal, not its weakness. The influential competitors to Keynes's idea are those that provide us with some other description of the nature of the deviations from the seamless model that might account for important aspects of macroeconomic fluctuations. Lucas's 1973 classic "International Evidence…" paper uses the idea that agents may face a signal-extraction problem in distinguishing movements in the aggregate level of prices and wages from movements in the specific prices they encounter in transactions. Much of subsequent rational expectations macroeconomic modeling has relied on the more tractable device of assuming an "information delay", so that some kinds of aggregate data are observable to some agents only with a delay, though without error after the delay. -
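Sims's finite-Shannon-capacity constraint can be made concrete with the simplest textbook example: a binary symmetric channel that flips each transmitted bit with probability ε has capacity C = 1 − H(ε) bits per use. A minimal sketch (the channel and its parameters are illustrative, not part of Sims's model):

```python
from math import log2

def binary_entropy(eps):
    """Binary entropy H(eps) in bits, with H(0) = H(1) = 0 by convention."""
    if eps in (0.0, 1.0):
        return 0.0
    return -eps * log2(eps) - (1 - eps) * log2(1 - eps)

def bsc_capacity(eps):
    """Shannon capacity, in bits per use, of a binary symmetric channel
    that flips each bit with probability eps: C = 1 - H(eps)."""
    return 1.0 - binary_entropy(eps)

# A channel that flips 11% of bits conveys only about half a bit of
# information per use, no matter how cleverly it is coded.
```

An agent whose actions must pass through such a channel simply cannot react to more than C bits of news per period, which is the mechanism the abstract compares to signal extraction and adjustment costs.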
The Physics of Optimal Decision Making: a Formal Analysis of Models of Performance in Two-Alternative Forced-Choice Tasks
Psychological Review, 2006, Vol. 113, No. 4, 700-765. Copyright 2006 by the American Psychological Association. 0033-295X/06/$12.00 DOI: 10.1037/0033-295X.113.4.700
The Physics of Optimal Decision Making: A Formal Analysis of Models of Performance in Two-Alternative Forced-Choice Tasks
Rafal Bogacz, Eric Brown, Jeff Moehlis, Philip Holmes, and Jonathan D. Cohen
Princeton University
In this article, the authors consider optimal decision making in two-alternative forced-choice (TAFC) tasks. They begin by analyzing 6 models of TAFC decision making and show that all but one can be reduced to the drift diffusion model, implementing the statistically optimal algorithm (most accurate for a given speed or fastest for a given accuracy). They prove further that there is always an optimal trade-off between speed and accuracy that maximizes various reward functions, including reward rate (percentage of correct responses per unit time), as well as several other objective functions, including ones weighted for accuracy. They use these findings to address empirical data and make novel predictions about performance under optimality.
Keywords: drift diffusion model, reward rate, optimal performance, speed–accuracy trade-off, perceptual choice
This article concerns optimal strategies for decision making in the two-alternative forced-choice (TAFC) task. We present and compare several decision-making models, briefly discuss their neural implementations, and relate them to one that is optimal. It has been known since Herrnstein's (1961, 1997) work that animals do not achieve optimality under all conditions, and in behavioral economics, humans often fail to choose optimally (e.g., Kahneman & Tversky, 1984; Loewenstein & Thaler, 1989). -
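The drift diffusion model that the authors reduce these models to can be simulated in a few lines: evidence accumulates at a mean drift rate, corrupted by Gaussian noise, until it hits one of two decision thresholds. A minimal sketch (parameter values are illustrative, not drawn from the article):

```python
import random

def simulate_ddm(drift, threshold, noise_sd=1.0, dt=0.001, seed=None):
    """One trial of the drift diffusion model: evidence x accumulates
    at mean rate `drift` with Gaussian noise until it crosses
    +threshold (correct) or -threshold (error)."""
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + rng.gauss(0.0, noise_sd) * dt ** 0.5
        t += dt
    return x > 0, t  # (correct?, decision time in seconds)

# Estimate accuracy and mean decision time over many trials.
trials = [simulate_ddm(drift=1.0, threshold=1.0, seed=i) for i in range(2000)]
accuracy = sum(correct for correct, _ in trials) / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
```

For drift a, threshold z and unit noise, the theoretical accuracy is 1/(1 + e^(−2az)), about 0.88 at a = z = 1, which the simulation recovers up to discretization error; raising the threshold trades speed for accuracy, which is the trade-off the article optimizes.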
Statistical Decision Theory Bayesian and Quasi-Bayesian Estimators
Statistical Decision Theory: Bayesian and Quasi-Bayesian Estimators
Giselle Montamat
Harvard University, Spring 2020
Statistical Decision Theory
Framework to make a decision based on data (e.g., find the "best" estimator under some criteria for what "best" means; decide whether to retain/reward a teacher based on observed teacher value-added estimates); criteria to decide what a good decision (e.g., a good estimator; whether to retain/reward a teacher) is.
Ingredients:
- Data: X
- Statistical decision: a
- Decision function: δ(X)
- State of the world: θ
- Loss function: L(a, θ)
- Statistical model (likelihood): f(X | θ)
- Risk function (aka expected loss): R(δ, θ) = E_{f(X|θ)}[L(δ(X), θ)] = ∫ L(δ(X), θ) f(X | θ) dX
Objective: estimate µ(θ) (could be µ(θ) = θ) using data X via δ(X). (Note: here the decision is to choose an estimator; we'll see another example where the decision is a binary choice.)
Loss function L(a, θ): describes the loss that we incur if we take action a when the true parameter value is θ. Note that estimation (the "decision") will be based on data via δ(X) = a, so the loss is a function of the data and the true parameter, i.e., L(δ(X), θ).
Criterion for what makes a "good" δ(X), for a given θ: the expected loss (aka the risk) has to be small, where the expectation is taken over X given the model f(X | θ) for a given θ. -
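The risk function above lends itself to direct Monte Carlo approximation. A minimal sketch, assuming (purely for illustration) that X is a sample of ten draws from a Normal(θ, 1) model, δ is the sample mean, and L is squared-error loss:

```python
import random

def risk(delta, theta, loss, n_draws=50_000, seed=0):
    """Monte Carlo approximation of the risk
    R(delta, theta) = E_{f(X|theta)}[ L(delta(X), theta) ]
    for the illustrative model X = (X_1, ..., X_10) i.i.d. Normal(theta, 1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_draws):
        x = [rng.gauss(theta, 1.0) for _ in range(10)]
        total += loss(delta(x), theta)
    return total / n_draws

squared_error = lambda a, theta: (a - theta) ** 2  # L(a, theta)
sample_mean = lambda x: sum(x) / len(x)            # delta(X)

r = risk(sample_mean, theta=2.0, loss=squared_error)
```

Under squared-error loss, the sample mean of n = 10 draws with unit variance has risk σ²/n = 0.1 at every θ, which the approximation recovers; comparing `risk` across candidate decision functions δ is exactly the "good estimator" criterion described above.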
Prisoners of Reason Game Theory and Neoliberal Political Economy
Prisoners of Reason: Game Theory and Neoliberal Political Economy
S. M. Amadae
Massachusetts Institute of Technology
Cambridge University Press, 32 Avenue of the Americas, New York, NY 10013-2473, USA. First published 2015. Printed in the United States of America. A catalog record for this publication is available from the British Library.
ISBN 978-1-107-06403-4 (hardback); ISBN 978-1-107-67119-5 (paperback)
Library of Congress subject headings: 1. Game theory – Political aspects. 2. International relations. 3. Neoliberalism. 4. Social choice – Political aspects. 5. Political science – Philosophy. -
The Functional False Discovery Rate with Applications to Genomics
bioRxiv preprint doi: https://doi.org/10.1101/241133; this version posted December 30, 2017. The copyright holder for this preprint (which was not certified by peer review) is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made available under a CC-BY-ND 4.0 International license.
The Functional False Discovery Rate with Applications to Genomics
Xiongzhi Chen, David G. Robinson, and John D. Storey
December 29, 2017
Abstract
The false discovery rate measures the proportion of false discoveries among a set of hypothesis tests called significant. This quantity is typically estimated based on p-values or test statistics. In some scenarios, there is additional information available that may be used to more accurately estimate the false discovery rate. We develop a new framework for formulating and estimating false discovery rates and q-values when an additional piece of information, which we call an "informative variable", is available. For a given test, the informative variable provides information about the prior probability a null hypothesis is true or the power of that particular test. The false discovery rate is then treated as a function of this informative variable. We consider two applications in genomics. Our first is a genetics of gene expression (eQTL) experiment in yeast where every genetic marker and gene expression trait pair are tested for associations. The informative variable in this case is the distance between each genetic marker and gene. Our second application is to detect differentially expressed genes in an RNA-seq study carried out in mice. -
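For contrast with the functional FDR, the classical p-value-only approach can be stated in a few lines: the Benjamini-Hochberg step-up procedure rejects the k smallest p-values, where k is the largest rank whose p-value falls below α·rank/m. A minimal sketch (this is the standard procedure, not the paper's informative-variable method; the p-values are invented for illustration):

```python
def benjamini_hochberg(pvalues, alpha=0.05):
    """Indices of hypotheses rejected by the Benjamini-Hochberg step-up
    procedure, which controls the FDR at level alpha for independent
    p-values."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k = 0  # largest rank whose p-value clears the BH line alpha * rank / m
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= alpha * rank / m:
            k = rank
    return sorted(order[:k])

pvals = [0.020, 0.024, 0.026, 0.90]
rejected = benjamini_hochberg(pvals, alpha=0.05)
```

Note the step-up logic: 0.020 exceeds its own line (α·1/m = 0.0125) yet is still rejected, because the rank-3 p-value 0.026 clears its line (0.0375). The functional FDR framework generalizes this by letting the threshold depend on an informative variable such as marker-gene distance.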
Optimal Trees for Prediction and Prescription Jack William Dunn
Optimal Trees for Prediction and Prescription
by Jack William Dunn
B.E. (Hons), University of Auckland (2014)
Submitted to the Sloan School of Management on May 18, 2018, in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Operations Research at the Massachusetts Institute of Technology, June 2018.
© Massachusetts Institute of Technology 2018. All rights reserved.
Thesis Supervisor: Dimitris Bertsimas, Boeing Professor of Operations Research, Co-director, Operations Research Center. Accepted by: Patrick Jaillet, Dugald C. Jackson Professor, Department of Electrical Engineering and Computer Science, Co-Director, Operations Research Center.
Abstract
For the past 30 years, decision tree methods have been one of the most widely-used approaches in machine learning across industry and academia, due in large part to their interpretability. However, this interpretability comes at a price: the performance of classical decision tree methods is typically not competitive with state-of-the-art methods like random forests and gradient boosted trees. A key limitation of classical decision tree methods is their use of a greedy heuristic for training. The tree is therefore constructed one locally optimal split at a time, and so the final tree as a whole may be far from global optimality. Motivated by the increase in performance of mixed-integer optimization methods over the last 30 years, we formulate the problem of constructing the optimal decision tree using discrete optimization, allowing us to construct the entire decision tree in a single step and hence find the single tree that best minimizes the training error. -
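The greedy heuristic the abstract identifies as the key limitation is easy to state concretely: CART-style training scans candidate thresholds for one split at a time and keeps the locally best one, with no lookahead to later splits. A minimal sketch for a single binary split on one feature using Gini impurity (data and names are illustrative, not taken from the thesis):

```python
def gini(labels):
    """Gini impurity of a list of 0/1 labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_split(xs, ys):
    """Greedy CART-style search: try every threshold midway between
    consecutive feature values and keep the one minimizing the
    size-weighted impurity of the two children."""
    pairs = sorted(zip(xs, ys))
    best_t, best_imp = None, float("inf")
    for i in range(1, len(pairs)):
        if pairs[i - 1][0] == pairs[i][0]:
            continue  # no threshold between equal feature values
        t = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [y for x, y in pairs if x <= t]
        right = [y for x, y in pairs if x > t]
        imp = (len(left) * gini(left) + len(right) * gini(right)) / len(pairs)
        if imp < best_imp:
            best_t, best_imp = t, imp
    return best_t, best_imp

xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [0, 0, 0, 1, 1, 1]
t, imp = best_split(xs, ys)
```

A full greedy tree applies this search recursively to each child node. The thesis's alternative instead chooses all splits jointly via mixed-integer optimization, so a split that looks suboptimal in isolation can still be selected if it enables better splits below it.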
Richard Bradley, Decision Theory with a Human Face
Œconomia: History, Methodology, Philosophy, 9-1 | 2019 (Varia)
Richard Bradley, Decision Theory with a Human Face
Nicolas Gravel
Electronic version: URL: http://journals.openedition.org/oeconomia/5273; DOI: 10.4000/oeconomia.5273; ISSN: 2269-8450. Publisher: Association Œconomia. Printed version: date of publication: 1 March 2019; pages 149-160; ISSN: 2113-5207.
Electronic reference: Nicolas Gravel, "Richard Bradley, Decision Theory with a Human Face", Œconomia [Online], 9-1 | 2019, online since 01 March 2019, accessed 29 December 2020.
Œconomia content is made available under the terms of the Creative Commons Attribution – NonCommercial – NoDerivatives 4.0 International license.
Book review
Richard Bradley, Decision Theory with a Human Face. Cambridge: Cambridge University Press, 2017, 335 pages, ISBN 978-110700321-7
Nicolas Gravel
The very title of this book, borrowed from Richard Jeffrey's "Bayesianism with a human face" (Jeffrey, 1983a), is a clear indication of its content. Just like its spiritual cousin, The Foundations of Causal Decision Theory by James M. Joyce (2000), Decision Theory with a Human Face provides a thoughtful description of the current state of development of the Bolker-Jeffrey (BJ) approach to decision-making. While a full-fledged presentation of the BJ approach is beyond the scope of this review, it is difficult to appraise the content of Decision Theory with a Human Face without some acquaintance with both the basics of the BJ approach to decision-making and its fitting in the large corpus of "conventional" decision theories that have developed in economics, mathematics and psychology since at least the publication of von Neumann and Morgenstern (1947). -
Rational Inattention in Controlled Markov Processes
Rational Inattention in Controlled Markov Processes
Ehsan Shafieepoorfard, Maxim Raginsky and Sean P. Meyn
Abstract— The paper poses a general model for optimal control subject to information constraints, motivated in part by recent work on information-constrained decision-making by economic agents. In the average-cost optimal control framework, the general model introduced in this paper reduces to a variant of the linear-programming representation of the average-cost optimal control problem, subject to an additional mutual information constraint on the randomized stationary policy. The resulting optimization problem is convex and admits a decomposition based on the Bellman error, which is the object of study in approximate dynamic programming. The structural results presented in this paper can be used to obtain performance bounds, as well as algorithms for computation or approximation of optimal policies.
I. INTRODUCTION
In typical applications of stochastic dynamic programming, the controller has access to limited information about … Sims considers a model in which a representative agent decides about his consumption over subsequent periods of time, while his computational ability to reckon his wealth – the state of the dynamic system – is limited. A special case is considered in which income in one period adds uncertainty of wealth in the next period. Other modeling assumptions reduce the model to an LQG control problem. As one justification for introducing the information constraint, Sims remarks [7] that "most people are only vaguely aware of their net worth, are little-influenced in their current behavior by the status of their retirement account, and can be induced to make large changes in savings behavior by minor 'informational' changes, like changes in default options on retirement plans." Quantitatively, the information constraint is stated in terms of an upper bound on the mutual information in the sense of Shannon [8] between the state of the system and the observation available to the agent. -
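The Shannon mutual information appearing in the constraint is a simple functional of the joint distribution of state and observation, computable directly in the discrete case. A minimal sketch (the two joint distributions are illustrative, not from the paper):

```python
from math import log2

def mutual_information(joint):
    """I(S; O) = sum_{s,o} p(s,o) * log2( p(s,o) / (p(s) p(o)) ),
    where `joint` maps (s, o) pairs to probabilities p(s, o)."""
    ps, po = {}, {}
    for (s, o), p in joint.items():
        ps[s] = ps.get(s, 0.0) + p
        po[o] = po.get(o, 0.0) + p
    return sum(p * log2(p / (ps[s] * po[o]))
               for (s, o), p in joint.items() if p > 0)

# A noiseless observation of a fair binary state carries 1 bit,
# while an observation independent of the state carries 0 bits.
noiseless = {(0, 0): 0.5, (1, 1): 0.5}
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
```

The constraint I(S; O) ≤ R interpolates between these extremes: the agent's observation channel may carry at most R bits about the state, and the paper optimizes the policy subject to that budget.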
springer.com/booksellers, Springer News 11/12/2007, Statistics, 49
D. R. Anderson, Colorado State University, Fort Collins, CO, USA
Model Based Inference in the Life Sciences: A Primer on Evidence
The abstract concept of "information" can be quantified and this has led to many important advances in the analysis of data in the empirical sciences. This text focuses on a science philosophy based on "multiple working hypotheses" and statistical models to represent them. The fundamental science question relates to the empirical evidence for hypotheses in this set - a formal strength of evidence. Kullback-Leibler information is the information lost when a model is used to approximate …
J. Franke, W. Härdle, C. M. Hafner
Statistics of Financial Markets: An Introduction
Statistics of Financial Markets offers a vivid yet concise introduction to the growing field of statistical applications in finance. The reader will learn the basic methods to evaluate option contracts, to analyse financial time series, to select portfolios and manage risks making realistic assumptions of the market behaviour. The focus is both on fundamentals of mathematical finance and financial time series analysis and on applications to given problems of financial markets, making the book the ideal basis for lectures, seminars and crash courses on the topic.
U. B. Kjaerulff, Aalborg University, Aalborg, Denmark; A. L. Madsen, HUGIN Expert A/S, Aalborg, Denmark
Bayesian Networks and Influence Diagrams: A Guide to Construction and Analysis
Probabilistic networks, also known as Bayesian networks and influence diagrams, have become one of the most promising technologies in the area of applied artificial intelligence, offering intuitive, efficient, and reliable methods for diagnosis, prediction, decision making, classification, troubleshooting, and data mining under uncertainty. Bayesian Networks and Influence Diagrams: A Guide to Construction and Analysis provides a … -
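The Kullback-Leibler information mentioned in the model-based inference blurb has a one-line discrete form. A minimal sketch (the "true" distribution and the two candidate models are invented for illustration):

```python
from math import log

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) in nats for two discrete
    distributions given as equal-length probability lists; it measures
    the information lost when q is used to approximate p."""
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

truth = [0.5, 0.3, 0.2]       # hypothetical "full reality"
model_a = [0.4, 0.4, 0.2]     # two hypothetical approximating models
model_b = [0.1, 0.1, 0.8]
# Model a loses less information about `truth` than model b, so it is
# the better-supported hypothesis in the strength-of-evidence sense.
```

Ranking candidate models by estimated KL loss is the idea behind the multiple-working-hypotheses approach the blurb describes (in practice via criteria such as AIC).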
Theories of International Relations* Ole R. Holsti
Theories of International Relations*
Ole R. Holsti
Universities and professional associations usually are organized in ways that tend to separate scholars in adjoining disciplines and perhaps even to promote stereotypes of each other and their scholarly endeavors. The seemingly natural areas of scholarly convergence between diplomatic historians and political scientists who focus on international relations have been underexploited, but there are also some signs that this may be changing. These include recent essays suggesting ways in which the two disciplines can contribute to each other; a number of prizewinning dissertations, later turned into books, by political scientists that effectively combine political science theories and historical materials; collaborative efforts among scholars in the two disciplines; interdisciplinary journals such as International Security that provide an outlet for historians and political scientists with common interests; and creation of a new section, "International History and Politics," within the American Political Science Association.1
This essay is an effort to contribute further to an exchange of ideas between the two disciplines by describing some of the theories, approaches, and "models" political scientists have used in their research on international relations during recent decades. A brief essay cannot do justice to the entire range of theoretical approaches that may be found in the current literature, but perhaps those described here, when combined with citations of some representative works, will provide diplomatic historians with a useful, if sketchy, map showing some of the more prominent landmarks in a neighboring discipline.
*The author has greatly benefited from helpful comments on earlier versions of this essay by Peter Feaver, Alexander George, Joseph Grieco, Michael Hogan, Kal Holsti, Bob Keohane, Timothy Lomperis, Roy Melbourne, James Rosenau, and Andrew Scott, and also from reading …
1 K. J. Holsti, The Dividing Discipline: Hegemony and Diversity in International Theory (London, 1985). -
Improving Monetary Policy Models by Christopher A. Sims, Princeton
Improving Monetary Policy Models
by Christopher A. Sims, Princeton University and NBER
CEPS Working Paper No. 128, May 2006
ABSTRACT. If macroeconomic models are to be useful in policy-making, where uncertainty is pervasive, the models must be treated as probability models, whether formally or informally. Use of explicit probability models allows us to learn systematically from past mistakes, to integrate model-based uncertainty with uncertain subjective judgment, and to bind data-based forecasting together with theory-based projection of policy effects. Yet in the last few decades policy models at central banks have steadily shed any claims to being believable probability models of the data to which they are fit. Here we describe the current state of policy modeling, suggest some reasons why we have reached this state, and assess some promising directions for future progress.
I. WHY DO WE NEED PROBABILITY MODELS?
Fifty years ago most economists thought that Tinbergen's original approach to macro-modeling, which consisted of fitting many equations by single-equation OLS …
Date: May 26, 2006. © 2006 by Christopher A. Sims. This material may be reproduced for educational and research purposes so long as the copies are not sold, even to recover costs, the document is not altered, and this copyright notice is included in the copies. Research for this paper was supported by NSF grant SES 0350686 and by Princeton's Center for Economic Policy Studies. This paper was presented at a December 2-3, 2005 conference at the Board of Governors of the Federal Reserve and may appear in a conference volume or journal special issue. -
Introduction to Decision Theory: Econ 260 Spring, 2016
Introduction to Decision Theory: Econ 260
Spring, 2016
1/7. Instructor details: Sumantra Sen. Email: [email protected]. OH: TBA. TA: Ross Askanazi. OH of TA: TBA.
2/7. Website: courseweb.library.upenn.edu (Canvas course site)
3/7. Overview: This course will provide an introduction to decision theory. The goal of the course is to make the student think rigorously about decision-making in contexts of objective and subjective uncertainty. The heart of the class is delving deeply into the axiomatic tradition of decision theory in economics, covering the classic theorems of de Finetti, von Neumann & Morgenstern, and Savage, and reevaluating them in light of recent research in behavioral economics and neuroeconomics. Time permitting, we will study other extensions (dynamic models and variants of other static models) as well, with the discussion anchored by comparison and reference to these classic models. There is no required text for the class, as there is no suitable one for undergraduates that covers the material we plan to do. We will rely on lecture notes and references (see below).
4/7. Policies of the Economics Department, including course prerequisites and academic integrity issues, may be found at: http://economics.sas.upenn.edu/undergraduate-program/course-information/guidelines/policies
5/7. Grades: There will be two non-cumulative in-class midterms (20% each) during the course and one cumulative final (40%). The midterms will be on Feb 11 (graded and returned by Feb 18) and April 5th. The final is on May 6th. If an exam is missed, documentable evidence of a valid reason must be provided within a reasonable period of time, and the instructor will have the choice of pro-rating your grade over the missed exam rather than delivering a make-up.