2011Philstat.pdf

Total pages: 16

File type: PDF, size: 1020 KB

CONTENTS

General Preface (vii) – Dov Gabbay, Paul Thagard, and John Woods
Preface (ix) – Prasanta S. Bandyopadhyay and Malcolm R. Forster
List of Contributors (xiii)

Introduction
  Philosophy of Statistics: An Introduction (1) – Prasanta S. Bandyopadhyay and Malcolm R. Forster

Part I. Probability & Statistics
  Elementary Probability and Statistics: A Primer (53) – Prasanta S. Bandyopadhyay and Steve Cherry

Part II. Philosophical Controversies about Conditional Probability
  Conditional Probability (99) – Alan Hájek
  The Varieties of Conditional Probability (137) – Kenny Easwaran

Part III. Four Paradigms of Statistics
  Classical Statistics Paradigm
    Error Statistics (153) – Deborah G. Mayo and Aris Spanos
    Significance Testing (199) – Michael Dickson and Davis Baird
  Bayesian Paradigm
    The Bayesian Decision-Theoretic Approach to Statistics (233) – Paul Weirich
    Modern Bayesian Inference: Foundations and Objective Methods (263) – José M. Bernardo
    Evidential Probability and Objective Bayesian Epistemology (307) – Gregory Wheeler and Jon Williamson
    Confirmation Theory (333) – James Hawthorne
    Challenges to Bayesian Confirmation Theory (391) – John D. Norton
    Bayesianism as a Pure Logic of Inference (441) – Colin Howson
    Bayesian Inductive Logic, Verisimilitude, and Statistics (473) – Roberto Festa
  Likelihood Paradigm
    Likelihood and its Evidential Framework (493) – Jeffrey D. Blume
    Evidence, Evidence Functions, and Error Probabilities (513) – Mark L. Taper and Subhash R. Lele
  Akaikean Paradigm
    AIC Scores as Evidence — a Bayesian Interpretation (535) – Malcolm Forster and Elliott Sober

Part IV. The Likelihood Principle
  The Likelihood Principle (553) – Jason Grossman

Part V. Recent Advances in Model Selection
  AIC, BIC and Recent Advances in Model Selection (583) – Arijit Chakrabarti and Jayanta K. Ghosh
  Posterior Model Probabilities (607) – A. Philip Dawid

Part VI. Attempts to Understand Different Aspects of "Randomness"
  Defining Randomness (633) – Deborah Bennett
  Mathematical Foundations of Randomness (641) – Abhijit Dasgupta

Part VII. Probabilistic and Statistical Paradoxes
  Paradoxes of Probability (713) – Susan Vineberg
  Statistical Paradoxes: Take It to The Limit (737) – C. Andy Tsao

Part VIII. Statistics and Inductive Inference
  Statistics as Inductive Inference (751) – Jan-Willem Romeijn

Part IX. Various Issues about Causal Inference
  Common Cause in Causal Inference (777) – Peter Spirtes
  The Logic and Philosophy of Causal Inference: A Statistical Perspective (813) – Sander Greenland

Part X. Some Philosophical Issues Concerning Statistical Learning Theory
  Statistical Learning Theory as a Framework for the Philosophy of Induction (833) – Gilbert Harman and Sanjeev Kulkarni
  Testability and Statistical Learning Theory (849) – Daniel Steel

Part XI. Different Approaches to Simplicity Related to Inference and Truth
  Luckiness and Regret in Minimum Description Length Inference (865) – Steven de Rooij and Peter D. Grünwald
  MML, Hybrid Bayesian Network Graphical Models, Statistical Consistency, Invariance and Uniqueness (901) – David L. Dowe
  Simplicity, Truth and Probability (983) – Kevin T. Kelly

Part XII. Special Problems in Statistics/Computer Science
  Normal Approximations (1027) – Robert J. Boik
  Stein's Phenomenon (1073) – Richard Charnigo and Cidambi Srinivasan
  Data, Data, Everywhere: Statistical Issues in Data Mining (1099) – Choh Man Teng

Part XIII. An Application of Statistics to Climate Change
  An Application of Statistics in Climate Change: Detection of Nonlinear Changes in a Streamflow Timing Measure in the Columbia and Missouri Headwaters (1121) – Mark C. Greenwood, Joel Harper and Johnnie Moore

Part XIV. Historical Approaches to Probability/Statistics
  The Subjective and the Objective (1149) – Sandy L. Zabell
  Probability in Ancient India (1175) – C. K. Raju

Index (1197)

MODERN BAYESIAN INFERENCE: FOUNDATIONS AND OBJECTIVE METHODS
José M. Bernardo

The field of statistics includes two major paradigms: frequentist and Bayesian. Bayesian methods provide a complete paradigm for both statistical inference and decision making under uncertainty. Bayesian methods may be derived from an axiomatic system and provide a coherent methodology which makes it possible to incorporate relevant initial information, and which solves many of the difficulties which frequentist methods are known to face. If no prior information is to be assumed, a situation often met in scientific reporting and public decision making, a formal initial prior function must be mathematically derived from the assumed model. This leads to objective Bayesian methods, objective in the precise sense that their results, like frequentist results, only depend on the assumed model and the data obtained. The Bayesian paradigm is based on an interpretation of probability as a rational conditional measure of uncertainty, which closely matches the sense of the word 'probability' in ordinary language. Statistical inference about a quantity of interest is described as the modification of the uncertainty about its value in the light of evidence, and Bayes' theorem specifies how this modification should precisely be made.

1 INTRODUCTION

Scientific experimental or observational results generally consist of (possibly many) sets of data of the general form D = {x_1, ..., x_n}, where the x_i's are somewhat "homogeneous" (possibly multidimensional) observations. Statistical methods are then typically used to derive conclusions on both the nature of the process which has produced those observations, and on the expected behaviour at future instances of the same process.
A central element of any statistical analysis is the specification of a probability model which is assumed to describe the mechanism which has generated the observed data D as a function of a (possibly multidimensional) parameter (vector) ω ∈ Ω, sometimes referred to as the state of nature, about whose value only limited information (if any) is available. All derived statistical conclusions are obviously conditional on the assumed probability model.

Unlike most other branches of mathematics, frequentist methods of statistical inference suffer from the lack of an axiomatic basis; as a consequence, their proposed desiderata are often mutually incompatible, and the analysis of the same data may well lead to incompatible results when different, apparently intuitive procedures are tried; see Lindley [1972] and Jaynes [1976] for many instructive examples. In marked contrast, the Bayesian approach to statistical inference is firmly based on axiomatic foundations which provide a unifying logical structure, and guarantee the mutual consistency of the methods proposed. Bayesian methods constitute a complete paradigm to statistical inference, a scientific revolution in Kuhn's sense.

Bayesian statistics only require the mathematics of probability theory and the interpretation of probability which most closely corresponds to the standard use of this word in everyday language: it is no accident that some of the more important seminal books on Bayesian statistics, such as the works of de Laplace [1812], Jeffreys [1939] or de Finetti [1970], are actually entitled "Probability Theory". The practical consequences of adopting the Bayesian paradigm are far reaching.

[Handbook of the Philosophy of Science. Volume 7: Philosophy of Statistics. Volume editors: Prasanta S. Bandyopadhyay and Malcolm R. Forster. General editors: Dov M. Gabbay, Paul Thagard and John Woods. © 2011 Elsevier B.V. All rights reserved.]
Indeed, Bayesian methods (i) reduce statistical inference to problems in probability theory, thereby minimizing the need for completely new concepts, and (ii) serve to discriminate among conventional, typically frequentist statistical techniques, by either providing a logical justification to some (and making explicit the conditions under which they are valid), or proving the logical inconsistency of others.

The main result from these foundations is the mathematical need to describe by means of probability distributions all uncertainties present in the problem. In particular, unknown parameters in probability models must have a joint probability distribution which describes the available information about their values; this is often regarded as the characteristic element of a Bayesian approach. Notice that (in sharp contrast to conventional statistics) parameters are treated as random variables within the Bayesian paradigm. This is not a description of their variability (parameters are typically fixed unknown quantities) but a description of the uncertainty about their true values.

A most important particular case arises when either no relevant prior information is readily available, or that information is subjective and an "objective" analysis is desired, one that is exclusively based on accepted model assumptions and well-documented public prior information. This is addressed by reference analysis which uses information-theoretic concepts to derive formal reference prior functions which, when used in Bayes' theorem, lead to posterior distributions encapsulating inferential conclusions on the quantities of interest solely based on the assumed model and the observed data.

In this article it is assumed that probability distributions may be described through their probability density functions, and no distinction is made between a random quantity and the particular values that it may take.
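Reference analysis is developed formally later in the chapter; as a minimal numerical sketch of the idea, the Bernoulli model has reference prior equal to the Jeffreys prior Beta(1/2, 1/2), so the posterior after r successes in n trials is the conjugate update Beta(r + 1/2, n − r + 1/2). The helper below is our own illustration, not code from the chapter:

```python
from math import sqrt

def reference_posterior_bernoulli(successes, trials):
    """Conjugate update for a Bernoulli parameter under the
    Jeffreys/reference prior Beta(1/2, 1/2): the posterior after
    r successes in n trials is Beta(r + 1/2, n - r + 1/2)."""
    a = 0.5 + successes
    b = 0.5 + (trials - successes)
    mean = a / (a + b)                               # posterior mean
    sd = sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))  # posterior std. dev.
    return a, b, mean, sd

# 7 successes in 10 trials -> Beta(7.5, 3.5) posterior
a, b, mean, sd = reference_posterior_bernoulli(7, 10)
```

For this one-parameter model the reference posterior coincides with the Jeffreys posterior; in multiparameter problems the two can differ, which is part of what the chapter goes on to discuss.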
Bold italic roman fonts are used for observable random vectors (typically data) and bold italic Greek fonts are used for unobservable random vectors (typically parameters); lower case is used for variables and calligraphic upper case for their dominion sets. Moreover, the standard mathematical convention
Recommended publications
  • The Interpretation of Probability: Still an Open Issue?
    Philosophies (journal article). The Interpretation of Probability: Still an Open Issue? Maria Carla Galavotti, Department of Philosophy and Communication, University of Bologna, Via Zamboni 38, 40126 Bologna, Italy; [email protected]. Received: 19 July 2017; Accepted: 19 August 2017; Published: 29 August 2017.

    Abstract: Probability as understood today, namely as a quantitative notion expressible by means of a function ranging in the interval between 0 and 1, took shape in the mid-17th century, and presents both a mathematical and a philosophical aspect. Of these two sides, the second is by far the more controversial, and fuels a heated debate, still ongoing. After a short historical sketch of the birth and developments of probability, its major interpretations are outlined, by referring to the work of their most prominent representatives. The final section addresses the question of whether any of such interpretations can presently be considered predominant, which is answered in the negative.

    Keywords: probability; classical theory; frequentism; logicism; subjectivism; propensity

    1. A Long Story Made Short

    Probability, taken as a quantitative notion whose value ranges in the interval between 0 and 1, emerged around the middle of the 17th century thanks to the work of two leading French mathematicians: Blaise Pascal and Pierre Fermat. According to a well-known anecdote: "a problem about games of chance proposed to an austere Jansenist by a man of the world was the origin of the calculus of probabilities". The 'man of the world' was the French gentleman Chevalier de Méré, a conspicuous figure at the court of Louis XIV, who asked Pascal—the 'austere Jansenist'—the solution to some questions regarding gambling, such as how many dice tosses are needed to have a fair chance to obtain a double-six, or how the players should divide the stakes if a game is interrupted.
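    De Méré's first question has a clean closed-form answer: the probability of at least one double-six in n throws of two dice is 1 − (35/36)^n, which first exceeds 1/2 at n = 25. A short script confirms this (our own illustration; the function name is not from the article):

```python
def tosses_for_fair_chance(p_single=1/36, target=0.5):
    """Smallest n such that P(at least one success in n independent
    trials) = 1 - (1 - p_single)^n reaches the target probability."""
    n, miss = 0, 1.0
    while 1 - miss < target:
        n += 1
        miss *= 1 - p_single  # probability of no success so far
    return n

n = tosses_for_fair_chance()  # -> 25 throws for a fair chance of a double-six
```

    At n = 24 the probability is about 0.491, so de Méré's intuition that 24 throws should suffice falls just short.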
  • Statistical Science Meets Philosophy of Science Part 2: Shallow Versus Deep Explorations
    RMM Vol. 3, 2012, 71–107. Special Topic: Statistical Science and Philosophy of Science, edited by Deborah G. Mayo, Aris Spanos and Kent W. Staley. http://www.rmm-journal.de/

    Deborah Mayo, Statistical Science Meets Philosophy of Science Part 2: Shallow versus Deep Explorations

    Abstract: Inability to clearly defend against the criticisms of frequentist methods has turned many a frequentist away from venturing into foundational battlegrounds. Conceding the distorted perspectives drawn from overly literal and radical expositions of what Fisher, Neyman, and Pearson 'really thought', some deny they matter to current practice. The goal of this paper is not merely to call attention to the howlers that pass as legitimate criticisms of frequentist error statistics, but also to sketch the main lines of an alternative statistical philosophy within which to better articulate the roles and value of frequentist tools.

    1. Comedy Hour at the Bayesian Retreat

    Overheard at the comedy hour at the Bayesian retreat: Did you hear the one about the frequentist...

    "who defended the reliability of his radiation reading, despite using a broken radiometer, on the grounds that most of the time he uses one that works, so on average he's pretty reliable?"

    or

    "who claimed that observing 'heads' on a biased coin that lands heads with probability .05 is evidence of a statistically significant improvement over the standard treatment of diabetes, on the grounds that such an event occurs with low probability (.05)?"

    Such jests may work for an after-dinner laugh, but if it turns out that, despite being retreads of 'straw-men' fallacies, they form the basis of why some statisticians and philosophers reject frequentist methods, then they are not such a laughing matter.
  • STATISTICAL SCIENCE Volume 36, Number 3 August 2021
    STATISTICAL SCIENCE, Volume 36, Number 3, August 2021

    Khinchin's 1929 Paper on Von Mises' Frequency Theory of Probability (339) – Lukas M. Verburgt
    A Problem in Forensic Science Highlighting the Differences between the Bayes Factor and Likelihood Ratio (344) – Danica M. Ommen and Christopher P. Saunders
    A Horse Race between the Block Maxima Method and the Peak-over-Threshold Approach (360) – Axel Bücher and Chen Zhou
    A Hybrid Scan Gibbs Sampler for Bayesian Models with Latent Variables (379) – Grant Backlund, James P. Hobert, Yeun Ji Jung and Kshitij Khare
    Maximum Likelihood Multiple Imputation: Faster Imputations and Consistent Standard Errors Without Posterior Draws (400) – Paul T. von Hippel and Jonathan W. Bartlett
    Random Matrix Theory and Its Applications (421) – Alan Julian Izenman
    The GENIUS Approach to Robust Mendelian Randomization Inference (443) – Eric Tchetgen Tchetgen, BaoLuo Sun and Stefan Walter
    A General Framework for the Analysis of Adaptive Experiments (465) – Ian C. Marschner

    Statistical Science [ISSN 0883-4237 (print); ISSN 2168-8745 (online)], Volume 36, Number 3, August 2021. Published quarterly by the Institute of Mathematical Statistics, 9760 Smith Road, Waite Hill, Ohio 44094, USA. Periodicals postage paid at Cleveland, Ohio and at additional mailing offices. POSTMASTER: Send address changes to Statistical Science, Institute of Mathematical Statistics, Dues and Subscriptions Office, PO Box 729, Middletown, Maryland 21769, USA. Copyright © 2021 by the Institute of Mathematical Statistics. Printed in the United States of America.
  • A Matching Based Theoretical Framework for Estimating Probability of Causation
    A Matching Based Theoretical Framework for Estimating Probability of Causation. Tapajit Dey and Audris Mockus, Department of Electrical Engineering and Computer Science, University of Tennessee, Knoxville. Preprint, arXiv:1808.04139v1 [stat.ME], 13 Aug 2018.

    Abstract: The concept of Probability of Causation (PC) is critically important in legal contexts and can help in many other domains. While it has been around since 1986, current operationalizations can obtain only the minimum and maximum values of PC, and do not apply for purely observational data. We present a theoretical framework to estimate the distribution of PC from experimental and from purely observational data. We illustrate additional problems of the existing operationalizations and show how our method can be used to address them. We also provide two illustrative examples of how our method is used and how factors like sample size or rarity of events can influence the distribution of PC. We hope this will make the concept of PC more widely usable in practice.

    Keywords: Probability of Causation, Causality, Cause of Effect, Matching

    1. Introduction

    Our understanding of the world comes from our knowledge about the cause-effect relationship between various events. However, in most real-world scenarios, one event is caused by multiple other events, having varied degrees of influence over the first event. There are two major concepts associated with such relationships: Cause of Effect (CoE) and Effect of Cause (EoC) [1, 2]. EoC focuses on the question: if event X is the cause (or one of the causes) of event Y, then what is the probability of event Y given that event X is observed?
CoE, on the other hand, tries to answer the question: if event X is known to be (one of) the cause(s) of event Y, then given that we have observed both events X and Y, what is the probability that event X was in fact the cause of event Y? In this paper, we focus on the CoE scenario.
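The "minimum and maximum values" of PC that the abstract mentions are usually formalized as the Tian–Pearl bounds on the probability of necessity; under exogeneity the lower bound is the excess risk ratio (RR − 1)/RR. The sketch below shows those standard bounds, not the authors' matching framework, and the function name is ours:

```python
def pc_bounds(p_y_given_x, p_y_given_notx):
    """Tian-Pearl bounds on the probability of necessity (a standard
    formalization of PC), assuming exogeneity of the exposure.
    Inputs are the risk of Y under exposure and under non-exposure."""
    # Lower bound: the excess risk ratio (RR - 1) / RR.
    lower = max(0.0, (p_y_given_x - p_y_given_notx) / p_y_given_x)
    # Upper bound: P(not Y | not X) / P(Y | X), capped at 1.
    upper = min(1.0, (1 - p_y_given_notx) / p_y_given_x)
    return lower, upper

# Risk 0.3 among the exposed, 0.1 among the unexposed:
lo, hi = pc_bounds(0.3, 0.1)  # lower bound 2/3, upper bound 1
```

The gap between the two bounds is exactly the indeterminacy that the paper's distributional approach aims to resolve.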
  • December 2000
    THE ISBA BULLETIN, Vol. 7 No. 4, December 2000. The official bulletin of the International Society for Bayesian Analysis.

    A WORD FROM THE PRESIDENT, by Philip Dawid, ISBA President ([email protected])

    Radical Probabilism. Modern Bayesianism is doing a wonderful job in an enormous range of applied activities, supplying modelling, data analysis and inference procedures to nourish parts that other techniques cannot reach. But Bayesianism is far more than a bag of tricks for helping other specialists out with their tricky problems – it is a totally original way of thinking about the world we live in. I was forcibly struck by this when I had to deliver some brief [...]

    [...] already lays out all the elements of the philosophical position that he was to continue to develop and promote (to a largely uncomprehending audience) for the rest of his life. He is utterly uncompromising in his rejection of the realist conception that Probability is somehow "out there in the world", and in his pragmatist emphasis on Subjective Probability as something that can be measured and regulated by suitable instruments (betting behaviour, or proper scoring rules). What de Finetti constructed was, essentially, a whole new theory of logic – in the broad sense of principles for thinking and learning about how the world behaves.

    [...] mere statisticians might have anything to say to them that could possibly be worth listening to. I recently acted as an expert witness for the defence in a murder appeal, which revolved around a variant of the "Prosecutor's Fallacy" (the confusion of P(innocence | evidence) with P(evidence | innocence)).

    Contents: ISBA Elections and Logo (page 2); Interview with Lindley (page 3); New prizes (page 5)
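The Prosecutor's Fallacy that Dawid mentions is a direct misuse of Bayes' theorem: P(innocence | evidence) can stay high even when P(evidence | innocence) is tiny, provided the prior probability of guilt is small. An illustrative calculation with hypothetical numbers (not figures from the case he describes):

```python
def posterior_innocence(prior_innocent, p_evidence_given_innocent,
                        p_evidence_given_guilty):
    """Bayes' theorem for P(innocent | evidence)."""
    num = prior_innocent * p_evidence_given_innocent
    den = num + (1 - prior_innocent) * p_evidence_given_guilty
    return num / den

# Hypothetical: a forensic match with probability 1/10,000 for an
# innocent person, certain for the guilty one, and 100,000 equally
# plausible suspects (prior guilt 1/100,000).
p = posterior_innocence(1 - 1e-5, 1e-4, 1.0)  # about 0.91
```

Even though P(evidence | innocence) is only 0.0001, the posterior probability of innocence is about 0.91; conflating the two conditional probabilities reverses the verdict the evidence actually supports.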
  • ALEXANDER PHILIP DAWID Emeritus Professor of Statistics University of Cambridge E-Mail: [email protected] Web
    ALEXANDER PHILIP DAWID
    Emeritus Professor of Statistics, University of Cambridge
    E-mail: [email protected]; Web: http://www.statslab.cam.ac.uk/~apd

    EDUCATION
    1967–69 University of London (Imperial College; University College)
    1963–67 Cambridge University (Trinity Hall; Darwin College)
    1956–63 City of London School

    QUALIFICATIONS
    1982 ScD (Cantab.)
    1970 MA (Cantab.)
    1967 Diploma in Mathematical Statistics (Cantab.: Distinction)
    1966 BA (Cantab.): Mathematics (Second Class Honours)
    1963 A-levels: Mathematics and Advanced Mathematics (Double Distinction), Physics (Distinction); State Scholarship

    HONOURS AND AWARDS
    2018 Fellow of the Royal Society
    2016 Fellow of the International Society for Bayesian Analysis
    2015 Honorary Lifetime Member, International Society for Bayesian Analysis
    2013 Network Scholar of The MacArthur Foundation Research Network on Law and Neuroscience
    2002 DeGroot Prize for a Published Book in Statistical Science
    2001 Royal Statistical Society: Guy Medal in Silver
    1978 Royal Statistical Society: Guy Medal in Bronze
    1977 G. W. Snedecor Award for Best Publication in Biometry

    EMPLOYMENT ETC.
    2013– Emeritus Professor of Statistics, Cambridge University
    2013– Emeritus Fellow, Darwin College Cambridge
    2007–13 Professor of Statistics and Director of the Cambridge Statistics Initiative, Cambridge University
    2011–13 Director of Studies in Mathematics, Darwin College Cambridge
    2007–13 Professorial Fellow, Darwin College Cambridge
    1989–2007 Pearson Professor of Statistics, University College London
    1983–93 Head, Department of Statistical Science, University College London
    1982–89 Professor of Probability and Statistics, University College London
    1981–82 Reader in Probability and Statistics, University College London
    1978–81 Professor of Statistics, Head of Statistics Section, and Director of the Statistical Laboratory, Department of Mathematics, The City University, London
    1969–78 Lecturer in Statistics, University College London

    VISITING POSITIONS ETC.
  • Theory and Applications of Proper Scoring Rules
    Theory and Applications of Proper Scoring Rules. A. Philip Dawid, University of Cambridge, and Monica Musio, Università di Cagliari.

    Abstract: We give an overview of some uses of proper scoring rules in statistical inference, including frequentist estimation theory and Bayesian model selection with improper priors.

    1. Introduction

    The theory of proper scoring rules (PSRs) originated as an approach to probability forecasting (Dawid 1986), the general enterprise of forming and evaluating probabilistic predictions. The fundamental idea is to develop ways of motivating a forecaster to be honest in the predictions he announces, and of assessing the performance of announced probabilities in the light of the outcomes that eventuate. The early applications of this methodology were to meteorology (Brier 1950) and subjective Bayesianism (Good 1952; de Finetti 1975). However, it is becoming clear that the mathematical theory of PSRs has a wide range of other applications in Statistics generally. The aim of this paper is to give a brief outline of some of these. Some further details will appear in Dawid et al. (2014).

    After setting out the basic ideas and properties of PSRs in § 2, in § 3 we describe a number of important special cases. Section 4 describes how a PSR supplies an alternative to the likelihood function of a statistical model, and how this can be used to develop new estimators that may have useful properties of robustness and/or computational feasibility. In particular, in § 5 we show that, by using an appropriate PSR, it is possible to avoid difficulties associated with intractable normalising constants. In § 6 the same ability to ignore a normalising constant is shown to supply an approach to Bayesian model selection with improper priors.
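The "fundamental idea" of motivating honesty can be seen numerically with the Brier score, one of the classic proper scoring rules: when the event truly occurs with probability p, the forecaster's expected score is minimized by announcing q = p. A small sketch (our own illustration, not code from the paper):

```python
def brier(q, outcome):
    """Brier score for forecast probability q of a binary event."""
    return (outcome - q) ** 2

def expected_brier(q, p):
    """Expected Brier score when the event truly occurs with probability p."""
    return p * brier(q, 1) + (1 - p) * brier(q, 0)

# Scan candidate forecasts on a grid for a true probability of 0.7.
p_true = 0.7
scores = {q / 10: expected_brier(q / 10, p_true) for q in range(11)}
best = min(scores, key=scores.get)  # -> 0.7: honesty minimizes expected loss
```

The same check fails for an improper rule such as the absolute error |outcome − q|, whose expected value is minimized by announcing 0 or 1 rather than the true probability, which is exactly why propriety matters for eliciting honest forecasts.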
  • PHIL 8670 (Fall, 2015): Philosophy of Statistics
    PHIL 8670 (Fall, 2015): Philosophy of Statistics
    Samuel C. Fletcher
    September 12, 2015

    Primary Email: scfl[email protected]
    Office Phone: 612-625-4325
    Office Hours: 754 Heller Hall, Tuesdays 3 pm – 4 pm, and by appointment
    Class Period: 731 Heller Hall, Tuesdays 4 pm – 6 pm

    Course Description

    Statistical methods form a significant part of the methodology of every scientific discipline. This seminar will survey a number of central philosophical and conceptual issues in statistics. After a brief review of elementary probability theory and its interpretation, we will consider at length both the classical and Bayesian schools of statistics: their formulations, and arguments pro and con. We will then consider a third school, likelihoodism, that is sometimes considered a compromise between classical and Bayesian methodologies and has gained favor among some philosophers of science. This leads to the likelihood principle, a natural focal point in the debate between these schools on the nature of evidence. The remainder of the seminar will be split between topics on statistical modeling (model verification and model selection), connections between the philosophy of statistics and more traditional topics in the philosophy of science (simplicity, induction and confirmation) and the relation between statistics and causation (randomness, randomization and causal inference).

    Evaluation

    Each week, there will be a semi-formal presentation of that week's reading assignment aimed at stimulating a critical discussion. For the first meeting (or two) I will be responsible for presenting, and for each of the remaining meetings, one or two students will be responsible. Students not presenting during a given week are responsible instead for submitting a one-paragraph question about a philosophical/conceptual issue arising in that week's readings to those presenting (and me).
  • Fusion Confusion? Comments on Nancy Reid: “BFF Four–Are We Converging?”
    Fusion Confusion? Comments on Nancy Reid: "BFF Four – Are we Converging?"
    Deborah G. Mayo
    The Fourth Bayesian, Fiducial and Frequentist Workshop (BFF4): Harvard University, May 2, 2017

    <1> I'm delighted to be part of a workshop linking statistics and philosophy of statistics! I thank the organizers for inviting me. Nancy Reid's "BFF Four – Are we Converging?" gives numerous avenues for discussion. She zeroes in on obstacles to fusion: confusion or disagreement on the nature of probability and its use in statistical inference.

    <2> From Nancy Reid: Nature of probability.

    Probability to describe physical haphazard variability:
    • probabilities represent features of the "real" world in idealized form
    • subject to empirical test and improvement
    • conclusions of statistical analysis expressed in terms of interpretable parameters
    • enhanced understanding of the data generating process

    Probability to describe the uncertainty of knowledge:
    • measures rational, supposedly impersonal, degree of belief given relevant information (Jeffreys)
    • measures a particular person's degree of belief, subject typically to some constraints of self-consistency ...
    • often linked with personal decision-making

    <3> As is common, she labels the second "epistemological". But a key question for me is: what's relevant for a normative epistemology, for an account of what's warranted/unwarranted to infer.

    <4> Reid quite rightly asks:
    • in what sense are confidence distribution functions, significance functions, structural or fiducial probabilities to be interpreted?
    • empirically? degree of belief?
    • literature is not very clear

    <5> Reid: We may avoid the need for a different version of probability by appeal to a notion of calibration (Cox 2006, Reid & Cox 2015). This is my central focus. I approach this indirectly, with an analogy between philosophy of statistics and statistics.

    <6> Carnap: Bayesians as Popper: Frequentists (N-P/Fisher). Can't solve induction but can build logics of induction or confirmation theories (e.g., Carnap 1962).
  • Computational and Financial Econometrics (CFE 2017)
    CFE-CMStatistics 2017: PROGRAMME AND ABSTRACTS

    11th International Conference on Computational and Financial Econometrics (CFE 2017), http://www.cfenetwork.org/CFE2017, and 10th International Conference of the ERCIM (European Research Consortium for Informatics and Mathematics) Working Group on Computational and Methodological Statistics (CMStatistics 2017), http://www.cmstatistics.org/CMStatistics2017. Senate House & Birkbeck, University of London, UK, 16–18 December 2017.

    © ECOSTA ECONOMETRICS AND STATISTICS. All rights reserved. ISBN 978-9963-2227-4-2. © 2017 ECOSTA ECONOMETRICS AND STATISTICS. Technical Editors: Gil Gonzalez-Rodriguez and Marc Hofmann. All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted, in any other form or by any means without the prior permission from the publisher.

    International Organizing Committee: Ana Colubi, Erricos Kontoghiorghes, Marc Levene, Bernard Rachet, Herman Van Dijk.

    CFE 2017 Co-chairs: Veronika Czellar, Hashem Pesaran, Mike Pitt and Stefan Sperlich.

    CFE 2017 Programme Committee: Knut Are Aastveit, Alessandra Amendola, Josu Arteche, Monica Billio, Roberto Casarin, Gianluca Cubadda, Manfred Deistler, Jean-Marie Dufour, Ekkehard Ernst, Jean-David Fermanian, Catherine Forbes, Philip Hans Franses, Marc Hallin, Alain Hecq, David Hendry, Benjamin Holcblat, Jan Jacobs, Degui Li, Alessandra Luati, Richard Luger, J Isaac Miller, Claudio Morana, Bent
  • Hyper and Structural Markov Laws for Graphical Models
    Hyper and Structural Markov Laws for Graphical Models. Simon Byrne, Clare College and Statistical Laboratory, Department of Pure Mathematics and Mathematical Statistics. This dissertation is submitted for the degree of Doctor of Philosophy, University of Cambridge, September 2011.

    Declaration: This dissertation is the result of my own work and includes nothing which is the outcome of work done in collaboration except where specifically indicated in the text.

    Acknowledgements: I would like to thank everyone who helped in the writing of this thesis, especially my supervisor Philip Dawid for the thoughtful discussions and advice over the past three years. I would also like to thank my colleagues in the Statistical Laboratory for their support and friendship. Finally I would like to thank my family, especially Rebecca, without whom I would never have been able to complete this work.

    Summary: My thesis focuses on the parameterisation and estimation of graphical models, based on the concept of hyper and meta Markov properties. These state that the parameters should exhibit conditional independencies, similar to those on the sample space. When these properties are satisfied, parameter estimation may be performed locally, i.e. the estimators for certain subsets of the graph are determined entirely by the data corresponding to the subset. Firstly, I discuss the applications of these properties to the analysis of case-control studies. It has long been established that the maximum likelihood estimates for the odds-ratio may be found by logistic regression, in other words, the "incorrect" prospective model is equivalent to the correct retrospective model.
  • Probability Forecasts and Prediction Markets A
    PROBABILITY FORECASTS AND PREDICTION MARKETS
    A. Philip Dawid and Julia Mortera
    ISSN 2279-6916, Working Papers (Dipartimento di Economia, Università degli Studi Roma Tre) (online), Working Paper no. 250, 2019

    [Publisher's note, translated from the Italian: The Department of Economics Working Papers serve to disseminate promptly, in definitive or provisional form, the results of original scientific research. Their publication is subject to the approval of the Scientific Committee. Each publication fulfils the obligations laid down by Art. 1 of D.L.L. 31.8.1945, no. 660 and subsequent amendments. Copies of this publication may be requested from the Editorial Office. Copy not for sale pursuant to the law of 14 April 2004, no. 106. Editorial Office: Dipartimento di Economia, Università degli Studi Roma Tre, Via Silvio D'Amico, 77 - 00145 Roma; Tel. 0039-06-57335655, fax 0039-06-57335771; E-mail: [email protected]; http://dipeco.uniroma3.it. Scientific Committee: Fabrizio De Filippis, Francesco Giuli, Anna Giunta, Paolo Lazzara, Loretta Mastroeni, Silvia Terzi.]

    Probability Forecasts and Prediction Markets. Julia Mortera and A. Philip Dawid.

    Abstract: We give an overview of various topics tied to the expression of uncertainty about a variable or event by means of a probability distribution. We first consider methods used to evaluate a single probability forecaster and then revisit methods for combining several experts' distributions. We describe the implications of imposing coherence constraints, based on a specific understanding of "expertise". In the final part we revisit from a statistical standpoint some results in the economics literature on prediction markets, where individuals sequentially place bets on the outcome of a future event.
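One elementary baseline for "combining several experts' distributions" is the linear opinion pool, a weighted average of the announced probabilities; the paper's coherence-based analysis goes well beyond this, so the sketch below is only a hedged illustration (the function name is ours):

```python
def linear_pool(probs, weights=None):
    """Linear opinion pool: weighted average of expert probabilities
    for the same binary event; weights default to uniform."""
    if weights is None:
        weights = [1 / len(probs)] * len(probs)
    assert abs(sum(weights) - 1) < 1e-9, "weights must sum to 1"
    return sum(w * p for w, p in zip(weights, probs))

# Three experts announce 0.6, 0.7 and 0.9 for the same event.
combined = linear_pool([0.6, 0.7, 0.9])
```

The linear pool is simple but not coherence-preserving in general (for instance, it does not commute with Bayesian updating), which is one motivation for the more refined combination rules the paper studies.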