
CONTENTS

General Preface (Dov Gabbay, Paul Thagard, and John Woods) vii
Preface (Prasanta S. Bandyopadhyay and Malcolm R. Forster) ix
List of Contributors xiii

Introduction
    Philosophy of Statistics: An Introduction (Prasanta S. Bandyopadhyay and Malcolm R. Forster) 1

Part I. Probability & Statistics
    Elementary Probability and Statistics: A Primer (Prasanta S. Bandyopadhyay and Steve Cherry) 53

Part II. Philosophical Controversies about Conditional Probability
    Conditional Probability (Alan Hájek) 99
    The Varieties of Conditional Probability (Kenny Easwaran) 137

Part III. Four Paradigms of Statistics
  Classical Statistics Paradigm
    Error Statistics (Deborah G. Mayo and Aris Spanos) 153
    Significance Testing (Michael Dickson and Davis Baird) 199
  Bayesian Paradigm
    The Bayesian Decision-Theoretic Approach to Statistics (Paul Weirich) 233
    Modern Bayesian Inference: Foundations and Objective Methods (José M. Bernardo) 263
    Evidential Probability and Objective Bayesian Epistemology (Gregory Wheeler and Jon Williamson) 307
    Confirmation Theory (James Hawthorne) 333
    Challenges to Bayesian Confirmation Theory (John D. Norton) 391
    Bayesianism as a Pure Logic of Inference (Colin Howson) 441
    Bayesian Inductive Logic, Verisimilitude, and Statistics (Roberto Festa) 473
  Likelihood Paradigm
    Likelihood and its Evidential Framework (Jeffrey D. Blume) 493
    Evidence, Evidence Functions, and Error Probabilities (Mark L. Taper and Subhash R. Lele) 513
  Akaikean Paradigm
    AIC Scores as Evidence — a Bayesian Interpretation (Malcolm Forster and Elliott Sober) 535

Part IV. The Likelihood Principle
    The Likelihood Principle (Jason Grossman) 553

Part V. Recent Advances in Model Selection
    AIC, BIC and Recent Advances in Model Selection (Arijit Chakrabarti and Jayanta K. Ghosh) 583
    Posterior Model Probabilities (A. Philip Dawid) 607

Part VI. Attempts to Understand Different Aspects of "Randomness"
    Defining Randomness (Deborah Bennett) 633
    Mathematical Foundations of Randomness (Abhijit Dasgupta) 641

Part VII. Probabilistic and Statistical Paradoxes
    Paradoxes of Probability (Susan Vineberg) 713
    Statistical Paradoxes: Take It to The Limit (C. Andy Tsao) 737

Part VIII. Statistics and Inductive Inference
    Statistics as Inductive Inference (Jan-Willem Romeijn) 751

Part IX. Various Issues about Causal Inference
    Common Cause in Causal Inference (Peter Spirtes) 777
    The Logic and Philosophy of Causal Inference: A Statistical Perspective (Sander Greenland) 813

Part X. Some Philosophical Issues Concerning Statistical Learning Theory
    Statistical Learning Theory as a Framework for the Philosophy of Induction (Gilbert Harman and Sanjeev Kulkarni) 833
    Testability and Statistical Learning Theory (Daniel Steel) 849

Part XI. Different Approaches to Simplicity Related to Inference and Truth
    Luckiness and Regret in Minimum Description Length Inference (Steven de Rooij and Peter D. Grünwald) 865
    MML, Hybrid Bayesian Network Graphical Models, Statistical Consistency, Invariance and Uniqueness (David L. Dowe) 901
    Simplicity, Truth and Probability (Kevin T. Kelly) 983

Part XII. Special Problems in Statistics/Computer Science
    Normal Approximations (Robert J. Boik) 1027
    Stein's Phenomenon (Richard Charnigo and Cidambi Srinivasan) 1073
    Data, Data, Everywhere: Statistical Issues in Data Mining (Choh Man Teng) 1099

Part XIII. An Application of Statistics to Climate Change
    An Application of Statistics in Climate Change: Detection of Nonlinear Changes in a Streamflow Timing Measure in the Columbia and Missouri Headwaters (Mark C. Greenwood, Joel Harper and Johnnie Moore) 1121

Part XIV. Historical Approaches to Probability/Statistics
    The Subjective and the Objective (Sandy L. Zabell) 1149
    Probability in Ancient India (C. K. Raju) 1175

Index 1197


MODERN BAYESIAN INFERENCE: FOUNDATIONS AND OBJECTIVE METHODS

José M. Bernardo

The field of statistics includes two major paradigms: frequentist and Bayesian. Bayesian methods provide a complete paradigm for both statistical inference and decision making under uncertainty. They may be derived from an axiomatic system, and they provide a coherent methodology which makes it possible to incorporate relevant initial information and which solves many of the difficulties that frequentist methods are known to face. If no prior information is to be assumed, a situation often met in scientific reporting and public decision making, a formal initial prior function must be mathematically derived from the assumed model. This leads to objective Bayesian methods, objective in the precise sense that their results, like frequentist results, depend only on the assumed model and the data obtained. The Bayesian paradigm is based on an interpretation of probability as a rational conditional measure of uncertainty, which closely matches the sense of the word 'probability' in ordinary language. Statistical inference about a quantity of interest is described as the modification of the uncertainty about its value in the light of evidence, and Bayes' theorem specifies precisely how this modification should be made.

1 INTRODUCTION

Scientific experimental or observational results generally consist of (possibly many) sets of data of the general form D = {x₁, …, xₙ}, where the xᵢ's are somewhat "homogeneous" (possibly multidimensional) observations. Statistical methods are then typically used to derive conclusions both on the nature of the process which has produced those observations and on the expected behaviour at future instances of the same process. A central element of any statistical analysis is the specification of a probability model which is assumed to describe the mechanism that has generated the observed data D as a function of a (possibly multidimensional) parameter (vector) ω ∈ Ω, sometimes referred to as the state of nature, about whose value only limited information (if any) is available. All derived statistical conclusions are obviously conditional on the assumed probability model.

Unlike most other branches of mathematics, frequentist methods of statistical inference suffer from the lack of an axiomatic basis; as a consequence, their proposed desiderata are often mutually incompatible, and the analysis of the same data may well lead to incompatible results when different, apparently intuitive procedures are tried; see Lindley [1972] and Jaynes [1976] for many instructive examples. In marked contrast, the Bayesian approach to statistical inference is firmly based on axiomatic foundations which provide a unifying logical structure and guarantee the mutual consistency of the methods proposed. Bayesian methods constitute a complete paradigm for statistical inference, a scientific revolution in Kuhn's sense.
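In the notation just introduced, the updating rule mentioned above can be made explicit. With p(ω) denoting a prior density describing the initial uncertainty about ω, and p(D | ω) the assumed probability model (the conventional symbols, written out here for illustration), Bayes' theorem states that

    p(ω | D) = p(D | ω) p(ω) / ∫_Ω p(D | ω) p(ω) dω ∝ p(D | ω) p(ω),

so that the posterior density p(ω | D) describes all the uncertainty about the value of ω which remains after the data D have been observed; the normalizing integral does not depend on ω.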
Bayesian statistics only require the mathematics of probability theory and the interpretation of probability which most closely corresponds to the standard use of this word in everyday language: it is no accident that some of the more important seminal books on Bayesian statistics, such as the works of de Laplace [1812], Jeffreys [1939] or de Finetti [1970], are actually entitled "Probability Theory". The practical consequences of adopting the Bayesian paradigm are far reaching. Indeed, Bayesian methods (i) reduce statistical inference to problems in probability theory, thereby minimizing the need for completely new concepts, and (ii) serve to discriminate among conventional, typically frequentist, statistical techniques, by either providing a logical justification for some (and making explicit the conditions under which they are valid), or proving the logical inconsistency of others.

The main result from these foundations is the mathematical need to describe all uncertainties present in the problem by means of probability distributions. In particular, unknown parameters in probability models must have a joint probability distribution which describes the available information about their values; this is often regarded as the characteristic element of a Bayesian approach. Notice that (in sharp contrast to conventional statistics) parameters are treated as random variables within the Bayesian paradigm. This is not a description of their variability (parameters are typically fixed unknown quantities), but a description of the uncertainty about their true values.

A most important particular case arises when either no relevant prior information is readily available, or that information is subjective and an "objective" analysis is desired, one exclusively based on accepted model assumptions and well-documented public prior information. This is addressed by reference analysis, which uses information-theoretic concepts to derive formal reference prior functions which, when used in Bayes' theorem, lead to posterior distributions encapsulating inferential conclusions on the quantities of interest based solely on the assumed model and the observed data (a minimal numerical sketch is given at the end of this section).

In this article it is assumed that probability distributions may be described through their probability density functions, and no distinction is made between a random quantity and the particular values that it may take. Bold italic roman fonts are used for observable random vectors (typically data) and bold italic greek fonts for unobservable random vectors (typically parameters); lower case is used for variables and calligraphic upper case for their domain sets. Moreover, the standard mathematical convention of referring to functions, say f and g of x ∈ X, respectively by f(x) and g(x), will be used throughout.
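As a concrete illustration of the kind of result reference analysis delivers, consider the standard Bernoulli example (a sketch supplied here for orientation; the data values below are hypothetical). For n conditionally independent Bernoulli trials with unknown parameter θ, the information-theoretic derivation yields the reference prior π(θ) ∝ θ^(-1/2)(1 - θ)^(-1/2), a Beta(1/2, 1/2) density which in this one-parameter case coincides with Jeffreys' prior; after observing r successes in n trials, Bayes' theorem gives the Beta(r + 1/2, n - r + 1/2) posterior. A minimal Python sketch of the computation:

from scipy.stats import beta

# Reference prior for the Bernoulli model: Beta(1/2, 1/2)
# (coincides with Jeffreys' prior in this one-parameter case).
a0, b0 = 0.5, 0.5

# Hypothetical data: r successes observed in n Bernoulli trials.
n, r = 20, 14

# Conjugate updating: Beta prior and Bernoulli likelihood combine, via
# Bayes' theorem, into a Beta(r + 1/2, n - r + 1/2) posterior for theta.
posterior = beta(r + a0, n - r + b0)

print(posterior.mean())            # posterior mean of theta
print(posterior.interval(0.95))    # central 95% posterior credible interval

The posterior, and hence any interval reported from it, depends only on the assumed Bernoulli model and the observed data, which is the precise sense of 'objective' used above.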