Fiducial and Structural Statistical Inference


Fiducial inference is a statistical approach to interval estimation first advocated by R. A. Fisher as an alternative to the then dominant method of inverse probability, i.e., using Bayes' Theorem. Considerable effort has gone into formalizing Fisher's notions using such concepts as statistical invariance and pivotal quantities. This entry describes elements of the fiducial approach and relates them to other currently more widely used statistical approaches to inference. Section 1 introduces some basic inferential ideas via a simple example.

1. A Simple Example

Consider a variable y that is directly available or has arisen by some preliminary reduction process, and suppose that y measures θ in an unbiased manner and has error that is normal with known variance σ₀². Then we can say, for example, that P(y < θ + 1.64σ₀; θ) = 95 percent.

Parenthetically, we note that the preliminary reduction could have occurred as part of the underlying investigation or as part of some subsequent simplification of the statistical model by one of the common statistical reduction methods, sufficiency or conditionality. If the reduction is to a sufficient statistic, then the conditional distribution describing possible antecedent data has no dependence on the parameter, and the model for the sufficient statistic is used in place of the original model. If the reduction is by conditionality, then there is typically an ancillary variable with a distribution free of the parameter, and the given model for the possible original data is replaced by the conditional model given the ancillary (supportive) variable; see also Statistical Sufficiency.

With a data value y⁰ the fiducial methodology would take the above probability expression, substitute the value y⁰, and then treat θ as the variable for the probability statement, thus giving the 95 percent fiducial probability statement P(θ > y⁰ − 1.64σ₀; y⁰) = 95 percent. The structural approach would consider the normal (0, σ₀²) distribution for the error e = y − θ and might record, for example, the probability P(e < 1.64σ₀) = 95 percent; then with data value y⁰ the probability statement would be applied to the error e = y⁰ − θ, giving the structural probability P(y⁰ − θ < 1.64σ₀; y⁰) = 95 percent, or equivalently P(θ > y⁰ − 1.64σ₀; y⁰) = 95 percent. In a somewhat related manner the Bayesian methodology might use a uniform prior c dθ and obtain a posterior distribution for θ that would have P(θ > y⁰ − 1.64σ₀; y⁰) = 95 percent. For this simple example the three methods give the same result at the 95 percent level, and also at other levels, thus saying essentially that with data value y⁰ the probability distribution describing the unknown θ is normal (y⁰, σ₀²).

With more complicated models the results from the three methodologies can differ, and philosophical arguments concerning substance and relative merits arise. However, for one straightforward generalization the methods remain in agreement: the normal distribution can be replaced by some alternative distribution form; this is discussed in some detail in Sect. 2.
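The agreement of the three approaches in the simple example can be checked numerically. The following is a minimal stdlib-only Python sketch; the values y0 and sigma0 are hypothetical, chosen only for illustration.

```python
from math import erf, sqrt
import random

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Hypothetical observed value y0 and known error standard deviation sigma0.
y0, sigma0 = 10.0, 2.0

# Fiducial/structural/flat-prior-Bayesian answer for this model:
# theta is normal(y0, sigma0^2), so the probability that theta exceeds
# the lower fiducial limit y0 - 1.64*sigma0 is Phi(1.64) ~ 0.95.
p = 1.0 - norm_cdf(((y0 - 1.64 * sigma0) - y0) / sigma0)
print(round(p, 3))  # about 0.95

# Monte Carlo check of the matching frequentist statement
# P(y < theta + 1.64*sigma0; theta) = 95 percent, for a fixed "true" theta.
random.seed(1)
theta = 3.0
cover = sum(random.gauss(theta, sigma0) < theta + 1.64 * sigma0
            for _ in range(100_000)) / 100_000
print(round(cover, 2))  # about 0.95
```

Note that the two printed numbers agree: the fiducial probability statement about θ given y⁰ has the same numerical value as the sampling probability statement about y given θ.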
2. Fiducial Probability

Fisher (1922, 1925; see also Fisher, Ronald A (1890–1962)) had already introduced most of the fundamental concepts of statistical theory, such as sufficiency, likelihood, efficiency, and exhaustiveness (minimal sufficiency), when he chose (Fisher 1930) to address directly the aspiration mentioned above. He took Laplace and Gauss to task for 'fall(ing) into error on a question of prime theoretical importance' by adopting the Bayesian approach that 'Bayes (had) tentatively wished to postulate in a special case' and which was published posthumously (Bayes, ibid). He then proposed, in a restricted context, the fiducial method, as discussed.

Neyman and Pearson (1933) then gave a mathematical formulation of fiducial probability that became known as confidence intervals. Fisher (1956), however, treated Neyman and Pearson's formulation as a 'misconception having some troubling consequences …'; logical and philosophical arguments between the two sides were intense for many years. In particular, the slight to Laplace and Gauss may well have affected the views of the more mathematical participants.

Fisher (1930) entitled his paper 'Inverse Probability' and examined a statistic t(y) whose distribution depended on a single parameter θ. Let P = F(t; θ) be the distribution function of t, and let P itself be what we might now call a p-value for assessing θ; of course, in the usual continuous case P has the uniform distribution on (0, 1). 'If now we give P any particular value such as 0.95, we have … the perfectly objective fact that in 5 percent of samples' t 'will exceed the 95 percent value corresponding to the actual value of' θ '….' Then to 'any value of' t 'there will moreover be usually a particular value of θ to which it bears this relationship; we may call this the "fiducial 5 percent value of θ" corresponding to' the given t. This led to Neyman and Pearson's (ibid) confidence methodology, but Fisher treated this as a misconception and he followed different directions and interpretations for the fiducial methodology; for a view on related approaches see Estimation: Point and Interval. For this present simple case with scalar t and scalar θ, there seems little difference between the fiducial and the confidence approaches and interpretations.

This discussion effectively ascribes a distribution to θ based on an observed t; this is called the fiducial distribution for θ. Just as the density of t for given θ is obtained as (∂/∂t)F(t; θ), so also the fiducial density is obtained as (−∂/∂θ)F(t; θ); the negative sign is inconsequential and is merely the result of F(t; θ) being examined typically for the case that is increasing with θ.

With a sample y₁, …, yₙ from the normal (µ, σ²) distribution, the reduction would be to t(y) = (ȳ, s_y). The methodology then suggests the use of a pivotal quantity p = p(t; θ) with a fixed distribution and a one-one relationship between any two of p, t, θ; recall more generally that a pivotal quantity is a function of the variable and parameter that provides a measure of departure of the variable value from the parameter value, and has a fixed distribution which allows an assessment of an observed departure. Thus for the example a natural pivotal is

    z = (ȳ − µ)/(σ/√n),    χ²_{n−1} = (n − 1)s_y²/σ²

which has independent components, normal (0, 1) and chi-square with n − 1 degrees of freedom. Fisher (1956, p. 172), however, rather deviously rejected this as a legitimate part of the fiducial methodology, but he was somewhat less explicit about what would be legitimate. This pivotal reduction procedure is now, however, a rather familiar component of standard inference theory and in particular of confidence theory.

The final step is to invert the pivotal quantity, that is, to insert the observed values for the variables and then transform the distribution of the pivotal quantity to the parameter. For the example this gives

    µ = ȳ − {z/((n − 1)^{−1/2} χ_{n−1})} s_y/√n,    σ² = (n − 1)s_y²/χ²_{n−1}

where ȳ, s_y have their observed values. We can then write

    µ = ȳ − t s_y/√n

where t is Student with n − 1 degrees of freedom. This fiducial calculation closely parallels that for the confidence approach, except that the limits here are calculated from the Student distribution for µ rather than from the Student distribution of the pivotal t. In quite wide generality fiducial regions can correspond to confidence regions; it is just a matter of whether the limits are calculated before or after the data are observed, a non-issue from the Bayesian viewpoint.