Confidence, Likelihood, Probability

Total Pages: 16

File Type: PDF, Size: 1020 KB

Confidence, Likelihood, Probability

This lively book lays out a methodology of confidence distributions and puts them through their paces. Among other merits, they lead to optimal combinations of confidence from different sources of information, and they can make complex models amenable to objective and indeed prior-free analysis for less subjectively inclined statisticians. The generous mixture of theory, illustrations, applications and exercises is suitable for statisticians at all levels of experience, as well as for data-oriented scientists. Some confidence distributions are less dispersed than their competitors. This concept leads to a theory of risk functions and comparisons for distributions of confidence. Neyman–Pearson type theorems leading to optimal confidence are developed and richly illustrated. Exact and optimal confidence distributions are the gold standard for inferred epistemic distributions in empirical sciences. Confidence distributions and likelihood functions are intertwined, allowing prior distributions to be made part of the likelihood. Meta-analysis in likelihood terms is developed and taken beyond traditional methods, suiting it in particular to combining information across diverse data sources.

TORE SCHWEDER is a professor of statistics in the Department of Economics and at the Centre for Ecology and Evolutionary Synthesis at the University of Oslo.

NILS LID HJORT is a professor of mathematical statistics in the Department of Mathematics at the University of Oslo.

CAMBRIDGE SERIES IN STATISTICAL AND PROBABILISTIC MATHEMATICS

Editorial Board
Z. Ghahramani (Department of Engineering, University of Cambridge)
R. Gill (Mathematical Institute, Leiden University)
F. P. Kelly (Department of Pure Mathematics and Mathematical Statistics, University of Cambridge)
B. D. Ripley (Department of Statistics, University of Oxford)
S. Ross (Department of Industrial and Systems Engineering, University of Southern California)
M. Stein (Department of Statistics, University of Chicago)

This series of high-quality upper-division textbooks and expository monographs covers all aspects of stochastic applicable mathematics. The topics range from pure and applied statistics to probability theory, operations research, optimization and mathematical programming. The books contain clear presentations of new developments in the field and also of the state of the art in classical methods. While emphasizing rigorous treatment of theoretical methods, the books also contain applications and discussions of new techniques made possible by advances in computational practice. A complete list of books in the series can be found at www.cambridge.org/statistics. Recent titles include the following:

15. Measure Theory and Filtering, by Lakhdar Aggoun and Robert Elliott
16. Essentials of Statistical Inference, by G. A. Young and R. L. Smith
17. Elements of Distribution Theory, by Thomas A. Severini
18. Statistical Mechanics of Disordered Systems, by Anton Bovier
19. The Coordinate-Free Approach to Linear Models, by Michael J. Wichura
20. Random Graph Dynamics, by Rick Durrett
21. Networks, by Peter Whittle
22. Saddlepoint Approximations with Applications, by Ronald W. Butler
23. Applied Asymptotics, by A. R. Brazzale, A. C. Davison and N. Reid
24. Random Networks for Communication, by Massimo Franceschetti and Ronald Meester
25. Design of Comparative Experiments, by R. A. Bailey
26. Symmetry Studies, by Marlos A. G. Viana
27. Model Selection and Model Averaging, by Gerda Claeskens and Nils Lid Hjort
28. Bayesian Nonparametrics, edited by Nils Lid Hjort et al.
29. From Finite Sample to Asymptotic Methods in Statistics, by Pranab K. Sen, Julio M. Singer and Antonio C. Pedrosa de Lima
30. Brownian Motion, by Peter Mörters and Yuval Peres
31. Probability (Fourth Edition), by Rick Durrett
33. Stochastic Processes, by Richard F. Bass
34. Regression for Categorical Data, by Gerhard Tutz
35. Exercises in Probability (Second Edition), by Loïc Chaumont and Marc Yor
36. Statistical Principles for the Design of Experiments, by R. Mead, S. G. Gilmour and A. Mead
37. Quantum Stochastics, by Mou-Hsiung Chang
38. Nonparametric Estimation under Shape Constraints, by Piet Groeneboom and Geurt Jongbloed
39. Large Sample Covariance Matrices, by Jianfeng Yao, Zhidong Bai and Shurong Zheng
40. Mathematical Foundations of Infinite-Dimensional Statistical Models, by Evarist Giné and Richard Nickl

Confidence, Likelihood, Probability
Statistical Inference with Confidence Distributions
Tore Schweder, University of Oslo
Nils Lid Hjort, University of Oslo

32 Avenue of the Americas, New York, NY 10013-2473, USA. Cambridge University Press is part of the University of Cambridge. It furthers the University's mission by disseminating knowledge in the pursuit of education, learning, and research at the highest international levels of excellence. www.cambridge.org. Information on this title: www.cambridge.org/9780521861601.
© Tore Schweder and Nils Lid Hjort 2016. This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press. First published 2016. Printed in the United States of America. A catalog record for this publication is available from the British Library.
Library of Congress Cataloging in Publication Data: Schweder, Tore. Confidence, likelihood, probability : statistical inference with confidence distributions / Tore Schweder, University of Oslo, Nils Lid Hjort, University of Oslo. pages cm. – (Cambridge series in statistical and probabilistic mathematics) Includes bibliographical references and index. ISBN 978-0-521-86160-1 (hardback : alk. paper) 1. Observed confidence levels (Statistics) 2. Mathematical statistics. 3. Probabilities. I. Hjort, Nils Lid. II. Title. QA277.5.S39 2016 519.2–dc23 2015016878.
ISBN 978-0-521-86160-1 Hardback. Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party Internet Web sites referred to in this publication and does not guarantee that any content on such Web sites is, or will remain, accurate or appropriate.

To my four children – T.S.
To my four brothers – N.L.H.

Contents

Preface   xiii

1 Confidence, likelihood, probability: An invitation   1
1.1 Introduction   1
1.2 Probability   4
1.3 Inverse probability   6
1.4 Likelihood   7
1.5 Frequentism   8
1.6 Confidence and confidence curves   10
1.7 Fiducial probability and confidence   14
1.8 Why not go Bayesian?   16
1.9 Notes on the literature   19

2 Inference in parametric models   23
2.1 Introduction   23
2.2 Likelihood methods and first-order large-sample theory   24
2.3 Sufficiency and the likelihood principle   30
2.4 Focus parameters, pivots and profile likelihoods   32
2.5 Bayesian inference   40
2.6 Related themes and issues   42
2.7 Notes on the literature   48
Exercises   50

3 Confidence distributions   55
3.1 Introduction   55
3.2 Confidence distributions and statistical inference   56
3.3 Graphical focus summaries   65
3.4 General likelihood-based recipes   69
3.5 Confidence distributions for the linear regression model   72
3.6 Contingency tables   78
3.7 Testing hypotheses via confidence for alternatives   80
3.8 Confidence for discrete parameters   83
3.9 Notes on the literature   91
Exercises   92

4 Further developments for confidence distribution   100
4.1 Introduction   100
4.2 Bounded parameters and bounded confidence   100
4.3 Random and mixed effects models   107
4.4 The Neyman–Scott problem   111
4.5 Multimodality   115
4.6 Ratio of two normal means   117
4.7 Hazard rate models   122
4.8 Confidence inference for Markov chains   128
4.9 Time series and models with dependence   133
4.10 Bivariate distributions and the average confidence density   138
4.11 Deviance intervals versus minimum length intervals   140
4.12 Notes on the literature   142
Exercises   144

5 Invariance, sufficiency and optimality for confidence distributions   154
5.1 Confidence power   154
5.2 Invariance for confidence distributions   157
5.3 Loss and risk functions for confidence distributions   161
5.4 Sufficiency and risk for confidence distributions   165
5.5 Uniformly optimal confidence for exponential families   173
5.6 Optimality of component confidence distributions   177
5.7 Notes on the literature   179
Exercises   180

6 The fiducial argument   185
6.1 The initial argument   185
6.2 The controversy   188
6.3 Paradoxes   191
6.4 Fiducial distributions and Bayesian posteriors   193
6.5 Coherence by restricting the range: Invariance or irrelevance?   194
6.6 Generalised fiducial inference   197
6.7 Further remarks   200
6.8 Notes on the literature   201
Exercises   202

7 Improved approximations for confidence distributions   204
7.1 Introduction   204
7.2 From first-order to second-order approximations   205
7.3 Pivot tuning   208
7.4 Bartlett corrections for the deviance   210
7.5 Median-bias correction   214
7.6 The t-bootstrap and abc-bootstrap method   217
7.7 Saddlepoint approximations and ...
Recommended publications
  • The Copula Information Criteria
Dept. of Math., University of Oslo, Statistical Research Report No. 7, ISSN 0806-3842, June 2008

THE COPULA INFORMATION CRITERIA
Steffen Grønneberg and Nils Lid Hjort

Abstract. When estimating parametric copula models by the semiparametric pseudo maximum likelihood procedure (MPLE), many practitioners have used the Akaike Information Criterion (AIC) for model selection in spite of the fact that the AIC formula has no theoretical basis in this setting. We adapt the arguments leading to the original AIC formula in the fully parametric case to the MPLE. This gives a significantly different formula from the AIC, which we name the Copula Information Criterion (CIC). However, we also show that such a model-selection procedure cannot exist for a large class of commonly used copula models. We note that this research report is a revision of a research report dated June 2008. The current version incorporates corrections of the proof of Theorem 1. The conclusions of the previous manuscript are still valid, however.

1. Introduction and summary. Suppose we are given independent, identically distributed $d$-dimensional observations $X_1, X_2, \ldots, X_n$ with density $f^\circ(x)$ and distribution function
$$F^\circ(x) = P(X_{i,1} \le x_1, X_{i,2} \le x_2, \ldots, X_{i,d} \le x_d) = C^\circ(F_\perp^\circ(x)).$$
Here, $C^\circ$ is the copula of $F^\circ$ and $F_\perp^\circ$ is the vector of marginal distributions of $F^\circ$, that is,
$$F_\perp^\circ(x) := (F_1^\circ(x_1), \ldots, F_d^\circ(x_d)), \qquad F_j^\circ(x_j) = P(X_{i,j} \le x_j).$$
Given a parametric copula model expressed through a set of densities $c(u; \theta)$ for $\theta \in \Theta \subseteq \mathbb{R}^p$ and $u \in [0,1]^d$, the maximum pseudo likelihood estimator $\hat\theta_n$, also ...
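The MPLE step this abstract refers to, replacing each margin by its rescaled empirical distribution and then maximizing the copula likelihood of the resulting pseudo-observations, can be sketched in a few lines. The following Python sketch uses a bivariate Gaussian copula and simulated data purely for illustration; the function names, the simulated marginals and the choice of copula family are assumptions of this sketch, not details taken from the report.

```python
# Hedged sketch of semiparametric pseudo maximum likelihood (MPLE) for a
# bivariate Gaussian copula.  Data generation and names are illustrative.
import numpy as np
from scipy.stats import norm, rankdata
from scipy.optimize import minimize_scalar

def pseudo_observations(x):
    """Rank-transform each margin to (0, 1): the empirical-margin step of the MPLE."""
    n = x.shape[0]
    return np.column_stack([rankdata(x[:, j]) / (n + 1) for j in range(x.shape[1])])

def gaussian_copula_loglik(rho, u):
    """Log pseudo-likelihood of a bivariate Gaussian copula with correlation rho."""
    z = norm.ppf(u)
    z1, z2 = z[:, 0], z[:, 1]
    quad = (2 * rho * z1 * z2 - rho**2 * (z1**2 + z2**2)) / (2 * (1 - rho**2))
    return np.sum(-0.5 * np.log(1 - rho**2) + quad)

rng = np.random.default_rng(1)
# simulated data with dependence; the marginals are deliberately non-normal
latent = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=500)
x = np.column_stack([np.exp(latent[:, 0]), latent[:, 1] ** 3])

u = pseudo_observations(x)
fit = minimize_scalar(lambda r: -gaussian_copula_loglik(r, u),
                      bounds=(-0.99, 0.99), method="bounded")
print("MPLE of copula correlation:", round(fit.x, 3))
```

Because the marginal transforms are monotone, the underlying copula correlation (0.6 here) is preserved, so the printed estimate should land close to it.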
  • Mathematical Statistics (Math 30) – Course Project for Spring 2011
Mathematical Statistics (Math 30) – Course Project for Spring 2011

Goals: to explore methods from class in a real-life (historical) estimation setting; to learn some basic statistical computational skills; to explore a topic of interest via a short report on a selected paper. The course project has been divided into two parts. Each part will account for 5% of your grade, since the project as a whole accounts for 10%.

Part 1: German Tank Problem. Due date: April 8th.

Our setting: The Purple (Amherst) and Gold (Williams) armies are at war. You are a highly ranked intelligence officer in the Purple army and you are given the job of estimating the number of tanks in the Gold army's tank fleet. Luckily, the Gold army has decided to put sequential serial numbers on their tanks, so you can consider their tanks labeled as 1, 2, 3, ..., N, and all you have to do is estimate N. Suppose you have access to a random sample of k serial numbers obtained by spies, found on abandoned/destroyed equipment, etc. Using your k observations, you need to estimate N. Note that we won't deal with issues of sampling without replacement, so you can consider each observation as an independent observation from a discrete uniform distribution on 1 to N. The derivations below should lead you to consider several possible estimators of N, from which you will be picking one to propose as your estimate of N.

Historical setting: This was a real problem encountered by statisticians and intelligence officers in WWII. The Allies wanted to know how many tanks the Germans were producing per month (the procedure was used for things other than tanks too).
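As a small illustration of the kind of comparison the project asks for, the sketch below simulates the sampling scheme and contrasts three standard candidate estimators of N. The specific estimators, sample size and number of replications are choices of this sketch, not prescribed by the project text; the third formula is the classical adjustment from the without-replacement version of the problem, used here only for comparison.

```python
# Hedged simulation comparing candidate estimators of N in the German tank setting.
import numpy as np

rng = np.random.default_rng(0)
N_true, k, reps = 250, 10, 20000

# k independent draws from a discrete uniform on 1..N (with replacement,
# as the project instructions allow)
draws = rng.integers(1, N_true + 1, size=(reps, k))
m = draws.max(axis=1)

estimators = {
    "sample maximum (MLE)": m.astype(float),
    "method of moments 2*xbar - 1": 2 * draws.mean(axis=1) - 1,
    "adjusted maximum m + m/k - 1": m + m / k - 1,
}
for name, est in estimators.items():
    bias = est.mean() - N_true
    rmse = np.sqrt(np.mean((est - N_true) ** 2))
    print(f"{name:30s} bias = {bias:7.2f}   RMSE = {rmse:6.2f}")
```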
  • Local Variation As a Statistical Hypothesis Test
Int J Comput Vis (2016) 117:131–141, DOI 10.1007/s11263-015-0855-4

Local Variation as a Statistical Hypothesis Test
Michael Baltaxe (1), Peter Meer (2), Michael Lindenbaum (3)
Received: 7 June 2014 / Accepted: 1 September 2015 / Published online: 16 September 2015
© Springer Science+Business Media New York 2015

Abstract: The goal of image oversegmentation is to divide an image into several pieces, each of which should ideally be part of an object. One of the simplest and yet most effective oversegmentation algorithms is known as local variation (LV) (Felzenszwalb and Huttenlocher, Efficient graph-based image segmentation, IJCV 59(2):167–181, 2004). In this work, we study this algorithm and show that algorithms similar to LV can be devised by applying different statistical models and decisions, thus providing further theoretical justification and a well-founded explanation for the unexpected high performance of the LV approach. Some of these algorithms are based on statistics of natural images and on a hypothesis testing decision; we denote these algorithms probabilistic local variation (pLV). The best pLV algorithm, ...

Keywords: Image segmentation · Image oversegmentation · Superpixels · Grouping

1 Introduction
Image segmentation is the procedure of partitioning an input image into several meaningful pieces or segments, each of which should be semantically complete (i.e., an item or structure by itself). Oversegmentation is a less demanding type of segmentation. The aim is to group several pixels in an image into a single unit called a superpixel (Ren and Malik 2003) so that it is fully contained within an object; it represents a fragment of a conceptually meaningful structure.
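The local variation algorithm that the paper builds on has a compact core: edges are processed in order of increasing weight, and two components are merged only if the connecting edge is no heavier than either component's internal variation plus a size-dependent slack k/|C|. The sketch below is a rough illustration of that merge rule on an abstract weighted graph; the union-find bookkeeping, the parameter k and the toy graph are assumptions of this sketch, and the paper's probabilistic variants (pLV) replace the fixed slack with a statistical test.

```python
# Minimal sketch of the local-variation (Felzenszwalb-Huttenlocher style) merge
# rule on a weighted graph given as edges = [(weight, i, j), ...] with n nodes.
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.internal = [0.0] * n   # Int(C): heaviest edge absorbed into C so far

    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]
            i = self.parent[i]
        return i

    def union(self, i, j, w):
        ri, rj = self.find(i), self.find(j)
        if self.size[ri] < self.size[rj]:
            ri, rj = rj, ri
        self.parent[rj] = ri
        self.size[ri] += self.size[rj]
        self.internal[ri] = w       # valid because edges arrive in increasing order

def local_variation(n, edges, k=1.0):
    uf = UnionFind(n)
    for w, i, j in sorted(edges):
        ri, rj = uf.find(i), uf.find(j)
        if ri == rj:
            continue
        # merge only if the connecting edge is no heavier than either component's
        # internal variation plus its size-dependent slack k/|C|
        if w <= min(uf.internal[ri] + k / uf.size[ri],
                    uf.internal[rj] + k / uf.size[rj]):
            uf.union(ri, rj, w)
    return [uf.find(i) for i in range(n)]

# toy graph: two tight clusters joined by one heavy edge
edges = [(0.1, 0, 1), (0.2, 1, 2), (0.15, 3, 4), (0.1, 4, 5), (2.0, 2, 3)]
print(local_variation(6, edges, k=0.5))   # two component labels expected
```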
  • Methods to Calculate Uncertainty in the Estimated Overall Effect Size from a Random-Effects Meta-Analysis
Methods to calculate uncertainty in the estimated overall effect size from a random-effects meta-analysis

Areti Angeliki Veroniki (1,2,*); Email: [email protected]
Dan Jackson (3); Email: [email protected]
Ralf Bender (4); Email: [email protected]
Oliver Kuss (5,6); Email: [email protected]
Dean Langan (7); Email: [email protected]
Julian PT Higgins (8); Email: [email protected]
Guido Knapp (9); Email: [email protected]
Georgia Salanti (10); Email: [email protected]

1 Li Ka Shing Knowledge Institute, St. Michael's Hospital, 209 Victoria Street, East Building, Toronto, Ontario, M5B 1T8, Canada
2 Department of Primary Education, School of Education, University of Ioannina, Ioannina, Greece
3 MRC Biostatistics Unit, Institute of Public Health, Robinson Way, Cambridge CB2 0SR, U.K.
4 Department of Medical Biometry, Institute for Quality and Efficiency in Health Care (IQWiG), Im Mediapark 8, 50670 Cologne, Germany
5 Institute for Biometrics and Epidemiology, German Diabetes Center, Leibniz Institute for Diabetes Research at Heinrich Heine University, 40225 Düsseldorf, Germany
6 Institute of Medical Statistics, Heinrich-Heine-University, Medical Faculty, Düsseldorf, Germany
7 Institute of Child Health, UCL, London, WC1E 6BT, UK
8 Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, U.K.

This article has been accepted for publication and undergone full peer review but has not been through the copyediting, typesetting, pagination and proofreading process, which may lead to differences between this version and the Version of Record. Please cite this article as doi: 10.1002/jrsm.1319. This article is protected by copyright. All rights reserved.
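One concrete instance of the family of methods such papers compare is the classical DerSimonian-Laird approach: estimate the between-study variance tau^2 from Cochran's Q, then form a Wald-type interval for the pooled effect. The sketch below is a minimal, hedged illustration with made-up effect sizes and variances; it does not reproduce the paper's own comparisons, which cover many further methods (Hartung-Knapp among them).

```python
# Hedged sketch: DerSimonian-Laird tau^2 and a Wald 95% CI for the pooled effect.
import numpy as np
from scipy.stats import norm

y = np.array([0.30, 0.10, 0.45, 0.25, 0.05])        # study effect estimates (toy data)
v = np.array([0.020, 0.015, 0.050, 0.010, 0.030])   # within-study variances (toy data)

# fixed-effect weights and Cochran's Q
w_fe = 1.0 / v
ybar_fe = np.sum(w_fe * y) / np.sum(w_fe)
q = np.sum(w_fe * (y - ybar_fe) ** 2)

# DerSimonian-Laird between-study variance
k = len(y)
c = np.sum(w_fe) - np.sum(w_fe**2) / np.sum(w_fe)
tau2 = max(0.0, (q - (k - 1)) / c)

# random-effects pooled estimate and Wald 95% interval
w_re = 1.0 / (v + tau2)
mu_hat = np.sum(w_re * y) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
z = norm.ppf(0.975)
print(f"tau^2 = {tau2:.4f}, mu = {mu_hat:.3f}, "
      f"95% CI = ({mu_hat - z * se:.3f}, {mu_hat + z * se:.3f})")
```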
  • Newsletter of the European Mathematical Society
NEWSLETTER OF THE EUROPEAN MATHEMATICAL SOCIETY
European Mathematical Society, June 2019, Issue 112, ISSN 1027-488X
Feature: Determinantal Point Processes; The Littlewood–Paley Theory. Interviews: Peter Scholze; Artur Avila. Research Center: Isaac Newton Institute for Mathematical Sciences. (Cover photo of Artur Avila courtesy of Daryan Dornelles/Divulgação IMPA.)

Journals published by the ...

Interfaces and Free Boundaries. ISSN print 1463-9963, ISSN online 1463-9971. 2019, Vol. 21, 4 issues, approx. 500 pages, 17 x 24 cm. Price of subscription: 390 € online only, 430 € print+online. Editors: José Francisco Rodrigues (Universidade de Lisboa, Portugal), Charles M. Elliott (University of Warwick, Coventry, UK), Harald Garcke (Universität Regensburg, Germany), Juan Luis Vázquez (Universidad Autónoma de Madrid, Spain). Aims and Scope: Interfaces and Free Boundaries is dedicated to the mathematical modelling, analysis and computation of interfaces and free boundary problems in all areas where such phenomena are pertinent. The journal aims to be a forum where mathematical analysis, partial differential equations, modelling, scientific computing and the various applications which involve mathematical modelling meet. Submissions should, ideally, emphasize the combination of theory and application.

Journal for Analysis and its Applications. A periodical edited by the University of Leipzig. ISSN print 0232-2064, ISSN online 1661-4354. 2019, Vol. 38, 4 issues, approx. 500 pages, 17 x 24 cm. Price of subscription: 190 € online only, 230 € print+online. Managing Editors: J. Appell (Universität Würzburg, Germany), T. Eisner (Universität Leipzig, Germany), B. Kirchheim (Universität Leipzig, Germany). Aims and Scope: The Journal for Analysis and its Applications aims at disseminating theoretical knowledge in the field of analysis and, at the same time, cultivating and extending its applications.
  • Principles of Statistical Inference
Principles of Statistical Inference

In this important book, D. R. Cox develops the key concepts of the theory of statistical inference, in particular describing and comparing the main ideas and controversies over foundational issues that have rumbled on for more than 200 years. Continuing a 60-year career of contribution to statistical thought, Professor Cox is ideally placed to give the comprehensive, balanced account of the field that is now needed. The careful comparison of frequentist and Bayesian approaches to inference allows readers to form their own opinion of the advantages and disadvantages. Two appendices give a brief historical overview and the author's more personal assessment of the merits of different ideas. The content ranges from the traditional to the contemporary. While specific applications are not treated, the book is strongly motivated by applications across the sciences and associated technologies. The underlying mathematics is kept as elementary as feasible, though some previous knowledge of statistics is assumed. This book is for every serious user or student of statistics, in particular for anyone wanting to understand the uncertainty inherent in conclusions from statistical analyses.

Principles of Statistical Inference. D. R. Cox, Nuffield College, Oxford. Cambridge University Press: Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo. The Edinburgh Building, Cambridge CB2 8RU, UK. Published in the United States of America by Cambridge University Press, New York. www.cambridge.org. Information on this title: www.cambridge.org/9780521866736. © D. R. Cox 2006. This publication is in copyright. Subject to statutory exception and to the provision of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.
  • Parameter and Interval Estimation
Probability and Statistics (part 3): Parameter and Interval Estimation (by Evan Dummit, 2020, v. 1.25)

Contents
3 Parameter and Interval Estimation   1
3.1 Parameter Estimation   1
3.1.1 Maximum Likelihood Estimates   2
3.1.2 Biased and Unbiased Estimators   5
3.1.3 Efficiency of Estimators   8
3.2 Interval Estimation   12
3.2.1 Confidence Intervals   12
3.2.2 Normal Confidence Intervals   13
3.2.3 Binomial Confidence Intervals   16

3 Parameter and Interval Estimation
In the previous chapter, we discussed random variables and developed the notion of a probability distribution, and then established some fundamental results such as the central limit theorem that give strong and useful information about the statistical properties of a sample drawn from a fixed, known distribution. Our goal in this chapter is, in some sense, to invert this analysis: starting instead with data obtained by sampling a distribution or probability model with certain unknown parameters, we would like to extract information about the most reasonable values for these parameters given the observed data. We begin by discussing pointwise parameter estimates and estimators, analyzing various properties that we would like these estimators to have such as unbiasedness and efficiency, and finally establish the optimality of several estimators that arise from the basic distributions we have encountered such as the normal and binomial distributions. We then broaden our focus to interval estimation: from finding the best estimate of a single value to finding such an estimate along with a measurement of its expected precision. We treat in scrupulous detail several important cases for constructing such confidence intervals, and close with some applications of these ideas to polling data.
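As a small companion to the interval-estimation part of these notes, the following sketch computes a t-based confidence interval for a normal mean and a Wald interval for a binomial proportion. The data, the 95% level and the Wald (rather than, say, Wilson) form of the binomial interval are choices made for this illustration only.

```python
# Hedged sketch of two standard confidence intervals from the notes above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# normal mean: xbar +/- t_{n-1, 0.975} * s / sqrt(n)
x = rng.normal(loc=10.0, scale=2.0, size=25)
n, mean, s = len(x), x.mean(), x.std(ddof=1)
t = stats.t.ppf(0.975, df=n - 1)
print("normal-mean 95% CI:", (mean - t * s / np.sqrt(n), mean + t * s / np.sqrt(n)))

# binomial proportion: phat +/- z_{0.975} * sqrt(phat * (1 - phat) / n)
successes, trials = 37, 120
p_hat = successes / trials
z = stats.norm.ppf(0.975)
half = z * np.sqrt(p_hat * (1 - p_hat) / trials)
print("binomial 95% Wald CI:", (p_hat - half, p_hat + half))
```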
  • Unpicking PLAID a Cryptographic Analysis of an ISO-Standards-Track Authentication Protocol
A preliminary version of this paper appears in the proceedings of the 1st International Conference on Research in Security Standardisation (SSR 2014), DOI: 10.1007/978-3-319-14054-4_1. This is the full version.

Unpicking PLAID: A Cryptographic Analysis of an ISO-standards-track Authentication Protocol

Jean Paul Degabriele (1), Victoria Fehr (2), Marc Fischlin (2), Tommaso Gagliardoni (2), Felix Günther (2), Giorgia Azzurra Marson (2), Arno Mittelbach (2), Kenneth G. Paterson (1)
1 Information Security Group, Royal Holloway, University of London, U.K.
2 Cryptoplexity, Technische Universität Darmstadt, Germany
{j.p.degabriele,kenny.paterson}@rhul.ac.uk, {victoria.fehr,tommaso.gagliardoni,giorgia.marson,arno.mittelbach}@cased.de, [email protected], [email protected]
October 27, 2015

Abstract. The Protocol for Lightweight Authentication of Identity (PLAID) aims at secure and private authentication between a smart card and a terminal. Originally developed by a unit of the Australian Department of Human Services for physical and logical access control, PLAID has now been standardized as an Australian standard AS-5185-2010 and is currently in the fast-track standardization process for ISO/IEC 25185-1. We present a cryptographic evaluation of PLAID. As well as reporting a number of undesirable cryptographic features of the protocol, we show that the privacy properties of PLAID are significantly weaker than claimed: using a variety of techniques we can fingerprint and then later identify cards. These techniques involve a novel application of standard statistical ...
  • Confidence Distribution
Confidence Distribution

Xie and Singh (2013): Confidence distribution, the frequentist distribution estimator of a parameter: A Review. Céline Cunen, 15/09/2014.

Outline of Article
● Introduction
● The concept of Confidence Distribution (CD): a classical definition and the history of the CD concept; a modern definition and interpretation
● Illustrative examples: basic parametric examples; significance (p-value) functions; bootstrap distributions; likelihood functions; asymptotically third-order accurate confidence distributions
● CD, Bootstrap, Fiducial and Bayesian approaches: CD-random variable, bootstrap estimator and fiducial-less interpretation; CD, fiducial distribution and belief function; CD and Bayesian inference
● Inferences using a CD: confidence interval; point estimation; hypothesis testing
● Optimality (comparison) of CDs
● Combining CDs from independent sources: combination of CDs and a unified framework for meta-analysis; incorporation of expert opinions in clinical trials
● CD-based new methodologies, examples and applications: CD-based likelihood calculations; confidence curve; CD-based simulation methods; additional examples and applications of CD developments
● Summary

CD: a sample-dependent distribution that can represent confidence intervals of all levels for a parameter of interest.
● CD: a broad concept that covers all approaches that can build confidence intervals at all levels.

Interpretation
● A distribution on the parameter space
● A distribution estimator: contains information for many types of ...
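To make the central object of the review concrete, the sketch below constructs the standard t-based confidence distribution for a normal mean, C(mu) = F_{t,n-1}(sqrt(n)(mu - xbar)/s), together with its confidence curve. The simulated data and the 95% level are illustrative choices, not taken from the review.

```python
# Hedged sketch: t-based confidence distribution and confidence curve for a normal mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.normal(loc=3.0, scale=1.5, size=20)
n, xbar, s = len(x), x.mean(), x.std(ddof=1)

def confidence_distribution(mu):
    """C(mu): cumulative confidence assigned to the region (-inf, mu]."""
    return stats.t.cdf(np.sqrt(n) * (mu - xbar) / s, df=n - 1)

def confidence_curve(mu):
    """cc(mu) = |1 - 2 C(mu)|: the level of the tightest interval excluding mu."""
    return np.abs(1 - 2 * confidence_distribution(mu))

# equal-tailed intervals of any level can be read off by inverting C
lo = xbar + s / np.sqrt(n) * stats.t.ppf(0.025, df=n - 1)
hi = xbar + s / np.sqrt(n) * stats.t.ppf(0.975, df=n - 1)
print("median of C (point estimate):", xbar)
print("95% interval from C:", (lo, hi), "confidence curve at endpoints:", confidence_curve(lo))
```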
  • Uniform Distribution
Uniform distribution
Statistics for Business, Josemari Sarasola, Gizapedia

The uniform distribution is the probability distribution that gives the same probability to all possible values. There are two types of uniform distributions: the discrete uniform distribution, on values $x_1, x_2, \ldots, x_N$, and the continuous uniform distribution, on an interval $[a, b]$. We apply uniform distributions when there is absolute uncertainty about what will occur. We also use them when we take a random sample from a population, because in such cases all the elements have the same probability of being drawn. Finally, they are also used to create random numbers, as random numbers are those that have the same probability of being drawn.

Discrete uniform distribution
Probability function: $P[X = x_i] = 1/N$, for $x_i = x_1, x_2, \ldots, x_N$.
Distribution function: $F(x_i) = P[X \le x_i] = i/N$.
Notation, mean and variance (for equally spaced values one unit apart): $\mu = (x_1 + x_N)/2$ and $\sigma^2 = (x_N - x_1 + 2)(x_N - x_1)/12$.
[The original slides also show plots of the probability and distribution functions of the discrete and continuous uniform distributions.]

Distribution of the maximum
We draw a value from a uniform distribution $n$ times. How is the maximum among those $n$ values distributed? Among $n$ values, the maximum is at most $x_i$ exactly when all of them are at most $x_i$, so $P[X_{\max} \le x_i] = (i/N)^n$.
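The distribution-of-the-maximum formula reconstructed above is easy to check by simulation; the sketch below does exactly that for arbitrary, assumed values of N, n and the evaluation point.

```python
# Hedged simulation check of P[X_max <= x_i] = (i/N)^n for a discrete uniform on {1,...,N}.
import numpy as np

rng = np.random.default_rng(3)
N, n, reps = 10, 5, 200_000
draws = rng.integers(1, N + 1, size=(reps, n))   # taking x_i = i for simplicity
x_max = draws.max(axis=1)

i = 7
print("simulated  P[X_max <= 7]:", np.mean(x_max <= i))
print("theoretical (i/N)^n     :", (i / N) ** n)
```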
  • Tilfeldig Gang, Nr. 1, Mai 2020
ISSN 0803-8953. Tilfeldig Gang, No. 1, Volume 37, May 2020. Published by Norsk Statistisk Forening (the Norwegian Statistical Society).

Contents
Editorial: From the NFR chair (p. 1); From the editors (p. 2)
Brain teasers: Puzzle no. 53 (p. 3); Puzzle no. 52, solution (p. 4)
Meetings and conferences: Launch of NSF Stavanger (p. 7); Members' meeting, NSF Trondheim (p. 8)
Articles: Monitoring the Level and the Slope of the Corona ... (p. 9); In search of the truth behind quantum mechanics (p. 22); Looking back: the first Nordic conference in mathematical statistics, Aarhus 1965 (p. 28)
Announcements: Membership fee (p. 31)
Notices: News from NTNU (p. 32); News from UiB (p. 32); News from the Department of Mathematics at the University of Oslo (p. 32)

Editorial: From the NFR chair

Hello to all NFR members. I hope all is well with you and your closest family and friends in these corona times; all is well here. We are all living unusual lives at the moment. Human beings are, after all, social creatures, and social media covers only a sliver of that. For quite some time I have had children studying abroad, and when we gather at the cabin in the summer we have a notion of 'beyond Skype', meaning a good bear hug. Good hugs are needed, and that time will come back. We all feel the absence of family members, colleagues and friends, and for my own part the loss of a trip to Crete in May. As an old 'low-tech' professor I have been forced into digitally based teaching, and I am not convinced the students are enthusiastic. Personally I look forward to normal teaching conditions. Statistics in the time of corona is, in fact, alive and well. 'Uncertainty' is probably the most used word in the public debate over the past month. Statistical models in epidemiology are on everyone's lips, and the debate over sampling and testing procedures runs high.
  • The P-Value Function and Statistical Inference
The American Statistician, ISSN 0003-1305 (Print), 1537-2731 (Online). Journal homepage: https://www.tandfonline.com/loi/utas20

The p-value Function and Statistical Inference
D. A. S. Fraser, Department of Statistical Sciences, University of Toronto, Toronto, Canada

To cite this article: D. A. S. Fraser (2019) The p-value Function and Statistical Inference, The American Statistician, 73:sup1 (Statistical Inference in the 21st Century), 135–147, DOI: 10.1080/00031305.2018.1556735. © 2019 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group. Published online: 20 Mar 2019.

Article history: Received March 2018; revised November 2018. Keywords: Accept–Reject; Ancillarity; Box–Cox; Conditioning; ...

Abstract: This article has two objectives. The first and narrower is to formalize the p-value function, which records all possible p-values, each corresponding to a value for whatever the scalar parameter of interest is for the problem at hand, and to show how this p-value function directly provides full inference information for any corresponding user or scientist. The p-value function provides familiar inference objects: significance levels, confidence intervals, critical values for fixed-level tests, and the power function at all values of the parameter of interest.
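A minimal sketch of the object the article formalizes: for a normal mean with the t pivot, the p-value function can be tabulated over a grid of hypothesized values and inverted to read off confidence limits. The data, the particular one-sided convention and the grid-inversion shortcut are choices of this illustration, not details taken from the article.

```python
# Hedged sketch of a p-value function for a normal mean and its inversion into a 95% interval.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
x = rng.normal(loc=1.2, scale=1.0, size=15)
n, xbar, s = len(x), x.mean(), x.std(ddof=1)

def p_value_function(mu):
    """One-sided p-value for each hypothesized mu, using the t pivot."""
    return 1 - stats.t.cdf(np.sqrt(n) * (xbar - mu) / s, df=n - 1)

# p(mu) increases in mu here, so confidence limits sit where it crosses 0.025 and 0.975
grid = np.linspace(xbar - 5, xbar + 5, 20001)
p = p_value_function(grid)
lower = grid[np.searchsorted(p, 0.025)]
upper = grid[np.searchsorted(p, 0.975)]
print("95% interval read off the p-value function:", (lower, upper))
```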