Confidence, Likelihood, Probability
Total pages: 16. File type: PDF, size: 1020 KB.
Recommended publications
-
THE COPULA INFORMATION CRITERIA
Dept. of Math., University of Oslo. Statistical Research Report No. 7, ISSN 0806-3842, June 2008. THE COPULA INFORMATION CRITERIA. Steffen Grønneberg and Nils Lid Hjort.

Abstract. When estimating parametric copula models by the semiparametric pseudo maximum likelihood procedure (MPLE), many practitioners have used the Akaike Information Criterion (AIC) for model selection, in spite of the fact that the AIC formula has no theoretical basis in this setting. We adapt the arguments leading to the original AIC formula in the fully parametric case to the MPLE. This gives a formula significantly different from the AIC, which we name the Copula Information Criterion (CIC). However, we also show that such a model-selection procedure cannot exist for a large class of commonly used copula models. We note that this research report is a revision of a research report dated June 2008. The current version incorporates corrections of the proof of Theorem 1. The conclusions of the previous manuscript are still valid, however.

1. Introduction and summary. Suppose we are given independent, identically distributed d-dimensional observations X_1, X_2, …, X_n with density f°(x) and distribution function

F°(x) = P(X_{i,1} ≤ x_1, X_{i,2} ≤ x_2, …, X_{i,d} ≤ x_d) = C°(F_⊥(x)).

Here, C° is the copula of F° and F_⊥ is the vector of marginal distributions of F°, that is,

F_⊥(x) := (F°_1(x_1), …, F°_d(x_d)),   F°_j(x_j) = P(X_{i,j} ≤ x_j).

Given a parametric copula model expressed through a set of densities c(u; θ) for θ ∈ Θ ⊆ R^p and u ∈ [0, 1]^d, the maximum pseudo-likelihood estimator θ̂_n, also
-
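The two-stage pseudo-likelihood idea described in the abstract is easy to sketch: rank-transform each margin to pseudo-uniforms, then maximize the copula log-density in θ over those pseudo-observations. The sketch below uses the Clayton family and a crude grid search purely for illustration; the family, the grid, and the simulation settings are my choices, not the report's.

```python
import math
import random

def pseudo_observations(xs):
    """Rank-transform a sample to (0, 1): u_i = rank_i / (n + 1)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    u = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        u[i] = rank / (len(xs) + 1)
    return u

def clayton_log_density(u, v, theta):
    """log c(u, v; theta) for the Clayton copula, theta > 0."""
    s = u ** -theta + v ** -theta - 1.0
    return (math.log(1.0 + theta)
            - (theta + 1.0) * (math.log(u) + math.log(v))
            - (2.0 + 1.0 / theta) * math.log(s))

def mple_clayton(xs, ys):
    """Maximum pseudo-likelihood estimate of theta by grid search."""
    u, v = pseudo_observations(xs), pseudo_observations(ys)
    grid = [0.05 * k for k in range(2, 201)]   # theta in 0.10 .. 10.00
    return max(grid, key=lambda t: sum(
        clayton_log_density(ui, vi, t) for ui, vi in zip(u, v)))

# Simulate (X, Y) from a Clayton copula via the conditional-inverse method,
# then recover theta from the ranks alone.
random.seed(1)
theta_true, n = 2.0, 500
xs, ys = [], []
for _ in range(n):
    u, w = random.random(), random.random()
    v = ((w ** (-theta_true / (1.0 + theta_true)) - 1.0)
         * u ** -theta_true + 1.0) ** (-1.0 / theta_true)
    xs.append(u)
    ys.append(v)
theta_hat = mple_clayton(xs, ys)
```

The estimate depends on the data only through the ranks, which is exactly why the parametric AIC argument does not carry over unchanged and a corrected criterion such as the CIC is needed.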
Mathematical Statistics (Math 30) – Course Project for Spring 2011
Goals:
- To explore methods from class in a real-life (historical) estimation setting
- To learn some basic statistical computational skills
- To explore a topic of interest via a short report on a selected paper

The course project has been divided into 2 parts. Each part will account for 5% of your grade, as the project accounts for 10% in total.

Part 1: German Tank Problem. Due date: April 8th.

Our setting: The Purple (Amherst) and Gold (Williams) armies are at war. You are a highly ranked intelligence officer in the Purple army, and you are given the job of estimating the number of tanks in the Gold army's tank fleet. Luckily, the Gold army has decided to put sequential serial numbers on their tanks, so you can consider their tanks labeled 1, 2, 3, …, N, and all you have to do is estimate N. Suppose you have access to a random sample of k serial numbers obtained by spies, found on abandoned/destroyed equipment, etc. Using your k observations, you need to estimate N. Note that we won't deal with issues of sampling without replacement, so you can consider each observation an independent observation from a discrete uniform distribution on 1 to N. The derivations below should lead you to consider several possible estimators of N, from which you will pick one to propose as your estimate of N.

Historical setting: This was a real problem encountered by statisticians/intelligence in WWII. The Allies wanted to know how many tanks the Germans were producing per month (the procedure was used for things other than tanks too).
-
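As a starting point for the computational part, the usual candidate estimators can be coded directly. The three candidates below (sample maximum, method of moments, and a bias-reduced maximum) are standard suggestions for this problem, not necessarily the ones the course derives; deciding which to propose is the point of the exercise.

```python
import random

def tank_estimates(sample):
    """Candidate estimators of N from observed serial numbers."""
    k, m = len(sample), max(sample)
    return {
        "mle": m,                                  # largest serial seen; never overestimates
        "moment": 2.0 * sum(sample) / k - 1.0,     # solves E[X] = (N + 1) / 2 for N
        "corrected_max": (k + 1.0) / k * m - 1.0,  # pushes the maximum up to reduce its bias
    }

# Spies recovered k serial numbers from a fleet of (unknown to us) N tanks.
random.seed(0)
N, k = 350, 10
sample = [random.randint(1, N) for _ in range(k)]
estimates = tank_estimates(sample)
```

Note that the maximum alone always undershoots on average, while 2·mean − 1 is unbiased but can fall below the largest observed serial, which is one reason the corrected-maximum family is usually preferred.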
Local Variation As a Statistical Hypothesis Test
Int J Comput Vis (2016) 117:131–141. DOI 10.1007/s11263-015-0855-4. Local Variation as a Statistical Hypothesis Test. Michael Baltaxe · Peter Meer · Michael Lindenbaum. Received: 7 June 2014 / Accepted: 1 September 2015 / Published online: 16 September 2015. © Springer Science+Business Media New York 2015.

Abstract. The goal of image oversegmentation is to divide an image into several pieces, each of which should ideally be part of an object. One of the simplest and yet most effective oversegmentation algorithms is known as local variation (LV) (Felzenszwalb and Huttenlocher, "Efficient graph-based image segmentation", IJCV 59(2):167–181, 2004). In this work, we study this algorithm and show that algorithms similar to LV can be devised by applying different statistical models and decisions, thus providing further theoretical justification and a well-founded explanation for the unexpected high performance of the LV approach. Some of these algorithms are based on statistics of natural images and on a hypothesis testing decision; we denote these algorithms probabilistic local variation (pLV). The best pLV algorithm,

Keywords: Image segmentation · Image oversegmentation · Superpixels · Grouping

1 Introduction. Image segmentation is the procedure of partitioning an input image into several meaningful pieces or segments, each of which should be semantically complete (i.e., an item or structure by itself). Oversegmentation is a less demanding type of segmentation. The aim is to group several pixels in an image into a single unit called a superpixel (Ren and Malik 2003) so that it is fully contained within an object; it represents a fragment of a conceptually meaningful structure.
-
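The LV merge rule itself is compact enough to sketch: scan edges in order of increasing weight and merge two components when the connecting edge is no heavier than either component's internal variation Int(C) plus k/|C|, as in Felzenszwalb and Huttenlocher's description. The toy 1-D "image" and the value of k below are invented for illustration.

```python
class Components:
    """Union-find that tracks each component's size and internal variation."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.internal = [0.0] * n   # largest weight merged inside the component

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

def local_variation(n_nodes, edges, k):
    """Merge components in order of increasing edge weight when the edge is
    no heavier than min over the two components of Int(C) + k / |C|."""
    comps = Components(n_nodes)
    for w, i, j in sorted(edges):
        a, b = comps.find(i), comps.find(j)
        if a == b:
            continue
        if w <= min(comps.internal[a] + k / comps.size[a],
                    comps.internal[b] + k / comps.size[b]):
            comps.parent[b] = a
            comps.size[a] += comps.size[b]
            comps.internal[a] = max(comps.internal[a], comps.internal[b], w)
    return [comps.find(i) for i in range(n_nodes)]

# A 1-D "image": two flat regions separated by a sharp intensity jump.
pixels = [10, 11, 10, 50, 51, 50]
edges = [(abs(pixels[i] - pixels[i + 1]), i, i + 1)
         for i in range(len(pixels) - 1)]
labels = local_variation(len(pixels), edges, k=5.0)
# The jump edge (weight 40) exceeds both thresholds, so two segments remain.
```

The statistical reinterpretation studied in the paper treats this threshold comparison as a decision rule, which is what the pLV variants replace with an explicit hypothesis test.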
Methods to Calculate Uncertainty in the Estimated Overall Effect Size from a Random-Effects Meta-Analysis
Methods to calculate uncertainty in the estimated overall effect size from a random-effects meta-analysis

Areti Angeliki Veroniki (1,2; corresponding author, [email protected]); Dan Jackson (3; [email protected]); Ralf Bender (4; [email protected]); Oliver Kuss (5,6; [email protected]); Dean Langan (7; [email protected]); Julian PT Higgins (8; [email protected]); Guido Knapp (9; [email protected]); Georgia Salanti (10; [email protected])

Affiliations:
1. Li Ka Shing Knowledge Institute, St. Michael's Hospital, 209 Victoria Street, East Building, Toronto, Ontario, M5B 1T8, Canada
2. Department of Primary Education, School of Education, University of Ioannina, Ioannina, Greece
3. MRC Biostatistics Unit, Institute of Public Health, Robinson Way, Cambridge CB2 0SR, UK
4. Department of Medical Biometry, Institute for Quality and Efficiency in Health Care (IQWiG), Im Mediapark 8, 50670 Cologne, Germany
5. Institute for Biometrics and Epidemiology, German Diabetes Center, Leibniz Institute for Diabetes Research at Heinrich Heine University, 40225 Düsseldorf, Germany
6. Institute of Medical Statistics, Heinrich-Heine-University, Medical Faculty, Düsseldorf, Germany
7. Institute of Child Health, UCL, London, WC1E 6BT, UK
8. Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK

This article has been accepted for publication and has undergone full peer review, but has not been through the copyediting, typesetting, pagination and proofreading process, which may lead to differences between this version and the Version of Record. Please cite this article as doi: 10.1002/jrsm.1319.
-
Newsletter of the European Mathematical Society
NEWSLETTER OF THE EUROPEAN MATHEMATICAL SOCIETY. June 2019, Issue 112. ISSN 1027-488X.

Features: Determinantal Point Processes; The Littlewood–Paley Theory. Interviews: Peter Scholze; Artur Avila. Research Centre: Isaac Newton Institute for Mathematical Sciences. Cover: Artur Avila (photo courtesy of Daryan Dornelles/Divulgação IMPA).

Journals published by the

Interfaces and Free Boundaries. ISSN print 1463-9963, ISSN online 1463-9971. 2019, Vol. 21, 4 issues, approx. 500 pages, 17 x 24 cm. Price of subscription: 390 € online only, 430 € print+online. Editors: José Francisco Rodrigues (Universidade de Lisboa, Portugal), Charles M. Elliott (University of Warwick, Coventry, UK), Harald Garcke (Universität Regensburg, Germany), Juan Luis Vazquez (Universidad Autónoma de Madrid, Spain). Aims and scope: Interfaces and Free Boundaries is dedicated to the mathematical modelling, analysis and computation of interfaces and free boundary problems in all areas where such phenomena are pertinent. The journal aims to be a forum where mathematical analysis, partial differential equations, modelling, scientific computing and the various applications which involve mathematical modelling meet. Submissions should, ideally, emphasize the combination of theory and application.

Journal for Analysis and its Applications. A periodical edited by the University of Leipzig. ISSN print 0232-2064, ISSN online 1661-4354. 2019, Vol. 38, 4 issues, approx. 500 pages, 17 x 24 cm. Price of subscription: 190 € online only, 230 € print+online. Managing Editors: J. Appell (Universität Würzburg, Germany), T. Eisner (Universität Leipzig, Germany), B. Kirchheim (Universität Leipzig, Germany). Aims and scope: The Journal for Analysis and its Applications aims at disseminating theoretical knowledge in the field of analysis and, at the same time, cultivating and extending its applications.
-
Principles of Statistical Inference
In this important book, D. R. Cox develops the key concepts of the theory of statistical inference, in particular describing and comparing the main ideas and controversies over foundational issues that have rumbled on for more than 200 years. Continuing a 60-year career of contribution to statistical thought, Professor Cox is ideally placed to give the comprehensive, balanced account of the field that is now needed. The careful comparison of frequentist and Bayesian approaches to inference allows readers to form their own opinion of the advantages and disadvantages. Two appendices give a brief historical overview and the author's more personal assessment of the merits of different ideas. The content ranges from the traditional to the contemporary. While specific applications are not treated, the book is strongly motivated by applications across the sciences and associated technologies. The underlying mathematics is kept as elementary as feasible, though some previous knowledge of statistics is assumed. This book is for every serious user or student of statistics – in particular, for anyone wanting to understand the uncertainty inherent in conclusions from statistical analyses.

Principles of Statistical Inference. D. R. Cox, Nuffield College, Oxford. Cambridge University Press. Information on this title: www.cambridge.org/9780521866736. © D. R. Cox 2006.
-
Parameter and Interval Estimation
Probability and Statistics (part 3): Parameter and Interval Estimation (by Evan Dummit, 2020, v. 1.25)

Contents
3 Parameter and Interval Estimation
  3.1 Parameter Estimation
    3.1.1 Maximum Likelihood Estimates
    3.1.2 Biased and Unbiased Estimators
    3.1.3 Efficiency of Estimators
  3.2 Interval Estimation
    3.2.1 Confidence Intervals
    3.2.2 Normal Confidence Intervals
    3.2.3 Binomial Confidence Intervals

In the previous chapter, we discussed random variables and developed the notion of a probability distribution, and then established some fundamental results such as the central limit theorem that give strong and useful information about the statistical properties of a sample drawn from a fixed, known distribution. Our goal in this chapter is, in some sense, to invert this analysis: starting instead with data obtained by sampling a distribution or probability model with certain unknown parameters, we would like to extract information about the most reasonable values for these parameters given the observed data. We begin by discussing pointwise parameter estimates and estimators, analyzing various properties that we would like these estimators to have, such as unbiasedness and efficiency, and finally establish the optimality of several estimators that arise from the basic distributions we have encountered, such as the normal and binomial distributions. We then broaden our focus to interval estimation: from finding the best estimate of a single value to finding such an estimate along with a measurement of its expected precision. We treat in scrupulous detail several important cases for constructing such confidence intervals, and close with some applications of these ideas to polling data.
-
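The unbiasedness discussion can be checked empirically: for a normal sample, the maximum likelihood estimate of σ² divides by n and undershoots by the factor (n − 1)/n, while the n − 1 version is unbiased. A small simulation (the sample size, σ, and repetition count are illustrative choices):

```python
import random

def mle_variance(xs):
    """MLE of sigma^2 for a normal sample: divides by n, hence biased."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def unbiased_variance(xs):
    """Sample variance with the n - 1 correction."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

random.seed(42)
n, reps = 5, 20000
mle_avg = unb_avg = 0.0
for _ in range(reps):
    xs = [random.gauss(0.0, 2.0) for _ in range(n)]   # true sigma^2 = 4
    mle_avg += mle_variance(xs) / reps
    unb_avg += unbiased_variance(xs) / reps
# E[MLE] = (n - 1) / n * sigma^2 = 3.2 here; the corrected version centers on 4.0.
```

The gap is largest at small n, which is exactly the regime where the distinction between the two estimators matters in practice.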
Unpicking PLAID: A Cryptographic Analysis of an ISO-Standards-Track Authentication Protocol
A preliminary version of this paper appears in the proceedings of the 1st International Conference on Research in Security Standardisation (SSR 2014), DOI: 10.1007/978-3-319-14054-4_1. This is the full version.

Unpicking PLAID: A Cryptographic Analysis of an ISO-standards-track Authentication Protocol

Jean Paul Degabriele (1), Victoria Fehr (2), Marc Fischlin (2), Tommaso Gagliardoni (2), Felix Günther (2), Giorgia Azzurra Marson (2), Arno Mittelbach (2), Kenneth G. Paterson (1)
1. Information Security Group, Royal Holloway, University of London, UK
2. Cryptoplexity, Technische Universität Darmstadt, Germany
{j.p.degabriele,kenny.paterson}@rhul.ac.uk, {victoria.fehr,tommaso.gagliardoni,giorgia.marson,arno.mittelbach}@cased.de, [email protected], [email protected]

October 27, 2015

Abstract. The Protocol for Lightweight Authentication of Identity (PLAID) aims at secure and private authentication between a smart card and a terminal. Originally developed by a unit of the Australian Department of Human Services for physical and logical access control, PLAID has now been standardized as Australian standard AS-5185-2010 and is currently in the fast-track standardization process for ISO/IEC 25185-1. We present a cryptographic evaluation of PLAID. As well as reporting a number of undesirable cryptographic features of the protocol, we show that the privacy properties of PLAID are significantly weaker than claimed: using a variety of techniques we can fingerprint and then later identify cards. These techniques involve a novel application of standard statistical
-
Confidence Distribution
Confidence Distribution. Xie and Singh (2013): "Confidence distribution, the frequentist distribution estimator of a parameter: a review". Céline Cunen, 15/09/2014.

Outline of article:
- Introduction
  - The concept of confidence distribution (CD)
  - A classical definition and the history of the CD concept
  - A modern definition and interpretation
- Illustrative examples
  - Basic parametric examples
  - Significance (p-value) functions
  - Bootstrap distributions
  - Likelihood functions
  - Asymptotically third-order accurate confidence distributions
- CD, bootstrap, fiducial and Bayesian approaches
  - CD-random variable, bootstrap estimator and fiducial-less interpretation
  - CD, fiducial distribution and belief function
  - CD and Bayesian inference
- Inferences using a CD
  - Confidence intervals
  - Point estimation
  - Hypothesis testing
  - Optimality (comparison) of CDs
- Combining CDs from independent sources
  - Combination of CDs and a unified framework for meta-analysis
  - Incorporation of expert opinions in clinical trials
- CD-based new methodologies, examples and applications
  - CD-based likelihood calculations
  - Confidence curve
  - CD-based simulation methods
  - Additional examples and applications of CD developments
- Summary

A CD is a sample-dependent distribution that can represent confidence intervals of all levels for a parameter of interest. CD is a broad concept: it covers all approaches that can build confidence intervals at all levels.

Interpretation:
- A distribution on the parameter space
- A distribution estimator: contains information for many types of
-
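For the most basic parametric example, a normal mean with known σ, the CD is H_n(θ) = Φ(√n (θ − x̄)/σ), and confidence intervals of every level are read off its quantiles. A stdlib-only sketch (the numerical values are illustrative, not from the review):

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def norm_quantile(p):
    """Inverse standard normal CDF by bisection (stdlib only)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def confidence_distribution(xbar, sigma, n):
    """CD for a normal mean, known sigma: H_n(theta) = Phi(sqrt(n)(theta - xbar)/sigma)."""
    return lambda theta: norm_cdf(math.sqrt(n) * (theta - xbar) / sigma)

def cd_interval(xbar, sigma, n, level):
    """Equal-tailed interval of any level, read off the CD's quantiles."""
    half = norm_quantile(1.0 - (1.0 - level) / 2.0) * sigma / math.sqrt(n)
    return xbar - half, xbar + half

H = confidence_distribution(xbar=5.0, sigma=2.0, n=100)
lo95, hi95 = cd_interval(5.0, 2.0, 100, 0.95)
lo50, hi50 = cd_interval(5.0, 2.0, 100, 0.50)   # the same object gives every level
```

This makes the "all levels at once" point concrete: the single function H encodes the nested family of intervals, with the point estimate sitting at its median.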
Uniform Distribution
Uniform distribution. Statistics for Business, Josemari Sarasola, Gizapedia.

The uniform distribution is the probability distribution that gives the same probability to all possible values. There are two types of uniform distributions: the discrete uniform distribution and the continuous uniform distribution. [Figures: the discrete uniform puts equal mass on the points x_1, x_2, x_3, …, x_N; the continuous uniform spreads mass evenly over an interval from a to b.]

We apply uniform distributions when there is absolute uncertainty about what will occur. We also use them when we take a random sample from a population, because in such cases all the elements have the same probability of being drawn. Finally, they are also used to create random numbers, as random numbers are those that have the same probability of being drawn.

Discrete uniform distribution. Probability function:

P[X = x] = 1/N,   x = x_1, x_2, …, x_N.

Distribution function:

F(x) = P[X ≤ x_i] = i/N,   x_i = x_1, x_2, …, x_N.

Notation, mean and variance: for X ~ U(x_1, x_2, …) on consecutive values,

μ = (x_1 + x_N)/2,   σ² = (x_N − x_1 + 2)(x_N − x_1)/12.

Distribution of the maximum. We draw a value from a uniform distribution n times. How is the maximum among those n values distributed? Among n values, the maximum will be no greater than x_i when all of them are no greater than x_i:

P[X_max ≤ x_i] = (i/N)^n.
-
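The distribution of the maximum is easy to verify by simulation; the sketch below checks P[X_max ≤ x_i] = (i/N)^n against Monte Carlo draws (the values of N, n, and i are arbitrary illustrative choices):

```python
import random

def max_cdf(i, N, n):
    """P[X_max <= x_i] when drawing n times, uniformly, from x_1 .. x_N."""
    return (i / N) ** n

random.seed(7)
N, n, reps, i = 10, 3, 50000, 7
hits = sum(
    max(random.randint(1, N) for _ in range(n)) <= i
    for _ in range(reps)
)
empirical = hits / reps
theoretical = max_cdf(i, N, n)   # (7/10)^3 = 0.343
```

The same product argument (all n independent draws must land at or below x_i) is what makes the formula hold for any i, not just the one tested here.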
Tilfeldig Gang, No. 1, May 2020
ISSN 0803-8953. Tilfeldig Gang, No. 1, Volume 37, May 2020. Published by Norsk Statistisk Forening (the Norwegian Statistical Society).

Contents:
- Editorial: From the NFR chair; From the editors
- Brain teasers: Puzzle no. 53; Puzzle no. 52 (solution)
- Meetings and conferences: Launch of NSF Stavanger; Members' meeting, NSF Trondheim
- Articles: Monitoring the Level and the Slope of the Corona...; In search of the truth behind quantum mechanics; Looking back: the first Nordic conference in mathematical statistics, Aarhus 1965
- Announcements: Membership fee
- Notices: News from NTNU; News from UiB; News from the Department of Mathematics at the University of Oslo

Editorial: From the NFR chair

Hello, all NFR members. I hope all is well with you and your closest family and friends in these corona times; all is well here. We are all living unusual lives at the moment. Human beings are, after all, social creatures, and social media covers only a sliver of that. For some time now I have had children studying abroad, and when we gather at the cabin in the summer we have a notion of "beyond Skype", meaning a good bear hug. There is a need for good hugs, and that time will return. We all feel the absence of family members, colleagues and friends, and, for my own part, the loss of a trip to Crete in May. As an old "low-tech" professor, I have been forced into digitally based teaching, and I am not convinced the students are thrilled. Personally, I look forward to normal teaching conditions again.

Statistics in the time of corona is, in fact, alive and well. "Uncertainty" is probably the most used word in the public debate over the past month. Statistical models in epidemiology are on everyone's lips, and the discussion about sampling and testing procedures runs high.
-
The P-Value Function and Statistical Inference
The American Statistician, ISSN 0003-1305 (print), 1537-2731 (online). Journal homepage: https://www.tandfonline.com/loi/utas20

The p-value Function and Statistical Inference. D. A. S. Fraser, Department of Statistical Sciences, University of Toronto, Toronto, Canada. The American Statistician (2019), 73:sup1 (Statistical Inference in the 21st Century), 135–147. DOI: 10.1080/00031305.2018.1556735. © 2019 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group. Published online: 20 March 2019.

ABSTRACT. This article has two objectives. The first and narrower is to formalize the p-value function, which records all possible p-values, each corresponding to a value for whatever the scalar parameter of interest is for the problem at hand, and to show how this p-value function directly provides full inference information for any corresponding user or scientist. The p-value function provides familiar inference objects: significance levels, confidence intervals, critical values for fixed-level tests, and the power function at all values of the parameter of interest.

ARTICLE HISTORY: Received March 2018; revised November 2018. KEYWORDS: Accept–Reject; Ancillarity; Box–Cox; Conditioning;
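For the simplest normal-mean case, the p-value function is p(θ) = Φ(√n (θ − x̄)/σ), and the familiar objects the abstract lists, a significance level for a point null, a fixed-level test decision, and a confidence interval, all fall out of the one function. A sketch with illustrative numbers (not taken from the article):

```python
import math

def p_value_function(xbar, sigma, n):
    """p(theta) = Phi(sqrt(n)(theta - xbar)/sigma): the one-sided p-value
    traced over every candidate value theta of a normal mean (known sigma)."""
    def p(theta):
        z = math.sqrt(n) * (theta - xbar) / sigma
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return p

def invert(p, target, lo=-10.0, hi=10.0):
    """Solve p(theta) = target by bisection (p is increasing in theta)."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if p(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

p = p_value_function(xbar=1.3, sigma=1.0, n=25)
p_null = p(1.0)                    # significance level for testing theta = 1.0
ci_95 = (invert(p, 0.025),         # 95% interval: {theta : 0.025 <= p(theta) <= 0.975}
         invert(p, 0.975))
reject_at_5pct = not (0.025 <= p_null <= 0.975)   # fixed-level two-sided decision
```

Reading tests and intervals off the same curve is the article's central point: a single reported p-value is just one slice of this function.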