Could Fisher, Jeffreys and Neyman Have Agreed on Testing? James O. Berger
Total Pages: 16 | File Type: PDF, Size: 1020 KB
Recommended publications
- Sequential Analysis: Tests and Confidence Intervals
D. Siegmund, Sequential Analysis: Tests and Confidence Intervals. Springer Series in Statistics, 1985, XI, 274 p. The modern theory of sequential analysis came into existence simultaneously in the United States and Great Britain in response to demands for more efficient sampling inspection procedures during World War II. The developments were admirably summarized by their principal architect, A. Wald, in his book Sequential Analysis (1947). In spite of the extraordinary accomplishments of this period, there remained some dissatisfaction with the sequential probability ratio test and Wald's analysis of it. (i) The open-ended continuation region, with the concomitant possibility of taking an arbitrarily large number of observations, seems intolerable in practice. (ii) Wald's elegant approximations based on "neglecting the excess" of the log likelihood ratio over the stopping boundaries are not especially accurate and do not allow one to study the effect of taking observations in groups rather than one at a time. (iii) The beautiful optimality property of the sequential probability ratio test applies only to the artificial problem of testing a simple hypothesis against a simple alternative. In response to these issues, and to new motivation from the direction of controlled clinical trials, numerous modifications of the sequential probability ratio test were proposed and their properties studied, often by simulation or lengthy numerical computation. (A notable exception is Anderson, 1960; see III.7.) In the past decade it has become possible to give a more complete theoretical analysis of many of the proposals and hence to understand them better.
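For readers who want to see Wald's procedure in action, here is a minimal sketch of the sequential probability ratio test for two simple hypotheses about a Bernoulli proportion; the error rates, hypothesized values, and simulated data stream are illustrative assumptions, not taken from Siegmund's book.

```python
import math
import random

def sprt_bernoulli(p0, p1, alpha, beta, stream):
    """Wald's SPRT for H0: p = p0 vs H1: p = p1 on a stream of 0/1 data.

    Sampling stops when the cumulative log likelihood ratio crosses
    log B = log(beta / (1 - alpha))  -> accept H0, or
    log A = log((1 - beta) / alpha)  -> accept H1.
    """
    log_a = math.log((1 - beta) / alpha)
    log_b = math.log(beta / (1 - alpha))
    llr, n = 0.0, 0
    for x in stream:
        n += 1
        # log likelihood ratio contribution of one observation
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= log_a:
            return "accept H1", n
        if llr <= log_b:
            return "accept H0", n
    return "no decision", n  # the open-ended continuation region of issue (i)

random.seed(0)
data = (1 if random.random() < 0.7 else 0 for _ in range(10_000))  # true p = 0.7
print(sprt_bernoulli(p0=0.5, p1=0.7, alpha=0.05, beta=0.10, stream=data))
```

The stopping thresholds are Wald's classical approximations; his analysis "neglects the excess" of the log likelihood ratio over these boundaries, which is exactly the inaccuracy noted in point (ii) above.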
- Trial Sequential Analysis: Novel Approach for Meta-Analysis
Hyun Kang, Department of Anesthesiology and Pain Medicine, Chung-Ang University College of Medicine, Seoul, Korea. Anesth Pain Med 2021;16:138-150, https://doi.org/10.17085/apm.21038 (Review; pISSN 1975-5171, eISSN 2383-7977; received April 19, 2021, accepted April 25, 2021). Systematic reviews and meta-analyses rank the highest in the evidence hierarchy. However, they still carry a risk of spurious results because they may include too few studies and participants. The use of trial sequential analysis (TSA) has increased recently; it provides more information on the precision and uncertainty of meta-analysis results, which makes it a powerful tool for clinicians assessing the conclusiveness of a meta-analysis. TSA provides monitoring boundaries or futility boundaries, helping clinicians prevent unnecessary further trials. The use and interpretation of TSA should be based on an understanding of the principles and assumptions behind it, which may provide more accurate, precise, and unbiased information to clinicians, patients, and policymakers. In this article, the history, background, principles, and assumptions behind TSA are described, leading to its better understanding, implementation, and interpretation. Keywords: Interim analysis; Meta-analysis; Statistics; Trial sequential analysis. Introduction: …termined number of patients was reached [2]. A systematic review is a research method that attempts to collect all empirical evidence according to predefined inclusion and exclusion criteria to answer specific and focused questions [3]. Sequential analysis is a statistical method in which the final number of patients analyzed is not predetermined, but sampling or enrollment of patients is decided by a prede…
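To make the idea of monitoring boundaries concrete, the sketch below computes O'Brien-Fleming-type significance thresholds via the Lan-DeMets alpha-spending approximation; the information fractions and overall alpha are assumed for illustration, and this is not the TSA software discussed in the article.

```python
from scipy.stats import norm

def obrien_fleming_spending(t, alpha=0.05):
    """Lan-DeMets approximation to the O'Brien-Fleming alpha-spending
    function: alpha(t) = 2 - 2*Phi(z_{alpha/2} / sqrt(t)),
    where t is the fraction of the required information accrued."""
    z = norm.ppf(1 - alpha / 2)
    return 2 - 2 * norm.cdf(z / t**0.5)

# cumulative alpha spent at assumed interim-analysis information fractions
for t in (0.25, 0.50, 0.75, 1.00):
    spent = obrien_fleming_spending(t)
    # naive per-look z boundary; real TSA software solves for the
    # boundaries jointly, accounting for the correlation between looks
    z_bound = norm.ppf(1 - spent / 2)
    print(f"information {t:.2f}: alpha spent {spent:.5f}, z boundary {z_bound:.2f}")
```

The pattern this prints is the one described in the article: boundaries are very stringent early, when little information has accrued, and relax toward the conventional 1.96 as the meta-analysis approaches its required information size.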
- Detecting Reinforcement Patterns in the Stream of Naturalistic Observations of Social Interactions
Portland State University, PDXScholar Dissertations and Theses, July 14, 2020. Part of the Developmental Psychology Commons. Recommended citation: DeLaney 3rd, James Lamar, "Detecting Reinforcement Patterns in the Stream of Naturalistic Observations of Social Interactions" (2020). Dissertations and Theses, Paper 5553. https://doi.org/10.15760/etd.7427. A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Psychology; thesis committee: Thomas Kindermann (Chair), Jason Newsom, Ellen A. Skinner. Abstract: How do consequences affect future behaviors in real-world social interactions? The term positive reinforcer refers to those consequences that are associated with an increase in the probability of an antecedent behavior (Skinner, 1938). To explore whether reinforcement occurs under naturally occurring conditions, many studies use sequential analysis methods to detect contingency patterns (see Quera & Bakeman, 1998). This study argues that these methods do not look at behavior change following putative reinforcers and thus are not sufficient for declaring reinforcement effects arising in naturally occurring interactions, according to Skinner's (1938) operational definition of reinforcers.
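As background for the kind of method the thesis critiques, here is a minimal sketch of a lag-1 contingency computation on a coded event stream; the stream and the event codes are invented for illustration.

```python
from collections import Counter

# hypothetical coded stream: "B" = target behavior, "R" = putative
# reinforcer, "O" = other event (labels invented for illustration)
stream = list("BROBBRBOBRBBOORBBROB")

pairs = Counter(zip(stream, stream[1:]))           # lag-1 transitions
n_after_r = sum(v for (a, _), v in pairs.items() if a == "R")
p_b_given_r = pairs[("R", "B")] / n_after_r        # P(B right after R)
p_b = stream.count("B") / len(stream)              # unconditional base rate

print(f"P(B | preceded by R) = {p_b_given_r:.2f} vs base rate P(B) = {p_b:.2f}")
# a contingency is claimed when the conditional probability exceeds the
# base rate; the thesis argues this alone does not show that R *changed*
# subsequent rates of B, i.e., reinforcement in Skinner's operational sense
```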
- National Academy Elects IMS Fellows
IMS Bulletin, Volume 38, Issue 5, June 2009. The United States National Academy of Sciences has elected 72 new members and 18 foreign associates from 15 countries in recognition of their distinguished and continuing achievements in original research. Among those elected are two IMS Fellows: Adrian Raftery, Blumstein-Jordan Professor of Statistics and Sociology, Center for Statistics and the Social Sciences, University of Washington, Seattle, and Wing Hung Wong, Professor of Statistics and Professor of Health Research and Policy, Department of Statistics, Stanford University, California. The election was held April 28, during the business session of the 146th annual meeting of the Academy. Those elected bring the total number of active members to 2,150. Foreign associates are non-voting members of the Academy, with citizenship outside the United States; this year's election brings their total number to 404. The National Academy of Sciences is a private organization of scientists and engineers dedicated to the furtherance of science and its use for the general welfare. It was established in 1863 by a congressional act of incorporation, signed by Abraham Lincoln, that calls on the Academy to act as an official adviser to the federal government, upon request, in any matter of science or technology. [Photos: Adrian Raftery; Wing Hung Wong.] Also in this issue: Members' News (Jianqing Fan; SIAM Fellows); Laha Award recipients; COPSS Fisher Lecturer Noel Cressie; Members' Discoveries (Nicolai Meinshausen); Medallion Lecture (Tony Cai); meeting report (SSP); new IMS Fellows; obituaries (Keith Worsley; I.J. Good); JSM program highlights and IMS sessions at JSM; JSM tours and more things to do in DC; Accepting Rejections.
- Bayesian Versus Frequentist Statistics for Uncertainty Analysis
Disentangling Classical and Bayesian Approaches to Uncertainty Analysis. Robin Willink (email: [email protected]) and Rod White (corresponding author; Measurement Standards Laboratory, PO Box 31310, Lower Hutt 5040, New Zealand; email: [email protected]). Abstract: Since the 1980s, we have seen a gradual shift in the uncertainty analyses recommended in the metrological literature, principally Metrologia, and in the BIPM's guidance documents: the Guide to the Expression of Uncertainty in Measurement (GUM) and its two supplements. The shift has seen the BIPM's recommendations change from a purely classical or frequentist analysis to a purely Bayesian analysis. Despite this drift, most metrologists continue to use the predominantly frequentist approach of the GUM and wonder what the differences are, why there are such bitter disputes about the two approaches, and whether they should change. The primary purpose of this note is to inform metrologists of the differences between the frequentist and Bayesian approaches and the consequences of those differences. It is often claimed that a Bayesian approach is philosophically consistent and able to tackle problems beyond the reach of classical statistics. However, while the philosophical consistency of Bayesian analyses may be more aesthetically pleasing, the value to science of any statistical analysis lies in its long-term success rates, and on this point classical methods perform well and Bayesian analyses can perform poorly. Thus an important secondary purpose of this note is to highlight some of the weaknesses of the Bayesian approach. We argue that moving away from well-established, easily-taught frequentist methods that perform well, to computationally expensive and numerically inferior Bayesian analyses recommended by the GUM supplements, is ill-advised.
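To give one concrete reading of "long-term success rates," the sketch below estimates by simulation the coverage of the standard frequentist 95% interval for a normal mean with known sigma; the scenario and parameter values are assumptions chosen for simplicity, not an example from the paper.

```python
import random
import statistics

def coverage_of_z_interval(mu=10.0, sigma=2.0, n=5, reps=20_000, z=1.96):
    """Fraction of simulated experiments whose 95% z-interval for the
    mean covers the true mu (sigma treated as known)."""
    hits = 0
    for _ in range(reps):
        sample = [random.gauss(mu, sigma) for _ in range(n)]
        xbar = statistics.fmean(sample)
        half = z * sigma / n**0.5
        hits += (xbar - half) <= mu <= (xbar + half)
    return hits / reps

random.seed(1)
print(f"estimated coverage: {coverage_of_z_interval():.3f}")  # close to 0.95
```

This long-run frequency of covering the truth is exactly the success-rate criterion the authors appeal to when comparing the two schools.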
- Practical Statistics for Particle Physics, Lecture 1 (AEPS2018, Quy Nhon, Vietnam)
Roger Barlow, The University of Huddersfield, August 2018. Lecture 1: The Basics. Outline: 1. Probability: what is it? Frequentist probability; conditional probability and Bayes' theorem; Bayesian probability. 2. Probability distributions and their properties: expectation values; binomial, Poisson and Gaussian. 3. Hypothesis testing. Question: what is probability? A typical exam question: "Q1. Explain what is meant by the probability P_A of an event A. [1]" Four possible answers: P_A is a number obeying certain mathematical rules; P_A is a property of A that determines how often A happens; for N trials in which A occurs N_A times, P_A is the limit of N_A/N for large N; P_A is my belief that A will happen, measurable by seeing what odds I will accept in a bet. The mathematical answer rests on the Kolmogorov axioms: for all A ⊆ S, P_A ≥ 0; P_S = 1; and P(A ∪ B) = P_A + P_B if A ∩ B = ∅ and A, B ⊆ S. From these simple axioms a complete and complicated structure can be erected; for example, one can show that P(Ā) = 1 − P_A and hence that P_A ≤ 1 (a derivation sketch follows below). But this says nothing about what P_A actually means. Kolmogorov had frequentist probability in mind, but these axioms apply to any definition. Classical or real probability evolved during the 18th-19th centuries, developed (by Pascal, Laplace and others) to serve the gambling industry.
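A short derivation sketch of the two facts quoted above, using only the axioms as stated:

```latex
% A and its complement \bar{A} are disjoint and together exhaust S,
% so the additivity axiom applies to their union.
\begin{align*}
1 = P_S = P(A \cup \bar{A}) &= P_A + P_{\bar{A}}
  \qquad \text{since } A \cap \bar{A} = \emptyset,\\
\text{hence}\quad P_{\bar{A}} &= 1 - P_A.\\
\text{Since the first axiom gives } P_{\bar{A}} \ge 0:\quad
P_A = 1 - P_{\bar{A}} &\le 1.
\end{align*}
```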
- This History of Modern Mathematical Statistics Retraces Their Development
BOOK REVIEWS. GORROOCHURN Prakash, 2016, Classic Topics on the History of Modern Mathematical Statistics: From Laplace to More Recent Times, Hoboken, NJ, John Wiley & Sons, Inc., 754 p. This history of modern mathematical statistics retraces their development from the "Laplacean revolution," as the author so rightly calls it (though the beginnings are to be found in Bayes' 1763 essay(1)), through the mid-twentieth century and Fisher's major contribution. Up to the nineteenth century the book covers the same ground as Stigler's history of statistics(2), though with notable differences (see infra). It then discusses developments through the first half of the twentieth century: Fisher's synthesis but also the renewal of Bayesian methods, which implied a return to Laplace. Part I offers an in-depth, chronological account of Laplace's approach to probability, with all the mathematical detail and the deductions he drew from it. It begins with his first innovative articles and concludes with his philosophical synthesis showing that all fields of human knowledge are connected to the theory of probabilities. Here Gorroochurn raises a problem that Stigler does not: that of induction (pp. 102-113), a notion that gives us a better understanding of probability according to Laplace. The term induction has two meanings, the first put forward by Bacon(3) in 1620, the second by Hume(4) in 1748. Gorroochurn discusses only the second. For Bacon, induction meant discovering the principles of a system by studying its properties through observation and experimentation. For Hume, induction was mere enumeration and could not lead to certainty. Laplace followed Bacon: "The surest method which can guide us in the search for truth consists in rising by induction from phenomena to laws and from laws to forces."(5)
- The Likelihood Principle
Chapter 1: The Likelihood Principle. 01/28/99, © Marc Nerlove 1999. "What has now appeared is that the mathematical concept of probability is ... inadequate to express our mental confidence or diffidence in making ... inferences, and that the mathematical quantity which usually appears to be appropriate for measuring our order of preference among different possible populations does not in fact obey the laws of probability. To distinguish it from probability, I have used the term 'Likelihood' to designate this quantity; since both the words 'likelihood' and 'probability' are loosely used in common speech to cover both kinds of relationship." (R. A. Fisher, Statistical Methods for Research Workers, 1925.) "What we can find from a sample is the likelihood of any particular value of r [a parameter], if we define the likelihood as a quantity proportional to the probability that, from a particular population having that particular value of r, a sample having the observed value r [a statistic] should be obtained. So defined, probability and likelihood are quantities of an entirely different nature." (R. A. Fisher, "On the 'Probable Error' of a Coefficient of Correlation Deduced from a Small Sample," Metron, 1:3-32, 1921.) Introduction: The likelihood principle as stated by Edwards (1972, p. 30) is that, within the framework of a statistical model, all the information which the data provide concerning the relative merits of two hypotheses is contained in the likelihood ratio of those hypotheses on the data. ... For a continuum of hypotheses, this principle …
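To illustrate the principle in its simplest setting, the sketch below computes the likelihood ratio of two simple hypotheses about a binomial proportion; note that the binomial coefficient cancels, so the data inform the comparison only through the likelihood kernel. The data and hypothesized values are invented for illustration.

```python
from math import comb

def binom_lik(p, n, s):
    """Binomial likelihood of s successes in n trials."""
    return comb(n, s) * p**s * (1 - p)**(n - s)

n, s = 12, 9            # hypothetical data: 9 successes in 12 trials
p0, p1 = 0.5, 0.75      # two simple hypotheses about p

lr = binom_lik(p1, n, s) / binom_lik(p0, n, s)
# comb(n, s) cancels: the ratio depends on the data only through the
# kernel p**s * (1 - p)**(n - s), so any sampling scheme producing a
# likelihood proportional in p yields the same evidence
lr_kernel = (p1**s * (1 - p1)**(n - s)) / (p0**s * (1 - p0)**(n - s))
print(f"likelihood ratio = {lr:.3f} (kernel only: {lr_kernel:.3f})")
```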
- People's Intuitions About Randomness and Probability: An Empirical Study
20 PEOPLE’S INTUITIONS ABOUT RANDOMNESS AND PROBABILITY: AN EMPIRICAL STUDY4 MARIE-PAULE LECOUTRE ERIS, Université de Rouen [email protected] KATIA ROVIRA Laboratoire Psy.Co, Université de Rouen [email protected] BRUNO LECOUTRE ERIS, C.N.R.S. et Université de Rouen [email protected] JACQUES POITEVINEAU ERIS, Université de Paris 6 et Ministère de la Culture [email protected] ABSTRACT What people mean by randomness should be taken into account when teaching statistical inference. This experiment explored subjective beliefs about randomness and probability through two successive tasks. Subjects were asked to categorize 16 familiar items: 8 real items from everyday life experiences, and 8 stochastic items involving a repeatable process. Three groups of subjects differing according to their background knowledge of probability theory were compared. An important finding is that the arguments used to judge if an event is random and those to judge if it is not random appear to be of different natures. While the concept of probability has been introduced to formalize randomness, a majority of individuals appeared to consider probability as a primary concept. Keywords: Statistics education research; Probability; Randomness; Bayesian Inference 1. INTRODUCTION In recent years Bayesian statistical practice has considerably evolved. Nowadays, the frequentist approach is increasingly challenged among scientists by the Bayesian proponents (see e.g., D’Agostini, 2000; Lecoutre, Lecoutre & Poitevineau, 2001; Jaynes, 2003). In applied statistics, “objective Bayesian techniques” (Berger, 2004) are now a promising alternative to the traditional frequentist statistical inference procedures (significance tests and confidence intervals). Subjective Bayesian analysis also has a role to play in scientific investigations (see e.g., Kadane, 1996). -
- Memorial to Sir Harold Jeffreys 1891-1989, by John A. Hudson and Alan G. Smith
John A. Hudson and Alan G. Smith, University of Cambridge, Cambridge, England. Harold Jeffreys was one of this century's greatest applied mathematicians, using mathematics as a means of understanding the physical world. Principally he was a geophysicist, although statisticians may feel that his greatest contribution was to the theory of probability. However, his interest in the latter subject stemmed from his realization of the need for a clear statistical method of analysis of data, at that time travel-time readings from seismological stations across the world. He also made contributions to astronomy, fluid dynamics, meteorology, botany, psychology, and photography. Perhaps one can identify Jeffreys's principal interests from three major books that he wrote. His mathematical skills are displayed in Methods of Mathematical Physics, which he wrote with his wife Bertha Swirles Jeffreys and which was first published in 1946 and went through three editions. His Theory of Probability, published in 1939 and also running to three editions, espoused Bayesian statistics, which were very unfashionable at the time but which have since been taken up by others and shown to be extremely powerful for the analysis of data and, in particular, image enhancement. However, the book for which he is probably best known is The Earth, Its Origin, History and Physical Constitution, a broad-ranging account based on observations analyzed with care, using mathematics as a tool. Jeffreys's scientific method (now known as Inverse Theory) was a logical process, clearly stated in another of his books, Scientific Inference.
- Rothamsted in the Making of Sir Ronald Fisher ScD FRS
John Aldrich, University of Southampton, RSS, September 2019. The finish, 1962: "the most famous statistician and mathematical biologist in the world" dies. The start, 1919: a 29-year-old Cambridge BA in maths with no prospects takes a temporary job at Rothamsted Experimental Station. In between, 1919-33, at Rothamsted he makes a career for himself by creating a career. Rothamsted helped make Fisher by establishing and elevating the office of agricultural statistician, a position in which Fisher was unsurpassed, and by letting him do other things, including mathematical statistics, genetics and eugenics. Before Fisher, the problems were already there, viz. the relationship between harvest and weather, and the design of experimental trials and the analysis of their results; leading figures in agricultural science believed the problems could be treated statistically. Rothamsted was established in 1843 by Sir John Bennet Lawes (1814-1900), land-owner, fertiliser magnate and amateur scientist, and Sir Joseph Henry Gilbert (1817-1901), professional scientist (chemist), to investigate the effectiveness of fertilisers. In 1902 a tired Rothamsted got a make-over when Daniel Hall became Director, and public money fed growth. Crops and the weather were examined by agriculturalists, e.g. Lawes & Gilbert in 1880, and subsequently treated more by meteorologists. Experimental trials were a hotter topic, treated in the Journal of Agricultural Science by the leading figures Wood of Cambridge and Hall …
- A Parsimonious Tour of Bayesian Model Uncertainty
Pierre-Alexandre Mattei, Université Côte d'Azur; Inria, Maasai project-team; Laboratoire J.A. Dieudonné, UMR CNRS 7351. E-mail: [email protected]. Abstract: Modern statistical software and machine learning libraries are enabling semi-automated statistical inference. Within this context, it appears easier and easier to try and fit many models to the data at hand, thereby reversing the Fisherian way of conducting science by collecting data after the scientific hypothesis (and hence the model) has been determined. The renewed goal of the statistician becomes to help the practitioner choose within such large and heterogeneous families of models, a task known as model selection. The Bayesian paradigm offers a systematized way of assessing this problem. This approach, launched by Harold Jeffreys in his 1939 book Theory of Probability, has witnessed a remarkable evolution in the last decades that has brought about several new theoretical and methodological advances. Some of these recent developments are the focus of this survey, which tries to present a unifying perspective on work carried out by different communities. In particular, we focus on the non-asymptotic out-of-sample performance of Bayesian model selection and averaging techniques, and draw connections with penalized maximum likelihood. We also describe recent extensions to wider classes of probabilistic frameworks, including high-dimensional, unidentifiable, or likelihood-free models. Contents: 1. Introduction: collecting data, fitting many models. 2. A brief history of Bayesian model uncertainty. 3. The foundations of Bayesian model uncertainty. 3.1 Handling model uncertainty with Bayes's theorem …
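As a pointer to what Section 3.1 covers, the standard formulation of model uncertainty via Bayes's theorem is sketched below (a textbook rendering, not copied from the survey):

```latex
% Posterior probability of model M_k among candidates M_1, ..., M_K,
% given data D; p(D | M_k) is the marginal likelihood (evidence),
% obtained by integrating the likelihood over the prior on the
% model-specific parameters theta_k. This integration is what
% penalizes over-complex models (the Occam factor).
\begin{align*}
p(M_k \mid D) &= \frac{p(D \mid M_k)\, p(M_k)}
                     {\sum_{l=1}^{K} p(D \mid M_l)\, p(M_l)},\\
p(D \mid M_k) &= \int p(D \mid \theta_k, M_k)\,
                 p(\theta_k \mid M_k)\, \mathrm{d}\theta_k.
\end{align*}
```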