Cosmological Model Selection and Akaike's Criterion


A thesis presented to the faculty of the College of Arts and Sciences of Ohio University
in partial fulfillment of the requirements for the degree of Master of Arts

Christopher S. Arledge
August 2015

© 2015 Christopher S. Arledge. All Rights Reserved.

This thesis titled
Cosmological Model Selection and Akaike's Criterion
by CHRISTOPHER S. ARLEDGE
has been approved for the Department of Philosophy and the College of Arts and Sciences by

Philip Ehrlich
Professor of Philosophy

Robert Frank
Dean, College of Arts and Sciences

ABSTRACT

ARLEDGE, CHRISTOPHER S., M.A., August 2015, Philosophy
Cosmological Model Selection and Akaike's Criterion
Director of Thesis: Philip Ehrlich

Contemporary cosmology is teeming with model underdetermination, and cosmologists are looking for methods with which to relieve some of this underdetermination. One such method that has found its way into cosmology in recent years is the Akaike Information Criterion (AIC). The criterion is meant to select the model that loses the least amount of information in its approximation of the data, and, furthermore, AIC shows a preference for simplicity by containing a penalty term that penalizes models with excessive complexity. The principal aim of this paper is to investigate some of the strengths and weaknesses of AIC against two philosophical backdrops in order to determine its usefulness in cosmological model selection. The backdrops or positions against which AIC will be assessed are I) realist and II) antirealist. It will be argued that on both of these positions there is at least one feature of AIC that proves problematic for the satisfaction of the aims of the position.

ACKNOWLEDGEMENTS

I would like to express my gratitude to Philip Ehrlich for his invaluable help during the composition of this thesis. I'd also like to thank Yoichi Ishida for his helpful comments. I would like to thank Jordan Shonberg and Ryan Ross for their help in making the thesis more readable. Finally, I would like to extend a special thanks to John Norton for his willingness to be on the committee and for his insightful comments and criticisms.

TABLE OF CONTENTS

Abstract
Acknowledgments
1. Introduction
2. Akaike Information Criterion
3. Philosophical Positions
4. Limiting Features of AIC
5. Conclusion
References

1. INTRODUCTION

Contemporary physical cosmology is rife with underdetermination. What underdetermination amounts to is the claim that for some set of empirical data x there are multiple theories that can each provide a good account of x, and yet each theory is equally well supported on the basis of x.1 Underdetermination is often discussed in the context of scientific theories. But cosmologists are faced with a slightly different sort of underdetermination, namely underdetermination of cosmological models. In model underdetermination, it is the various models that are built out of the foundational theories that are underdetermined and not the foundational theories themselves.
So in cosmology the foundational theories of General Relativity (GR) and Quantum Mechanics (QM) are taken for granted, and it is the models constructed out of these theories that face the challenge of underdetermination (Butterfield 2012, 2014). A prime example of model underdetermination in cosmology is that of dark energy modeling (which models the acceleration of the expansion factor of the universe). Presently there are no fewer than nine mutually incompatible dark energy models in competition with one another.2 The available evidence is insufficient to offer an empirical distinction between the models, though nothing inherent in the models inhibits future evidence from providing an empirical distinction. Another example of cosmological model underdetermination is one in which relativistic dark matter models and modified gravity models compete for primacy in accounting for the rotation curves of spiral galaxies and other related phenomena.3

Footnotes:
1. The term "account" expresses the ability of a theory T to save the phenomena with regard to a particular data set x.
2. Several of these models are already considered to be less viable than others. For instance, the cosmological constant model is considered much more viable than the Dvali-Gabadadze-Porrati model, which is a model formulated on brane-world assumptions (cf. Li et al. 2010).
3. Of course, the juxtaposition of relativistic dark matter models and modified gravity models may ultimately result in theory underdetermination as the modified gravity models draw GR into question. Since the paper is concerned with model selection, however, the underdetermination of the foundational theories underlying these models will not be treated.

The extent to which cosmological models are underdetermined depends on the conception of underdetermination invoked. One conception of underdetermination is that of Pierre Duhem (1954), who advocates a kind of holist underdetermination. On this view a hypothesis H cannot be tested in isolation, since there is always a body of auxiliary hypotheses Hn that surround H. Therefore when an experiment fails to bear out the predictions of H, it need not be the case that H is falsified, since it could always be one of the auxiliary hypotheses that is the troublemaker. Consider an experiment in which a telescope is used to test some prediction made by an astronomical theory. If the prediction is not borne out, it does not follow that the astronomical theory has been falsified, since it could be the optical theory on which the telescope is built or some other auxiliary hypothesis that is actually the problem. Hence for any given experiment, it always remains underdetermined as to which hypothesis has actually been falsified.

On another conception, theories or models are underdetermined based on the evidence that is currently available, meaning that none of the present evidence can better support one of the competing theories over another. However, this conception of underdetermination does not preclude the possibility of future evidence providing better support to one of the competing theories. The theories are therefore underdetermined in practice. Proponents of this conception of underdetermination include Larry Laudan and Jarrett Leplin (1991).
Laudan and Leplin argue that because our experimental methods and our extra-empirical assumptions change with time, it is unwarranted to conclude that any two theories that are said to be empirically equivalent at time T will remain equivalent at some future time T1 (Stanford, 2013). Hence two theories might appear to be empirical equivalents at present, but in the future may be shown to be empirically disparate.

The two conceptions of underdetermination presented above make universal claims that may be seen as overzealous. Accordingly, on a third conception the underdetermination of theories or models is treated on a case-by-case basis. Some theories or models will have empirical equivalents that cannot be distinguished by any possible amount of evidence. Bas van Fraassen (1980, 46-69) contrasts formulations of Newton's theory that differ only in regard to the velocity of the solar system with respect to absolute space. Since any given constant velocity of the solar system with respect to absolute space would be observationally indistinguishable, no possible body of evidence will be able to resolve this underdetermination. On the other hand, some theories or models will be underdetermined with respect to the currently available data. Future data collection may show that one theory or model accounts for the data better than its competitor(s). A fairly recent example of this is the competition between the big-bang and steady-state models of the universe in the mid-20th century. Initially both models accounted for the observed data (e.g. Hubble's law). However, in the 1960s the discovery of the Cosmic Microwave Background radiation (CMB), which was predicted by the big-bang model, showed that the steady-state model could no longer account for the data when the CMB is included. Prior to the 1960s the two models were considered to be empirically equivalent, but after 1960 the models were shown to be empirically inequivalent, with greater support provided to the big-bang model.

Whether cosmological model underdetermination is of the second or the third kind, the point is clear: cosmologists need a method (or methods) to resolve some of the underdetermination. Various proposals have been made, ranging from parameter fitting to Bayesian Inference (BI) (Mukherjee and Parkinson 2008; Wandelt et al. 2013; Watkinson et al. 2012; Weinberg 2013).4 In recent years, however, cosmologists have begun to make use of a model selection criterion known as the Akaike Information Criterion (AIC) (Biesiada 2007; Godłowski and Szydłowski 2005; Li et al. 2010; Szydłowski et al. 2006; Tan and Biswas 2012). AIC differs from parameter estimation and BI in that it is an information-theoretic selection criterion. This means that AIC selects for models that lose the least amount of information in the approximation of the generating model (that is, the data-generating process).
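The trade-off described here follows Akaike's standard formula, AIC = 2k - 2 ln(L-hat), where L-hat is the maximized likelihood of a candidate model and k the number of fitted parameters; a lower AIC value indicates less estimated information loss relative to the unknown data-generating process, and the 2k term is the complexity penalty mentioned in the abstract. The short Python sketch below is not drawn from the thesis; it uses synthetic data, NumPy polynomial fits, and a Gaussian-error likelihood purely to illustrate how the penalty disfavors needlessly complex models.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 50)
    y = 1.0 + 2.0 * x + rng.normal(scale=0.1, size=x.size)   # the "true" generating process is linear

    def gaussian_aic(y, y_hat, n_coeffs):
        # AIC = 2k - 2 ln(L_hat) for a least-squares fit with Gaussian errors;
        # the error variance counts as one of the k estimated parameters.
        n = y.size
        rss = np.sum((y - y_hat) ** 2)
        sigma2 = rss / n                                      # maximum-likelihood variance estimate
        log_lik = -0.5 * n * (np.log(2.0 * np.pi * sigma2) + 1.0)
        k = n_coeffs + 1
        return 2.0 * k - 2.0 * log_lik

    for degree in (1, 2, 5):                                  # three competing polynomial "models"
        coeffs = np.polyfit(x, y, degree)
        y_hat = np.polyval(coeffs, x)
        print(f"degree {degree}: AIC = {gaussian_aic(y, y_hat, degree + 1):.1f}")

Because the synthetic data are generated by a linear law, the higher-degree fits reduce the residual sum of squares only marginally, and the 2k penalty typically leaves the linear model with the lowest AIC, which is the preference for simplicity the abstract attributes to the criterion.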