Cautionary Tales in Designed Experiments

Total Pages: 16

File Type: PDF, Size: 1020 KB

Cautionary Tales in Designed Experiments

The correct bibliographic citation for this manual is as follows: Salsburg, David S. 2020. Cautionary Tales in Designed Experiments. Cary, NC: SAS Institute Inc.

Copyright © 2020, SAS Institute Inc., Cary, NC, USA

ISBN 978-1-952363-24-5 (Paperback)
ISBN 978-1-952363-23-8 (Web PDF)

All Rights Reserved. Produced in the United States of America.

For a hard copy book: No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, or otherwise, without the prior written permission of the publisher, SAS Institute Inc.

For a web download or e-book: Your use of this publication shall be governed by the terms established by the vendor at the time you acquire this publication. The scanning, uploading, and distribution of this book via the Internet or any other means without the permission of the publisher is illegal and punishable by law. Please purchase only authorized electronic editions and do not participate in or encourage electronic piracy of copyrighted materials. Your support of others’ rights is appreciated.

U.S. Government License Rights; Restricted Rights: The Software and its documentation is commercial computer software developed at private expense and is provided with RESTRICTED RIGHTS to the United States Government. Use, duplication, or disclosure of the Software by the United States Government is subject to the license terms of this Agreement pursuant to, as applicable, FAR 12.212, DFAR 227.7202-1(a), DFAR 227.7202-3(a), and DFAR 227.7202-4, and, to the extent required under U.S. federal law, the minimum restricted rights as set out in FAR 52.227-19 (DEC 2007). If FAR 52.227-19 is applicable, this provision serves as notice under clause (c) thereof and no other notice is required to be affixed to the Software or documentation. The Government’s rights in Software and documentation shall be only those set forth in this Agreement.

SAS Institute Inc., SAS Campus Drive, Cary, NC 27513-2414

September 2020

SAS® and all other SAS Institute Inc. product or service names are registered trademarks or trademarks of SAS Institute Inc. in the USA and other countries. ® indicates USA registration. Other brand and product names are trademarks of their respective companies.

SAS software may be provided with certain third-party software, including but not limited to open-source software, which is licensed under its applicable third-party software license agreement. For license information about third-party software distributed with SAS software, refer to http://support.sas.com/thirdpartylicenses.

Contents

Preface

Chapter 1: Experimental Design from Harvey to Fisher
1.1 The Circulation of the Blood
1.2 The Statistical Model of Experimental Design
1.3 Summary
References

Chapter 2: Measuring the “Good” in Milk
2.1 Pasteurization and the “Good” in Milk
2.2 Experiments on Mice
2.3 Summary
Reference

Chapter 3: Designing the Lanarkshire Milk Experiment
3.1 “Man Is Not a Big Mouse”
3.2 Measuring the “Good” in Milk
3.3 Adjusting for Differences among the Children
3.4 Summary
Reference

Chapter 4: The Execution of the Lanarkshire Experiment
4.1 Matching the Children
4.2 Fisher’s Model
4.3 Randomization
4.4 The Lanarkshire Experiment Begins
4.5 Summary

Chapter 5: The Results of the Lanarkshire Experiment
5.1 Gossett’s Criticism
5.2 What Went Wrong
5.3 Mismanaged Experiments
5.4 Summary
References

Chapter 6: How to Randomize Experimental Units
6.1 Random versus Haphazard Treatment Assignment
6.2 Using Random Numbers to Assign Treatments
6.3 Pseudo Random Numbers on the Computer
6.4 How a Pseudo Random Number Generator Works
6.5 Summary
References

Chapter 7: When Randomization Cannot Be Done
7.1 Observational Studies
7.2 The Development of “Evidence-Based Medicine”
7.3 The DES Debacle
7.4 Dieckmann’s Study
7.5 The Misuse of Observational Studies
7.6 Summary
References

Chapter 8: Different Designs for Experiments
8.1 Crossover Designs
8.2 Carry-over Effects
8.3 Latent Response Models
8.4 Summary
Reference

Chapter 9: Analysis of Variance
9.1 Fisher’s Model
9.2 “Errors” to “Residual”
Recommended publications
  • Harmonizing Fully Optimal Designs with Classic Randomization in Fixed Trial Experiments (arXiv:1810.08389v1 [stat.ME], 19 Oct 2018)
    Harmonizing Fully Optimal Designs with Classic Randomization in Fixed Trial Experiments. Adam Kapelner (Department of Mathematics, Queens College, CUNY), Abba M. Krieger (Department of Statistics, The Wharton School of the University of Pennsylvania), Uri Shalit and David Azriel (Faculty of Industrial Engineering and Management, The Technion). October 22, 2018.

    Abstract: There is a movement in design of experiments away from the classic randomization put forward by Fisher, Cochran and others to one based on optimization. In fixed-sample trials comparing two groups, measurements of subjects are known in advance and subjects can be divided optimally into two groups based on a criterion of homogeneity or "imbalance" between the two groups. These designs are far from random. This paper seeks to understand the benefits and the costs over classic randomization in the context of different performance criteria such as Efron's worst-case analysis. In the criterion that we motivate, randomization beats optimization. However, the optimal design is shown to lie between these two extremes. Much-needed further work will provide a procedure to find these optimal designs in different scenarios in practice. Until then, it is best to randomize.

    Keywords: randomization, experimental design, optimization, restricted randomization

    1 Introduction. In this short survey, we wish to investigate performance differences between completely random experimental designs and non-random designs that optimize for observed covariate imbalance. We demonstrate that depending on how we wish to evaluate our estimator, the optimal strategy will change. We motivate a performance criterion that, when applied, does not crown either as the better choice, but rather a design that is a harmony between the two of them.
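    The tension the authors describe is easy to see numerically. Below is a minimal sketch (our illustration, not the paper's procedure; the mean-difference imbalance measure and the best-of-many search are assumptions made for demonstration) comparing the covariate imbalance of one classic randomization against the best of many candidate randomizations:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 5
X = rng.normal(size=(n, p))  # subject covariates, known in advance

def imbalance(X, assign):
    # squared norm of the difference in covariate means between the two arms
    diff = X[assign == 1].mean(axis=0) - X[assign == 0].mean(axis=0)
    return float(diff @ diff)

def random_split(rng, n):
    # classic randomization with a fixed 50/50 split
    assign = np.array([0] * (n // 2) + [1] * (n - n // 2))
    rng.shuffle(assign)
    return assign

one_random = imbalance(X, random_split(rng, n))
best_of_many = min(imbalance(X, random_split(rng, n)) for _ in range(10_000))

print(f"one classic randomization: {one_random:.4f}")
print(f"best of 10,000 candidates: {best_of_many:.6f}")  # far smaller, far less random
```

    Pushing the search further drives imbalance toward its optimum while concentrating on a handful of near-deterministic allocations; the costs and benefits of that trade are exactly what the paper analyzes.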
  • Optimal Design of Experiments Session 4: Some Theory
    Optimal design of experiments. Session 4: Some theory. Peter Goos.

    Optimal design theory:
    - continuous or approximate optimal designs: implicitly assume an infinitely large number of observations is available; mathematically convenient.
    - exact or discrete designs: finite number of observations; fewer theoretical results.

    Continuous versus exact designs. A continuous design lists design points (support points) $x_1, x_2, \ldots, x_h$ with weights $w_1, w_2, \ldots, w_h$ ($w_i \ge 0$, $\sum_i w_i = 1$):
    $$\xi = \begin{Bmatrix} x_1 & x_2 & \cdots & x_h \\ w_1 & w_2 & \cdots & w_h \end{Bmatrix}.$$
    An exact design lists (integer) numbers of observations $n_1, n_2, \ldots, n_h$ at $x_1, \ldots, x_h$, with $\sum_i n_i = n$:
    $$\xi = \begin{Bmatrix} x_1 & x_2 & \cdots & x_h \\ n_1 & n_2 & \cdots & n_h \end{Bmatrix}.$$
    In both cases $h$ is the number of different design points.

    Information matrix. All criteria used to select a design are based on the information matrix. For a two-factor model with columns $I, x_1, x_2, x_1 x_2, x_1^2, x_2^2$, the model matrix is
    $$X = \begin{bmatrix} f^T(x_1) \\ f^T(x_2) \\ \vdots \\ f^T(x_n) \end{bmatrix}, \quad \text{with rows such as } (1, -1, -1, +1, +1, +1),\; (1, +1, -1, -1, +1, +1),\; \ldots,\; (1, 0, 0, 0, 0, 0).$$
    The (total) information matrix is
    $$M = \frac{1}{\sigma^2} X^T X = \frac{1}{\sigma^2} \sum_{i=1}^{n} f(x_i) f^T(x_i),$$
    and the per-observation information matrix is $\frac{1}{\sigma^2} f(x_i) f^T(x_i)$.

    Information matrix, industrial example:
    $$\frac{1}{\sigma^2} X^T X = \frac{1}{\sigma^2} \begin{bmatrix} 11 & 0 & 0 & 0 & 6 & 6 \\ 0 & 6 & 0 & 0 & 0 & 0 \\ 0 & 0 & 6 & 0 & 0 & 0 \\ 0 & 0 & 0 & 4 & 0 & 0 \\ 6 & 0 & 0 & 0 & 6 & 4 \\ 6 & 0 & 0 & 0 & 4 & 6 \end{bmatrix}.$$

    For exact designs,
    $$M = \sum_{i=1}^{h} n_i f(x_i) f^T(x_i),$$
    where $h$ is the number of different points and $n_i$ the number of replications of point $i$; for continuous designs,
    $$M = \sum_{i=1}^{h} w_i f(x_i) f^T(x_i).$$

    D-optimality criterion: seeks designs that minimize the variance-covariance matrix of $\hat{\beta}$, $\operatorname{var}(\hat{\beta}) = \sigma^2 (X^T X)^{-1}$.
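    To make the notation concrete, here is a short sketch (our illustration, not part of the slides) that assembles the information matrix of an 11-run exact design; with $\sigma^2 = 1$ it reproduces the 6x6 industrial-example matrix above, and det(M) is the quantity a D-optimality search would maximize:

```python
import numpy as np

def f(x1, x2):
    # model expansion: I, x1, x2, x1*x2, x1^2, x2^2
    return np.array([1.0, x1, x2, x1 * x2, x1**2, x2**2])

def info_matrix(points, sigma2=1.0):
    # M = (1/sigma^2) * sum_i f(x_i) f(x_i)^T  (exact design, one run per point)
    X = np.array([f(*p) for p in points])
    return (X.T @ X) / sigma2

corners = [(-1, -1), (1, -1), (-1, 1), (1, 1)]
axial = [(-1, 0), (1, 0), (0, -1), (0, 1)]
center = [(0, 0)] * 3

# 11-run face-centred design; its information matrix matches the example above
M = info_matrix(corners + axial + center)
print(M.astype(int))
print("det(M) =", round(np.linalg.det(M)))

# a replicated 2x2 factorial of the same size cannot support the full
# quadratic model: its x1^2 and x2^2 columns coincide, so det(M) = 0
M2 = info_matrix(corners * 2 + center)
print("det(M2) =", round(np.linalg.det(M2)))
```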
  • You May Be (Stuck) Here! and Here Are Some Potential Reasons Why
    You may be (stuck) here! And here are some potential reasons why. As published in Benchmarks RSS Matters, May 2015: http://web3.unt.edu/benchmarks/issues/2015/05/rss-matters

    Jon Starkweather, PhD ([email protected]), Consultant, Research and Statistical Support: http://www.unt.edu, http://www.unt.edu/rss. RSS hosts a number of “Short Courses”; a list of them is available at http://www.unt.edu/rss/Instructional.htm. Those interested in learning more about R, or how to use it, can find information here: http://www.unt.edu/rss/class/Jon/R_SC

    I often read R-bloggers (Galili, 2015) to see new and exciting things users are doing in the wonderful world of R. Recently I came across Norm Matloff’s (2014) blog post with the title “Why are we still teaching t-tests?” To be honest, many RSS personnel have echoed Norm’s sentiments over the years. There do seem to be some fields which are perpetually stuck in decades long past, in terms of the statistical methods they teach and use. Reading Norm’s post got me thinking it might be good to offer some explanations, or at least opinions, on why some fields tend to be stubbornly behind the analytic times. This month’s article offers some of my own thoughts on the matter. I offer these opinions having been academically raised in one such Rip Van Winkle (Irving, 1819) field and having subsequently realized how much of what I was taught has very little practical utility with real-world research problems and data.
  • Advanced Power Analysis Workshop
    Advanced Power Analysis Workshop. PD Dr. Felix Schönbrodt & Dr. Stella Bollmann, Ludwig-Maximilians-Universität München. www.nicebread.de | www.researchtransparency.org | Twitter: @nicebread303

    - Part I: General concepts of power analysis
    - Part II: Hands-on: repeated measures ANOVA and multiple regression
    - Part III: Power analysis in multilevel models
    - Part IV: Tailored design analyses by simulations

    Part I: General concepts of power analysis
    - What is “statistical power”?
    - Why power is important
    - From power analysis to design analysis: planning for precision (and other stuff)
    - How to determine the expected/minimally interesting effect size

    What is statistical power? A 2x2 classification matrix:

                                            Reality: effect present    Reality: no effect present
    Test indicates effect (p < .05)         true positive              false positive
    Test indicates no effect (p > .05)      false negative             true negative

    [Figure: Type I and Type II errors, https://effectsizefaq.files.wordpress.com/2010/05/type-i-and-type-ii-errors.jpg]

    An a priori power analysis assumes that the effect exists in reality: with α = 5%, the chance of a true positive (p < .05) is the power, 1 − β, and the chance of a false negative (p > .05) is β, here 20%.

    Calibrate your power feeling. All a priori power analyses with α = 5%, β = 20%, and two-tailed:

    Test                                                          Total n
    Two-sample t test (between design), d = 0.5                   128 (64 each group)
    One-sample t test (within design), d = 0.5                    34
    Correlation: r = .21                                          173
    Difference between two correlations,
      r₁ = .15, r₂ = .40 (q = 0.273)                              428
    ANOVA, 2x2 design: interaction effect, f = 0.21               180 (45 each group)

    The power of within-subject designs. May, K., & Hittner, J.
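    The first row of the table is easy to reproduce. A minimal sketch (ours, not workshop material) using statsmodels to solve for the per-group n of a two-sample t test with d = 0.5, α = .05 (two-tailed), and power = .80:

```python
from statsmodels.stats.power import TTestIndPower

# a priori power analysis: two-sample t test, d = 0.5,
# alpha = .05 two-sided, power = 1 - beta = .80
n_per_group = TTestIndPower().solve_power(
    effect_size=0.5, alpha=0.05, power=0.80, alternative='two-sided')
print(round(n_per_group), "per group")  # 64, i.e. total n = 128
```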
  • Approximation Algorithms for D-Optimal Design
    Approximation Algorithms for D-optimal Design. Mohit Singh (School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA 30332, [email protected]); Weijun Xie (Department of Industrial and Systems Engineering, Virginia Tech, Blacksburg, VA 24061, [email protected]).

    Experimental design is a classical statistics problem and its aim is to estimate an unknown m-dimensional vector β from linear measurements where a Gaussian noise is introduced in each measurement. For the combinatorial experimental design problem, the goal is to pick k out of the given n experiments so as to make the most accurate estimate of the unknown parameters, denoted as β̂. In this paper, we will study one of the most robust measures of error estimation, the D-optimality criterion, which corresponds to minimizing the volume of the confidence ellipsoid for the estimation error β − β̂. The problem gives rise to two natural variants depending on whether repetitions of experiments are allowed or not. We first propose an approximation algorithm with a 1/e-approximation for the D-optimal design problem with and without repetitions, giving the first constant-factor approximation for the problem. We then analyze another sampling approximation algorithm and prove that it is a (1 − ε)-approximation if k ≥ 4m/ε + (12/ε²) log(1/ε), for any ε ∈ (0, 1). Finally, for D-optimal design with repetitions, we study a different algorithm proposed in the literature and show that it can improve this asymptotic approximation ratio.

    Key words: D-optimal design; approximation algorithm; determinant; derandomization.

    1. Introduction

    Experimental design is a classical problem in statistics [5, 17, 18, 23, 31] and recently has also been applied to machine learning [2, 39].
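    For intuition about the combinatorial problem (pick k of n rows to maximize the determinant), here is a simple greedy baseline (our sketch; it is not the paper's sampling-based algorithm and carries no approximation guarantee of its own). It uses the matrix determinant lemma, det(M + f fᵀ) = det(M)(1 + fᵀM⁻¹f), to add at each step the experiment that most increases the determinant:

```python
import numpy as np

def greedy_d_optimal(X, k, ridge=1e-6):
    """Pick k of n rows of X to (heuristically) maximize det(X_S^T X_S).

    Each step adds the row f with the largest f^T M^{-1} f, which by the
    matrix determinant lemma multiplies det(M) by (1 + f^T M^{-1} f).
    A tiny ridge keeps M invertible before m rows have been chosen.
    """
    n, m = X.shape
    M = ridge * np.eye(m)
    chosen = []
    available = set(range(n))  # without repetition
    for _ in range(k):
        Minv = np.linalg.inv(M)
        scores = {i: X[i] @ Minv @ X[i] for i in available}
        best = max(scores, key=scores.get)
        chosen.append(best)
        available.remove(best)
        M = M + np.outer(X[best], X[best])
    return chosen, M

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))  # n = 50 candidate experiments, m = 4
rows, M = greedy_d_optimal(X, k=10)
print(sorted(rows), f"log det = {np.linalg.slogdet(M)[1]:.2f}")
```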
  • The Political Methodologist Newsletter of the Political Methodology Section American Political Science Association Volume 10, Number 2, Spring, 2002
    The Political Methodologist. Newsletter of the Political Methodology Section, American Political Science Association. Volume 10, Number 2, Spring 2002. Editor: Suzanna De Boef, Pennsylvania State University ([email protected]).

    Contents
    - Notes from the Editor
    - Teaching and Learning Graduate Methods
      - Michael Herron: Teaching Introductory Probability Theory
      - Eric Plutzer: First Things First
      - Lawrence J. Grossback: Reflections from a Small and Diverse Program
      - Charles Tien: A Stealth Approach
      - Christopher H. Achen: Advice for Students
    - Testing Theory
      - John Londregan: Political Theory and Political Reality
      - Rebecca Morton: EITM*: Experimental Implications of Theoretical Models
    - Articles
      - Andrew D. Martin: LaTeX For the Rest of Us
    - Summer 2002 at ICPSR
    - Notes on the 35th Essex Summer Program
    - EITM Summer Training Institute Announcement
    - Note from the Editor of PA

    Notes From the Editor

    This issue of TPM features articles on teaching the first course in the graduate methods sequence and on testing theory with empirical methods. The first course presents unique challenges: what to expect of students and where to begin. The contributions suggest that the first graduate methods course varies greatly with respect to goals, content, and rigor across programs. Whatever the nature of the course and whether you teach or are taking your first methods course, Chris Achen’s advice for students beginning the methods sequence will be excellent reading.

    With the support of the National Science Foundation, the empirical implications of theoretical models (EITM) are the subject of increased attention. Given the empirical nature of the endeavor, EITM should inspire us.
  • A Review on Optimal Experimental Design
    A Review On Optimal Experimental Design. Noha A. Youssef, London School of Economics.

    1 Introduction

    Finding an optimal experimental design is considered one of the most important topics in the context of experimental design. An optimal design is the design that achieves some targets of our interest. The Bayesian and the non-Bayesian approaches have introduced criteria that coincide with the target of the experiment, based on some specific utility or loss functions. The choice between the Bayesian and non-Bayesian approaches depends on the availability of prior information and also on the computational difficulties that might be encountered when using either of them.

    This report aims mainly to provide a short summary on optimal experimental design, focusing more on the Bayesian one. Therefore a background about Bayesian analysis is given in Section 2. Optimality criteria for non-Bayesian design of experiments are reviewed in Section 3. Section 4 illustrates how Bayesian analysis is employed in the design of experiments. The remaining sections of this report give a brief view of the paper written by Chaloner & Verdinelli (1995). An illustration of the Bayesian optimality criteria for the normal linear model associated with different objectives is given in Section 5; some problems related to Bayesian optimal designs for the normal linear model are also discussed there. Section 6 presents some ideas for Bayesian optimal design for one-way and two-way analysis of variance. Section 7 discusses the problems associated with nonlinear models and presents some ideas for solving these problems.
  • Minimax Efficient Random Designs with Application to Model-Robust Design
    Minimax efficient random designs with application to model-robust design for prediction. Tim Waite ([email protected]), School of Mathematics, University of Manchester, UK. Joint work with Dave Woods, S3RI, University of Southampton, UK. Supported by the UK Engineering and Physical Sciences Research Council. 8 Aug 2017, Banff, AB, Canada.

    Outline:
    - Randomized decisions and experimental design
    - Random designs for prediction, correct model: extension of G-optimality
    - Model-robust random designs for prediction
    - Theoretical results: tractable classes
    - Algorithms for optimization
    - Examples: illustration of bias-variance tradeoff

    Randomized decisions. A well-known fact in statistical decision theory and game theory: under minimax expected loss, random decisions beat deterministic ones. Experimental design can be viewed as a game played by the Statistician against nature (Wu, 1981; Berger, 1985). Therefore a random design strategy should often be beneficial. Despite this, consideration of minimax efficient random design strategies is relatively unusual.

    Game theory. Consider a two-person zero-sum game. Player I takes action θ ∈ Θ and Player II takes action ξ ∈ Ξ. Player II experiences a loss L(θ, ξ), to be minimized. A random strategy for Player II is a probability measure π on Ξ; deterministic actions are a special case (point-mass distribution). Strategy π₁ is preferred to π₂ (π₁ ≻ π₂) iff E_{π₁} L(θ, ξ) < E_{π₂} L(θ, ξ).
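    The opening claim, that random decisions beat deterministic ones under minimax expected loss, can be checked on a toy zero-sum game (our example, not from the talk): every deterministic action of Player II has worst-case loss 1, while the uniform random strategy has worst-case expected loss 0.5.

```python
import numpy as np

# Toy zero-sum game: Player II (statistician) picks a row, nature picks a column.
# L[i, j] is the loss to Player II.
L = np.array([[1.0, 0.0],
              [0.0, 1.0]])

# Deterministic strategies: nature answers each row with its worst column.
det_worst = L.max(axis=1)       # [1.0, 1.0]

# Random strategy: uniform mixture over the rows.
pi = np.array([0.5, 0.5])
mixed_worst = (pi @ L).max()    # 0.5

print("worst-case loss, deterministic rows:", det_worst)
print("worst-case expected loss, uniform mix:", mixed_worst)
```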
  • Practical Aspects for Designing Statistically Optimal Experiments
    Journal of Statistical Science and Application 2 (2014) 85-92. David Publishing.

    Practical Aspects for Designing Statistically Optimal Experiments. Mark J. Anderson, Patrick J. Whitcomb, Stat-Ease, Inc., Minneapolis, MN, USA.

    Due to operational or physical considerations, standard factorial and response surface method (RSM) designs of experiments (DOE) often prove to be unsuitable. In such cases a computer-generated, statistically optimal design fills the breach. This article explores vital mathematical properties for evaluating alternative designs, with a focus on what is really important for industrial experimenters. To assess “goodness of design,” such evaluations must consider the model choice, specific optimality criteria (in particular D and I), precision of estimation based on the fraction of design space (FDS), the number of runs to achieve required precision, lack-of-fit testing, and so forth. With a focus on RSM, all these issues are considered at a practical level, keeping engineers and scientists in mind. This brings to the forefront such considerations as subject-matter knowledge from first principles and experience, factor choice, and the feasibility of the experiment design.

    Key words: design of experiments, optimal design, response surface methods, fraction of design space.

    Introduction

    Statistically optimal designs emerged over a half century ago (Kiefer, 1959) to provide these advantages over classical templates for factorials and RSM: efficiently filling out an irregularly shaped experimental region such as that shown in Figure 1 (Anderson and Whitcomb, 2005).

    [Figure 1. Example of irregularly-shaped experimental region (a molding process). Axes: A - Temperature (450-460), B - Pressure (1000-1300); regions labeled “Flash” and “Short Shot”.]
  • A Conversation with Richard A. Olshen
    Statistical Science, 2015, Vol. 30, No. 1, 118-132. DOI: 10.1214/14-STS492. © Institute of Mathematical Statistics, 2015.

    A Conversation with Richard A. Olshen. John A. Rice.

    Abstract. Richard Olshen was born in Portland, Oregon, on May 17, 1942. Richard spent his early years in Chevy Chase, Maryland, but has lived most of his life in California. He received an A.B. in Statistics at the University of California, Berkeley, in 1963, and a Ph.D. in Statistics from Yale University in 1966, writing his dissertation under the direction of Jimmie Savage and Frank Anscombe. He served as Research Staff Statistician and Lecturer at Yale in 1966–1967. Richard accepted a faculty appointment at Stanford University in 1967, and has held tenured faculty positions at the University of Michigan (1972–1975), the University of California, San Diego (1975–1989), and Stanford University (since 1989). At Stanford, he is Professor of Health Research and Policy (Biostatistics), Chief of the Division of Biostatistics (since 1998), and Professor (by courtesy) of Electrical Engineering and of Statistics. At various times, he has had visiting faculty positions at Columbia, Harvard, MIT, Stanford, and the Hebrew University. Richard’s research interests are in statistics and mathematics and their applications to medicine and biology. Much of his work has concerned binary tree-structured algorithms for classification, regression, survival analysis, and clustering. Those for classification and survival analysis have been used with success in computer-aided diagnosis and prognosis, especially in cardiology, oncology, and toxicology. He coauthored the 1984 book Classification and Regression Trees (with Leo Breiman, Jerome Friedman, and Charles Stone), which gives motivation, algorithms, various examples, and mathematical theory for what have come to be known as CART algorithms.
  • Inferential Statistics: The Controlled Experiment, Hypothesis Testing, and the z Distribution
    5. Inferential Statistics: The Controlled Experiment, Hypothesis Testing, and the z Distribution

    Chapter 5 Goals
    - Understand hypothesis testing in controlled experiments
    - Understand why the null hypothesis is usually a conservative beginning
    - Understand nondirectional and directional alternative hypotheses and their advantages and disadvantages
    - Learn the four possible outcomes in hypothesis testing
    - Learn the difference between significant and nonsignificant statistical findings
    - Learn the fine art of baloney detection
    - Learn again why experimental designs are more important than the statistical analyses
    - Learn the basics of probability theory, some theorems, and probability distributions

    Recently, when I was shopping at the grocery store, I became aware that music was softly playing throughout the store (in this case, the ancient rock group Strawberry Alarm Clock’s “Incense and Peppermint”). In a curious mood, I asked the store manager, “Why music?” and “Why this type of music?” In a very serious manner, he told me that “studies” had shown people buy more groceries listening to this type of music. Perhaps more businesses would stay in business if they were more skeptical and fell less often for scams that promise “a buying atmosphere.”

    In this chapter on inferential statistics, you will learn how to test hypotheses such as “music makes people buy more,” or “HIV does not cause AIDS,” or “moving one’s eyes back and forth helps to forget traumatic events.” Inferential statistics is concerned with making conclusions about populations from smaller samples drawn from the population.
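    As a concrete taste of the kind of test this chapter builds toward, here is a minimal sketch of a one-sample z test (our illustration; the store scenario and all numbers are made up) checking whether a sample mean differs from a known population mean when the population standard deviation is known:

```python
import math
from scipy import stats

# Hypothetical numbers: population mean spend is $50 (sd $10, known);
# a sample of 36 shoppers exposed to music averaged $53.
mu0, sigma = 50.0, 10.0
n, xbar = 36, 53.0

z = (xbar - mu0) / (sigma / math.sqrt(n))   # z = 1.80
p = 2 * (1 - stats.norm.cdf(abs(z)))        # two-sided p ~ 0.072

print(f"z = {z:.2f}, two-sided p = {p:.3f}")
# p > .05, so at the 5% level we fail to reject the null of "no effect of music"
```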
  • “I Didn't Want to Be a Statistician”
    “I didn’t want to be a statistician”: Making mathematical statisticians in the Second World War. John Aldrich, University of Southampton. Seminar, Durham, January 2018.

    The individual before the event: “I was interested in mathematics. I wanted to be either an analyst or possibly a mathematical physicist—I didn't want to be a statistician.” (David Cox, interview, 1994)

    A generation after the event: “There was a large increase in the number of people who knew that statistics was an interesting subject. They had been given an excellent training free of charge.” (George Barnard & Robin Plackett (1985), Statistics in the United Kingdom, 1939-45)

    Cox, Barnard, and Plackett were among the people who became mathematical statisticians: the people born around 1920 and with a ‘name’ by the 60s, the “20/60s”. Robin Plackett was typical: born in 1920; Cambridge mathematics undergraduate, 1940; off the conveyor belt from Cambridge mathematics to statistics war-work at SR17, 1942; Lecturer in Statistics at Liverpool in 1946; Professor of Statistics, King’s College, Durham, 1962.

    [Slide: Some 20/60s (in 1968)]

    “It is interesting to note that a number of these men now hold statistical chairs in this country” (Egon Pearson on SR17, in 1973; in 1939 he was the UK’s only professor of statistics). Among them: Dennis Lindley (Aberystwyth, 1960), Peter Armitage (School of Hygiene, 1961), Robin Plackett (Durham/Newcastle, 1962), H. J. Godwin (Royal Holloway, 1968), and Maurice Walker (Sheffield, 1972).

    SR17 women in statistical chairs? None. There were few women in SR17: a small skills pool (in the 30s, Cambridge graduated five times more men than women), and post-war careers were not in statistics or universities. Christine Stockman (1923-2015): Maths at Cambridge.