An Introduction to Generalized Linear Models Second Edition


An Introduction to Generalized Linear Models, Second Edition

CHAPMAN & HALL/CRC Texts in Statistical Science Series
Series Editors: C. Chatfield, University of Bath, UK; J. Zidek, University of British Columbia, Canada

Titles in the series:
- The Analysis of Time Series — An Introduction, Fifth Edition, C. Chatfield
- An Introduction to Generalized Linear Models, Second Edition, A.J. Dobson
- Applied Bayesian Forecasting and Time Series Analysis, A. Pole, M. West and J. Harrison
- Introduction to Multivariate Analysis, C. Chatfield and A.J. Collins
- Applied Nonparametric Statistical Methods, Third Edition, P. Sprent and N.C. Smeeton
- Introduction to Optimization Methods and their Applications in Statistics, B.S. Everitt
- Applied Statistics — Principles and Examples, D.R. Cox and E.J. Snell
- Large Sample Methods in Statistics, P.K. Sen and J. da Motta Singer
- Bayesian Data Analysis, A. Gelman, J. Carlin, H. Stern and D. Rubin
- Markov Chain Monte Carlo — Stochastic Simulation for Bayesian Inference, D. Gamerman
- Beyond ANOVA — Basics of Applied Statistics, R.G. Miller, Jr.
- Mathematical Statistics, K. Knight
- Computer-Aided Multivariate Analysis, Third Edition, A.A. Afifi and V.A. Clark
- Modeling and Analysis of Stochastic Systems, V. Kulkarni
- A Course in Categorical Data Analysis, T. Leonard
- Modelling Binary Data, D. Collett
- A Course in Large Sample Theory, T.S. Ferguson
- Modelling Survival Data in Medical Research, D. Collett
- Data Driven Statistical Methods, P. Sprent
- Multivariate Analysis of Variance and Repeated Measures — A Practical Approach for Behavioural Scientists, D.J. Hand and C.C. Taylor
- Decision Analysis — A Bayesian Approach, J.Q. Smith
- Multivariate Statistics — A Practical Approach, B. Flury and H. Riedwyl
- Elementary Applications of Probability Theory, Second Edition, H.C. Tuckwell
- Practical Data Analysis for Designed Experiments, B.S. Yandell
- Elements of Simulation, B.J.T. Morgan
- Practical Longitudinal Data Analysis, D.J. Hand and M. Crowder
- Epidemiology — Study Design and Data Analysis, M. Woodward
- Practical Statistics for Medical Research, D.G. Altman
- Essential Statistics, Fourth Edition, D.G. Rees
- Probability — Methods and Measurement, A. O'Hagan
- Interpreting Data — A First Course in Statistics, A.J.B. Anderson
- Problem Solving — A Statistician's Guide, Second Edition, C. Chatfield
- Randomization, Bootstrap and Monte Carlo Methods in Biology, Second Edition, B.F.J. Manly
- Statistical Theory, Fourth Edition, B.W. Lindgren
- Readings in Decision Analysis, S. French
- Statistics for Accountants, Fourth Edition, S. Letchford
- Sampling Methodologies with Applications, P. Rao
- Statistics for Technology — A Course in Applied Statistics, Third Edition, C. Chatfield
- Statistical Analysis of Reliability Data, M.J. Crowder, A.C. Kimber, T.J. Sweeting and R.L. Smith
- Statistics in Engineering — A Practical Approach, A.V. Metcalfe
- Statistical Methods for SPC and TQM, D. Bissell
- Statistics in Research and Development, Second Edition, R. Caulcutt
- Statistical Methods in Agriculture and Experimental Biology, Second Edition, R. Mead, R.N. Curnow and A.M. Hasted
- The Theory of Linear Models, B. Jørgensen
- Statistical Process Control — Theory and Practice, Third Edition, G.B. Wetherill and D.W. Brown

AN INTRODUCTION TO GENERALIZED LINEAR MODELS, SECOND EDITION
Annette J. Dobson

CHAPMAN & HALL/CRC, A CRC Press Company
Boca Raton, London, New York, Washington, D.C.

Library of Congress Cataloging-in-Publication Data:
Dobson, Annette J., 1945–
An introduction to generalized linear models / Annette J. Dobson. — 2nd ed.
p. cm. — (Chapman & Hall/CRC texts in statistical science series)
Includes bibliographical references and index.
ISBN 1-58488-165-8 (alk. paper)
1. Linear models (Statistics) I. Title. II. Texts in statistical science.
QA276 .D589 2001 519.5′35—dc21 2001047417

This book contains information obtained from authentic and highly regarded sources. Reprinted material is quoted with permission, and sources are indicated. A wide variety of references are listed.
Reasonable efforts have been made to publish reliable data and information, but the author and the publisher cannot assume responsibility for the validity of all materials or for the consequences of their use. Neither this book nor any part may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, microfilming, and recording, or by any information storage or retrieval system, without prior permission in writing from the publisher. The consent of CRC Press LLC does not extend to copying for general distribution, for promotion, for creating new works, or for resale. Specific permission must be obtained in writing from CRC Press LLC for such copying. Direct all inquiries to CRC Press LLC, 2000 N.W. Corporate Blvd., Boca Raton, Florida 33431. Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation, without intent to infringe. Visit the CRC Press Web site at www.crcpress.com © 2002 by Chapman & Hall/CRC No claim to original U.S. 
Government works.
International Standard Book Number 1-58488-165-8
Library of Congress Card Number 2001047417
Printed in the United States of America on acid-free paper.

Contents

Preface
1 Introduction
  1.1 Background
  1.2 Scope
  1.3 Notation
  1.4 Distributions related to the Normal distribution
  1.5 Quadratic forms
  1.6 Estimation
  1.7 Exercises
2 Model Fitting
  2.1 Introduction
  2.2 Examples
  2.3 Some principles of statistical modelling
  2.4 Notation and coding for explanatory variables
  2.5 Exercises
3 Exponential Family and Generalized Linear Models
  3.1 Introduction
  3.2 Exponential family of distributions
  3.3 Properties of distributions in the exponential family
  3.4 Generalized linear models
  3.5 Examples
  3.6 Exercises
4 Estimation
  4.1 Introduction
  4.2 Example: Failure times for pressure vessels
  4.3 Maximum likelihood estimation
  4.4 Poisson regression example
  4.5 Exercises
5 Inference
  5.1 Introduction
  5.2 Sampling distribution for score statistics
  5.3 Taylor series approximations
  5.4 Sampling distribution for maximum likelihood estimators
  5.5 Log-likelihood ratio statistic
  5.6 Sampling distribution for the deviance
  5.7 Hypothesis testing
  5.8 Exercises
6 Normal Linear Models
  6.1 Introduction
  6.2 Basic results
  6.3 Multiple linear regression
  6.4 Analysis of variance
  6.5 Analysis of covariance
  6.6 General linear models
  6.7 Exercises
7 Binary Variables and Logistic Regression
  7.1 Probability distributions
  7.2 Generalized linear models
  7.3 Dose response models
  7.4 General logistic regression model
  7.5 Goodness of fit statistics
  7.6 Residuals
  7.7 Other diagnostics
  7.8 Example: Senility and WAIS
  7.9 Exercises
8 Nominal and Ordinal Logistic Regression
  8.1 Introduction
  8.2 Multinomial distribution
  8.3 Nominal logistic regression
  8.4 Ordinal logistic regression
  8.5 General comments
  8.6 Exercises
9 Count Data, Poisson Regression and Log-Linear Models
  9.1 Introduction
  9.2 Poisson regression
  9.3 Examples of contingency tables
  9.4 Probability models for contingency tables
  9.5 Log-linear models
  9.6 Inference for log-linear models
  9.7 Numerical examples
  9.8 Remarks
  9.9 Exercises
10 Survival Analysis
  10.1 Introduction
  10.2 Survivor functions and hazard functions
  10.3 Empirical survivor function
  10.4 Estimation
  10.5 Inference
  10.6 Model checking
  10.7 Example: remission times
  10.8 Exercises
11 Clustered and Longitudinal Data
  11.1 Introduction
  11.2 Example: Recovery from stroke
  11.3 Repeated measures models for Normal data
  11.4 Repeated measures models for non-Normal data
  11.5 Multilevel models
  11.6 Stroke example continued
  11.7 Comments
  11.8 Exercises
Software
References

Preface

Statistical tools for analyzing data are developing rapidly, so that the 1990 edition of this book is now out of date. The original purpose of the book was to present a unified theoretical and conceptual framework for statistical modelling in a way that was accessible to undergraduate students and researchers in other fields. This new edition has been expanded to include nominal (or multinomial) and ordinal logistic regression, survival analysis and analysis of longitudinal and clustered data. Although these topics do not fall strictly within the definition of generalized linear models, the underlying principles and methods are very similar and their inclusion is consistent with the original purpose of the book.

The new edition relies on numerical methods more than the previous edition did. Some of the calculations can be performed with a spreadsheet while others require statistical software. There is an emphasis on graphical methods for exploratory data analysis, visualizing numerical optimization (for example, of the likelihood function) and plotting residuals to check the adequacy of models.

The data sets and outline solutions of the exercises are available on the publisher's website: www.crcpress.com/us/ElectronicProducts/downandup.asp?mscssid=

I am grateful to colleagues and students at the Universities of Queensland and Newcastle, Australia, for their helpful suggestions and comments about the material.
Annette Dobson

1 Introduction

1.1 Background

This book is designed to introduce the reader to generalized linear models; these provide a unifying framework for many commonly used statistical techniques. They also illustrate the ideas of statistical modelling.

The reader is assumed to have some familiarity with statistical principles and methods. In particular, understanding the concepts of estimation, sampling distributions and hypothesis testing is necessary. Experience in the use of t-tests, analysis of variance, simple linear regression and chi-squared tests of independence for two-dimensional contingency tables is assumed. In addition, some knowledge of matrix algebra and calculus is required. The reader will find it
Recommended publications
  • Mathematical Statistics
    Mathematical Statistics MAS 713, Chapter 1.2 (Ariel Neufeld)

    Previous subchapter: 1 What is statistics? 2 The statistical process 3 Population and samples 4 Random sampling

    This subchapter — Descriptive Statistics: 1.2.1 Introduction; 1.2.2 Types of variables; 1.2.3 Graphical representations; 1.2.4 Descriptive measures

    1.2.1 Introduction. As we said, statistics is the art of learning from data. However, statistical data, obtained from surveys, experiments, or any series of measurements, are often so numerous that they are virtually useless unless they are condensed; data should be presented in ways that facilitate their interpretation and subsequent analysis. Naturally enough, the aspect of statistics which deals with organising, describing and summarising data is called descriptive statistics.

    1.2.2 Types of variables. The range of available descriptive tools for a given data set depends on the type of the considered variable. There are essentially two types of variables: 1. categorical (or qualitative) variables: take a value that is one of several possible categories (no numerical meaning). Ex.: gender, hair color, field of study, political affiliation, status, ... 2. numerical (or quantitative)
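The descriptive measures this excerpt lists under 1.2.4 can be illustrated with a short, self-contained Python sketch; the sample values below are invented for illustration only:

```python
import statistics

# A small, invented numerical sample
data = [4, 8, 15, 16, 23, 42]

mean = statistics.mean(data)      # arithmetic mean
median = statistics.median(data)  # middle value (here: average of 15 and 16)
stdev = statistics.stdev(data)    # sample standard deviation

print(mean, median, round(stdev, 2))  # → 18 15.5 13.49
```

For a categorical variable, the analogous summary would be a frequency table (e.g. `collections.Counter`) rather than these numerical measures.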
  • This History of Modern Mathematical Statistics Retraces Their Development
    BOOK REVIEWS

    GORROOCHURN Prakash, 2016, Classic Topics on the History of Modern Mathematical Statistics: From Laplace to More Recent Times, Hoboken, NJ, John Wiley & Sons, Inc., 754 p.

    This history of modern mathematical statistics retraces their development from the "Laplacean revolution," as the author so rightly calls it (though the beginnings are to be found in Bayes' 1763 essay(1)), through the mid-twentieth century and Fisher's major contribution. Up to the nineteenth century the book covers the same ground as Stigler's history of statistics(2), though with notable differences (see infra). It then discusses developments through the first half of the twentieth century: Fisher's synthesis but also the renewal of Bayesian methods, which implied a return to Laplace.

    Part I offers an in-depth, chronological account of Laplace's approach to probability, with all the mathematical detail and deductions he drew from it. It begins with his first innovative articles and concludes with his philosophical synthesis showing that all fields of human knowledge are connected to the theory of probabilities. Here Gorroochurn raises a problem that Stigler does not, that of induction (pp. 102-113), a notion that gives us a better understanding of probability according to Laplace. The term induction has two meanings, the first put forward by Bacon(3) in 1620, the second by Hume(4) in 1748. Gorroochurn discusses only the second. For Bacon, induction meant discovering the principles of a system by studying its properties through observation and experimentation. For Hume, induction was mere enumeration and could not lead to certainty. Laplace followed Bacon: "The surest method which can guide us in the search for truth, consists in rising by induction from phenomena to laws and from laws to forces"(5).
  • Generalized Linear Models
    Generalized Linear Models

    A generalized linear model (GLM) consists of three parts.

    i) The first part is a random variable giving the conditional distribution of a response Y_i given the values of a set of covariates X_ij. In the original work on GLMs by Nelder and Wedderburn (1972) this random variable was a member of an exponential family, but later work has extended beyond this class of random variables.

    ii) The second part is a linear predictor, η_i = α + β_1 X_i1 + β_2 X_i2 + ··· + β_k X_ik.

    iii) The third part is a smooth and invertible link function g(·) which transforms the expected value of the response variable, μ_i = E(Y_i), and is equal to the linear predictor: g(μ_i) = η_i = α + β_1 X_i1 + β_2 X_i2 + ··· + β_k X_ik.

    As shown in Tables 15.1 and 15.2, both the general linear model that we have studied extensively and the logistic regression model from Chapter 14 are special cases of this model. One property of members of the exponential family of distributions is that the conditional variance of the response is a function of its mean, V(μ), and possibly a dispersion parameter φ. The expressions for the variance functions for common members of the exponential family are shown in Table 15.2. Also, for each distribution there is a so-called canonical link function, which simplifies some of the GLM calculations, which is also shown in Table 15.2.

    Estimation and Testing for GLMs

    Parameter estimation in GLMs is conducted by the method of maximum likelihood. As with logistic regression models from the last chapter, the generalization of the residual sum of squares from the general linear model is the residual deviance, D_m = 2(log L_s − log L_m), where L_m is the maximized likelihood for the model of interest, and L_s is the maximized likelihood for a saturated model, which has one parameter per observation and fits the data as well as possible.
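The residual deviance defined in this excerpt can be computed directly in the Poisson case, where 2(log L_s − log L_m) simplifies to 2 Σ [y_i log(y_i/μ_i) − (y_i − μ_i)]. A minimal plain-Python sketch, using invented counts and an intercept-only fit (so every fitted mean equals the sample average):

```python
import math

# Invented counts; intercept-only Poisson fit gives mu_i = ybar = 8 for all i
y = [2, 3, 6, 7, 8, 9, 10, 12, 15]
mu = [sum(y) / len(y)] * len(y)

def poisson_deviance(y, mu):
    # D = 2 * sum[ y_i*log(y_i/mu_i) - (y_i - mu_i) ], with 0*log(0) taken as 0
    d = 0.0
    for yi, mi in zip(y, mu):
        if yi > 0:
            d += yi * math.log(yi / mi)
        d -= yi - mi
    return 2.0 * d

print(round(poisson_deviance(y, mu), 3))  # → 18.421
```

For the saturated model (μ_i = y_i) the deviance is exactly zero, matching the definition of L_s as the best-fitting model.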
  • Applied Time Series Analysis
    Applied Time Series Analysis
    SS 2018, February 12, 2018
    Dr. Marcel Dettling, Institute for Data Analysis and Process Design, Zurich University of Applied Sciences, CH-8401 Winterthur

    Table of Contents
    1 Introduction 1
      1.1 Purpose 1
      1.2 Examples 2
      1.3 Goals in Time Series Analysis 8
    2 Mathematical Concepts 11
      2.1 Definition of a Time Series 11
      2.2 Stationarity 11
      2.3 Testing Stationarity 13
    3 Time Series in R 15
      3.1 Time Series Classes 15
      3.2 Dates and Times in R 17
      3.3 Data Import 21
    4 Descriptive Analysis 23
      4.1 Visualization 23
      4.2 Transformations 26
      4.3 Decomposition 29
      4.4 Autocorrelation 50
      4.5 Partial Autocorrelation 66
    5 Stationary Time Series Models 69
      5.1 White Noise 69
      5.2 Estimating the Conditional Mean 70
      5.3 Autoregressive Models 71
      5.4 Moving Average Models 85
      5.5 ARMA(p,q) Models 93
    6 SARIMA and GARCH Models 99
      6.1 ARIMA Models 99
      6.2 SARIMA Models 105
      6.3 ARCH/GARCH Models 109
    7 Time Series Regression 113
      7.1 What Is the Problem? 113
      7.2 Finding Correlated Errors 117
      7.3 Cochrane-Orcutt Method 124
      7.4 Generalized Least Squares 125
      7.5 Missing Predictor Variables 131
    8 Forecasting 137
      8.1 Stationary Time Series 138
      8.2 Series with Trend and Season 145
      8.3 Exponential Smoothing 152
    9 Multivariate Time Series Analysis 161
      9.1 Practical Example 161
      9.2 Cross Correlation 165
      9.3 Prewhitening 168
      9.4 Transfer Function Models 170
    10 Spectral Analysis 175
      10.1 Decomposing in the Frequency Domain 175
      10.2 The Spectrum 179
      10.3 Real World Example 186
    11 State Space Models 187
      11.1 State Space Formulation 187
      11.2 AR Processes with Measurement Noise 188
      11.3 Dynamic Linear Models 191

    1 Introduction
    1.1 Purpose
    Time series data, i.e.
  • Robust Bayesian General Linear Models ⁎ W.D
    www.elsevier.com/locate/ynimg
    NeuroImage 36 (2007) 661–671

    Robust Bayesian general linear models
    W.D. Penny, J. Kilner, and F. Blankenburg
    Wellcome Department of Imaging Neuroscience, University College London, 12 Queen Square, London WC1N 3BG, UK
    Received 22 September 2006; revised 20 November 2006; accepted 25 January 2007. Available online 7 May 2007.

    We describe a Bayesian learning algorithm for Robust General Linear Models (RGLMs). The noise is modeled as a Mixture of Gaussians rather than the usual single Gaussian. This allows different data points to be associated with different noise levels and effectively provides a robust estimation of regression coefficients. A variational inference framework is used to prevent overfitting and provides a model order selection criterion for noise model order. This allows the RGLM to default to the usual GLM when robustness is not required. The method is compared to other robust regression methods and applied to synthetic data and fMRI. © 2007 Elsevier Inc. All rights reserved.

    ... them from the data (Jung et al., 1999). This is, however, a non-automatic process and will typically require user intervention to disambiguate the discovered components. In fMRI, autoregressive (AR) modeling can be used to downweight the impact of periodic respiratory or cardiac noise sources (Penny et al., 2003). More recently, a number of approaches based on robust regression have been applied to imaging data (Wager et al., 2005; Diedrichsen and Shadmehr, 2005). These approaches relax the assumption underlying ordinary regression that the errors be normally (Wager et al., 2005) or identically (Diedrichsen and Shadmehr, 2005) distributed. In Wager et al.
  • Generalized Linear Models Outline for Today
    Generalized linear models

    Outline for today:
    • What is a generalized linear model
    • Linear predictors and link functions
    • Example: estimate a proportion
    • Analysis of deviance
    • Example: fit dose-response data using logistic regression
    • Example: fit count data using a log-linear model
    • Advantages and assumptions of glm
    • Quasi-likelihood models when there is excessive variance

    Review: what is a (general) linear model. A model of the following form:
    Y = β_0 + β_1 X_1 + β_2 X_2 + ... + error
    • Y is the response variable
    • The X's are the explanatory variables
    • The β's are the parameters of the linear equation
    • The errors are normally distributed with equal variance at all values of the X variables.
    • Uses least squares to fit model to data, estimate parameters
    • lm in R
    The predicted Y, symbolized here by μ, is modeled as μ = β_0 + β_1 X_1 + β_2 X_2 + ...

    What is a generalized linear model. A model whose predicted values are of the form g(μ) = β_0 + β_1 X_1 + β_2 X_2 + ...
    • The model still includes a linear predictor (to the right of "=")
    • g(μ) is the "link function"
    • Wide diversity of link functions accommodated
    • Non-normal distributions of errors OK (specified by link function)
    • Unequal error variances OK (specified by link function)
    • Uses maximum likelihood to estimate parameters
    • Uses log-likelihood ratio tests to test parameters
    • glm in R

    The two most common link functions:
    • Log, used for count data: η = log μ. The inverse function is μ = e^η.
    • Logistic or logit, used for binary data: η = log(μ / (1 − μ)). The inverse function is μ = e^η / (1 + e^η).
    In all cases log refers to natural logarithm (base e).
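The two link functions above and their inverses are one-liners; a quick plain-Python check (input values invented) confirms that each inverse undoes its link:

```python
import math

def logit(mu):
    # logit link for binary data: eta = log(mu / (1 - mu)), for 0 < mu < 1
    return math.log(mu / (1.0 - mu))

def inv_logit(eta):
    # inverse logit: mu = e^eta / (1 + e^eta)
    return math.exp(eta) / (1.0 + math.exp(eta))

def log_link(mu):
    # log link for count data: eta = log(mu), for mu > 0
    return math.log(mu)

def inv_log(eta):
    return math.exp(eta)

print(round(logit(0.8), 4))             # → 1.3863
print(round(inv_logit(logit(0.8)), 4))  # → 0.8
print(round(inv_log(log_link(5.0)), 4)) # → 5.0
```

Note that the logit maps (0, 1) onto the whole real line, which is what lets the linear predictor range freely while the fitted proportion stays in (0, 1).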
  • Mathematical Statistics the Sample Distribution of the Median
    Course Notes for Math 162: Mathematical Statistics
    The Sample Distribution of the Median
    Adam Merberg and Steven J. Miller
    February 15, 2008

    Abstract. We begin by introducing the concept of order statistics and finding the density of the rth order statistic of a sample. We then consider the special case of the density of the median and provide some examples. We conclude with some appendices that describe some of the techniques and background used.

    Contents
    1 Order Statistics
    2 The Sample Distribution of the Median
    3 Examples and Exercises
    A The Multinomial Distribution
    B Big-Oh Notation
    C Proof That With High Probability |X̃ − μ̃| is Small
    D Stirling's Approximation Formula for n!
    E Review of the exponential function

    1 Order Statistics

    Suppose that the random variables X_1, X_2, ..., X_n constitute a sample of size n from an infinite population with continuous density. Often it will be useful to reorder these random variables from smallest to largest. In reordering the variables, we will also rename them so that Y_1 is a random variable whose value is the smallest of the X_i, Y_2 is the next smallest, and so on, with Y_n the largest of the X_i. Y_r is called the rth order statistic of the sample. In considering order statistics, it is naturally convenient to know their probability density. We derive an expression for the distribution of the rth order statistic as in [MM].

    Theorem 1.1. For a random sample of size n from an infinite population having values x and density f(x), the probability density of the rth order statistic Y_r is given by

    g_r(y_r) = (n! / ((r − 1)!(n − r)!)) [∫_{−∞}^{y_r} f(x) dx]^{r−1} f(y_r) [∫_{y_r}^{∞} f(x) dx]^{n−r}.  (1.1)

    Proof.
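Theorem 1.1 is easy to check numerically in the Uniform(0, 1) case, where F(y) = y and f(y) = 1, so the rth order statistic has a Beta(r, n − r + 1) density. A plain-Python sketch (the sample size and integration grid here are arbitrary choices, not from the notes):

```python
import math

def order_stat_density_uniform(y, n, r):
    # Theorem 1.1 specialized to Uniform(0,1):
    # g_r(y) = n! / ((r-1)!(n-r)!) * y^(r-1) * (1-y)^(n-r)
    c = math.factorial(n) / (math.factorial(r - 1) * math.factorial(n - r))
    return c * y ** (r - 1) * (1.0 - y) ** (n - r)

# The density of the median of n = 5 uniforms should integrate to about 1
n, r = 5, 3
steps = 100_000
total = sum(order_stat_density_uniform((i + 0.5) / steps, n, r)
            for i in range(steps)) / steps
print(round(total, 4))  # → 1.0
```

The density is also symmetric about 1/2 for the median (r = 3, n = 5), as the theorem's form makes clear when r − 1 = n − r.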
  • Nonparametric Multivariate Kurtosis and Tailweight Measures
    Nonparametric Multivariate Kurtosis and Tailweight Measures

    Jin Wang, Northern Arizona University, and Robert Serfling, University of Texas at Dallas
    November 2004 (final preprint version, to appear in Journal of Nonparametric Statistics, 2005)
    Department of Mathematics and Statistics, Northern Arizona University, Flagstaff, Arizona 86011-5717, USA. Email: [email protected].
    Department of Mathematical Sciences, University of Texas at Dallas, Richardson, Texas 75083-0688, USA. Email: [email protected]. Website: www.utdallas.edu/∼serfling. Support by NSF Grant DMS-0103698 is gratefully acknowledged.

    Abstract
    For nonparametric exploration or description of a distribution, the treatment of location, spread, symmetry and skewness is followed by characterization of kurtosis. Classical moment-based kurtosis measures the dispersion of a distribution about its "shoulders". Here we consider quantile-based kurtosis measures. These are robust, are defined more widely, and discriminate better among shapes. A univariate quantile-based kurtosis measure of Groeneveld and Meeden (1984) is extended to the multivariate case by representing it as a transform of a dispersion functional. A family of such kurtosis measures defined for a given distribution and taken together comprises a real-valued "kurtosis functional", which has intuitive appeal as a convenient two-dimensional curve for description of the kurtosis of the distribution. Several multivariate distributions in any dimension may thus be compared with respect to their kurtosis in a single two-dimensional plot. Important properties of the new multivariate kurtosis measures are established. For example, for elliptically symmetric distributions, this measure determines the distribution within affine equivalence. Related tailweight measures, influence curves, and asymptotic behavior of sample versions are also discussed.
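The classical moment-based kurtosis that this abstract contrasts with its quantile-based measures is the fourth-moment ratio; a few lines of plain Python sketch it (this is the textbook measure only, not the Groeneveld-Meeden functional studied in the paper, and the samples are invented):

```python
def moment_kurtosis(xs):
    # Classical kurtosis: fourth central moment divided by squared variance
    n = len(xs)
    mu = sum(xs) / n
    m2 = sum((x - mu) ** 2 for x in xs) / n
    m4 = sum((x - mu) ** 4 for x in xs) / n
    return m4 / m2 ** 2

light_tails = [-1, -1, 0, 1, 1]   # mass concentrated near the "shoulders"
heavy_tails = [-3, -1, 0, 1, 3]   # more mass out in the tails
print(moment_kurtosis(light_tails) < moment_kurtosis(heavy_tails))  # → True
```

A single extreme observation can dominate the fourth moment, which is exactly the non-robustness that motivates the quantile-based alternatives the paper develops.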
  • Stat 714 Linear Statistical Models
    STAT 714 Linear Statistical Models
    Fall 2010, Lecture Notes
    Joshua M. Tebbs, Department of Statistics, The University of South Carolina

    Contents
    1 Examples of the General Linear Model 1
    2 The Linear Least Squares Problem 13
      2.1 Least squares estimation 13
      2.2 Geometric considerations 15
      2.3 Reparameterization 25
    3 Estimability and Least Squares Estimators 28
      3.1 Introduction 28
      3.2 Estimability 28
        3.2.1 One-way ANOVA 35
        3.2.2 Two-way crossed ANOVA with no interaction 37
        3.2.3 Two-way crossed ANOVA with interaction 39
      3.3 Reparameterization 40
      3.4 Forcing least squares solutions using linear constraints 46
    4 The Gauss-Markov Model 54
      4.1 Introduction 54
      4.2 The Gauss-Markov Theorem 55
      4.3 Estimation of σ² in the GM model 57
      4.4 Implications of model selection 60
        4.4.1 Underfitting (Misspecification) 60
        4.4.2 Overfitting 61
      4.5 The Aitken model and generalized least squares 63
    5 Distributional Theory 68
      5.1 Introduction 68
      5.2 Multivariate normal distribution 69
        5.2.1 Probability density function 69
        5.2.2 Moment generating functions 70
        5.2.3 Properties 72
        5.2.4 Less-than-full-rank normal distributions 73
        5.2.5 Independence results 74
        5.2.6 Conditional distributions 76
      5.3 Noncentral χ² distribution 77
      5.4 Noncentral F distribution 79
      5.5 Distributions of quadratic forms 81
      5.6 Independence of quadratic forms 85
      5.7 Cochran's Theorem
  • Why Distinguish Between Statistics and Mathematical Statistics – the Case of Swedish Academia
    Why distinguish between statistics and mathematical statistics – the case of Swedish academia

    Peter Guttorp (Department of Statistics, University of Washington, Seattle) and Georg Lindgren (Mathematical Statistics, Lund University, Lund)
    2017/10/11

    Abstract
    A separation between the academic subjects statistics and mathematical statistics has existed in Sweden almost as long as there have been statistics professors. The same distinction has not been maintained in other countries. Why has it been kept so for long in Sweden, and what consequences may it have had?

    In May 2015 it was 100 years since Mathematical Statistics was formally established as an academic discipline at a Swedish university where Statistics had existed since the turn of the century. We give an account of the debate in Lund and elsewhere about this division during the first decades after 1900 and present two of its leading personalities. The Lund University astronomer (and mathematical statistician) C.V.L. Charlier was a leading proponent for a position in mathematical statistics at the university. Charlier's adversary in the debate was Pontus Fahlbeck, professor in political science and statistics, who reserved the word statistics for "statistics as a social science". Charlier not only secured the first academic position in Sweden in mathematical statistics for his former Ph.D. student Sven Wicksell, but he also demonstrated that a mathematical statistician can be influential in matters of state, finance, as well as in different natural sciences. Fahlbeck saw mathematical statistics as a set of tools that sometimes could be useful in his brand of statistics. After a summary of the organisational, educational, and scientific growth of the statistical sciences in Sweden that has taken place during the last 50 years, we discuss what effects the Charlier-Fahlbeck divergence might have had on this development.
  • Iterative Approaches to Handling Heteroscedasticity with Partially Known Error Variances
    International Journal of Statistics and Probability; Vol. 8, No. 2; March 2019
    ISSN 1927-7032, E-ISSN 1927-7040
    Published by Canadian Center of Science and Education

    Iterative Approaches to Handling Heteroscedasticity With Partially Known Error Variances

    Morteza Marzjarani
    Correspondence: Morteza Marzjarani, National Marine Fisheries Service, Southeast Fisheries Science Center, Galveston Laboratory, 4700 Avenue U, Galveston, Texas 77551, USA.
    Received: January 30, 2019; Accepted: February 20, 2019; Online Published: February 22, 2019
    doi:10.5539/ijsp.v8n2p159 URL: https://doi.org/10.5539/ijsp.v8n2p159

    Abstract
    Heteroscedasticity plays an important role in data analysis. In this article, this issue along with a few different approaches for handling heteroscedasticity are presented. First, an iteratively reweighted least squares (IRLS) approach and an iterative feasible generalized least squares (IFGLS) approach are deployed and proper weights for reducing heteroscedasticity are determined. Next, a new approach for handling heteroscedasticity is introduced. In this approach, through fitting a multiple linear regression (MLR) model or a general linear model (GLM) to a sufficiently large data set, the data are divided into two parts through the inspection of the residuals based on the results of testing for heteroscedasticity, or via simulations. The first part contains the records where the absolute values of the residuals could be assumed small enough that heteroscedasticity would be ignorable. Under this assumption, the error variances are small and close to their neighboring points. Such error variances could be assumed known (but not necessarily equal). The second or remaining portion of the data is categorized as heteroscedastic. Through real data sets, it is concluded that this approach reduces the number of unusual (such as influential) data points suggested for further inspection and, more importantly, it lowers the root MSE (RMSE), resulting in a more robust set of parameter estimates.
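For a single pass with known error variances, the weighting idea behind the IRLS/IFGLS approaches in this abstract reduces to ordinary weighted least squares. A minimal plain-Python sketch for a straight-line model (all numbers invented; the closed-form normal equations below are the standard WLS formulas, not taken from the article):

```python
def wls_fit(x, y, var):
    # Weighted least squares for y_i = a + b*x_i + e_i with Var(e_i) = var[i];
    # weights w_i = 1/var_i downweight the noisier observations.
    w = [1.0 / v for v in var]
    sw = sum(w)
    swx = sum(wi * xi for wi, xi in zip(w, x))
    swy = sum(wi * yi for wi, yi in zip(w, y))
    swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    b = (sw * swxy - swx * swy) / (sw * swxx - swx ** 2)
    a = (swy - b * swx) / sw
    return a, b

x = [1, 2, 3, 4, 5]
y = [5, 8, 11, 14, 17]            # lies exactly on y = 2 + 3x
var = [0.5, 0.5, 1.0, 2.0, 4.0]   # heteroscedastic: variance grows with x
a, b = wls_fit(x, y, var)
print(round(a, 6), round(b, 6))  # → 2.0 3.0
```

An iterative scheme of the kind the article describes would re-estimate `var` from the residuals of each fit and repeat until the coefficients stabilize.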
  • Bayes Estimates for the Linear Model
    Bayes Estimates for the Linear Model
    By D. V. LINDLEY AND A. F. M. SMITH
    University College, London
    [Read before the ROYAL STATISTICAL SOCIETY at a meeting organized by the RESEARCH SECTION on Wednesday, December 8th, 1971, Mr M. J. R. HEALY in the Chair]

    SUMMARY
    The usual linear statistical model is reanalyzed using Bayesian methods and the concept of exchangeability. The general method is illustrated by applications to two-factor experimental designs and multiple regression.
    Keywords: LINEAR MODEL; LEAST SQUARES; BAYES ESTIMATES; EXCHANGEABILITY; ADMISSIBILITY; TWO-FACTOR EXPERIMENTAL DESIGN; MULTIPLE REGRESSION; RIDGE REGRESSION; MATRIX INVERSION.

    INTRODUCTION
    Attention is confined in this paper to the linear model, E(y) = Aθ, where y is a vector of observations, A a known design matrix and θ a vector of unknown parameters. The usual estimate of θ employed in this situation is that derived by the method of least squares. We argue that it is typically true that there is available prior information about the parameters and that this may be exploited to find improved, and sometimes substantially improved, estimates. In this paper we explore a particular form of prior information based on de Finetti's (1964) important idea of exchangeability. The argument is entirely within the Bayesian framework. Recently there has been much discussion of the respective merits of Bayesian and non-Bayesian approaches to statistics: we cite, for example, the paper by Cornfield (1969) and its ensuing discussion. We do not feel that it is necessary or desirable to add to this type of literature, and since we know of no reasoned argument against the Bayesian position we have adopted it here.
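The paper's point that prior information can improve on least squares is already visible in the simplest special case: a single normal mean with a normal prior, where the Bayes estimate is a precision-weighted average of the sample mean and the prior mean. This toy sketch is illustrative only (it is not the paper's exchangeable two-factor analysis, and the numbers are invented):

```python
def bayes_normal_mean(y, sigma2, prior_mean, prior_var):
    # Posterior mean of theta for y_i ~ N(theta, sigma2), theta ~ N(prior_mean, prior_var):
    # a precision-weighted average of the sample mean and the prior mean.
    n = len(y)
    ybar = sum(y) / n
    data_prec = n / sigma2
    prior_prec = 1.0 / prior_var
    return (data_prec * ybar + prior_prec * prior_mean) / (data_prec + prior_prec)

y = [4.0, 6.0, 8.0]  # sample mean 6.0
print(bayes_normal_mean(y, 3.0, 0.0, 1.0))  # → 3.0 (informative prior shrinks toward 0)
print(bayes_normal_mean(y, 3.0, 0.0, 1e9))  # ≈ 6.0 (vague prior recovers least squares)
```

The shrinkage toward the prior mean is the same mechanism that, in the paper's exchangeable setting, pulls individual group estimates toward the overall mean, and it underlies the connection to ridge regression noted in the keywords.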