Lecture Notes on Statistical Theory

Ryan Martin
Department of Mathematics, Statistics, and Computer Science
University of Illinois at Chicago
www.math.uic.edu/~rgmartin

January 8, 2015

These notes are meant to supplement the lectures for Stat 411 at UIC given by the author. The course roughly follows the text by Hogg, McKean, and Craig, Introduction to Mathematical Statistics, 7th edition, 2012, henceforth referred to as HMC. The author makes no guarantees that these notes are free of typos or other, more serious errors.

Contents

1 Statistics and Sampling Distributions
  1.1 Introduction
  1.2 Model specification
  1.3 Two kinds of inference problems
    1.3.1 Point estimation
    1.3.2 Hypothesis testing
  1.4 Statistics
  1.5 Sampling distributions
    1.5.1 Basics
    1.5.2 Asymptotic results
    1.5.3 Two numerical approximations
  1.6 Appendix
    1.6.1 R code for Monte Carlo simulation in Example 1.7
    1.6.2 R code for bootstrap calculation in Example 1.8

2 Point Estimation Basics
  2.1 Introduction
  2.2 Notation and terminology
  2.3 Properties of estimators
    2.3.1 Unbiasedness
    2.3.2 Consistency
    2.3.3 Mean-square error
  2.4 Where do estimators come from?

3 Likelihood and Maximum Likelihood Estimation
  3.1 Introduction
  3.2 Likelihood
  3.3 Maximum likelihood estimators (MLEs)
  3.4 Basic properties
    3.4.1 Invariance
    3.4.2 Consistency
  3.5 Fisher information and the Cramér–Rao bound
  3.6 Efficiency and asymptotic normality
  3.7 Multi-parameter cases
  3.8 MLE computation
    3.8.1 Newton's method
    3.8.2 Estimation of the Fisher information
    3.8.3 An aside: one-step estimators
    3.8.4 Remarks
  3.9 Confidence intervals
  3.10 Appendix
    3.10.1 R code implementing Newton's method
    3.10.2 R code for Example 3.4
    3.10.3 R code for Example 3.5
    3.10.4 R code for Example 3.6
    3.10.5 R code for Example 3.7
    3.10.6 R code for Example 3.8
    3.10.7 R code for Example 3.9
    3.10.8 R code for Example 3.10
    3.10.9 Interchanging derivatives and sums/integrals

4 Sufficiency and Minimum Variance Estimation
  4.1 Introduction
  4.2 Sufficiency
    4.2.1 Intuition
    4.2.2 Definition
    4.2.3 Neyman–Fisher factorization theorem
  4.3 Minimum variance unbiased estimators
  4.4 Rao–Blackwell theorem
  4.5 Completeness and Lehmann–Scheffé theorem
  4.6 Exponential families
  4.7 Multi-parameter cases
  4.8 Minimal sufficiency and ancillarity
  4.9 Appendix
    4.9.1 Rao–Blackwell as a complete-class theorem
    4.9.2 Proof of Lehmann–Scheffé theorem
    4.9.3 Connection between sufficiency and conditioning

5 Hypothesis Testing
  5.1 Introduction
  5.2 Motivation and setup
  5.3 Basics
    5.3.1 Definitions
    5.3.2 Examples
    5.3.3 Remarks
    5.3.4 P-values
  5.4 Most powerful tests
    5.4.1 Setup
    5.4.2 Neyman–Pearson lemma
    5.4.3 Uniformly most powerful tests
  5.5 Likelihood ratio tests
    5.5.1 Motivation and setup
    5.5.2 One-parameter problems
    5.5.3 Multi-parameter problems
  5.6 Likelihood ratio confidence intervals
  5.7 Appendix
    5.7.1 R code for Example 5.3
    5.7.2 Proof of Neyman–Pearson lemma
    5.7.3 Randomized tests

6 Bayesian Statistics
  6.1 Introduction
  6.2 Mechanics of Bayesian analysis
    6.2.1 Ingredients
    6.2.2 Bayes theorem and the posterior distribution
    6.2.3 Bayesian inference
    6.2.4 Marginalization
  6.3 Choice of prior
    6.3.1 Elicitation from experts
    6.3.2 Convenient priors
    6.3.3 Non-informative priors
  6.4 Other important points
    6.4.1 Hierarchical models
    6.4.2 Complete-class theorems
    6.4.3 Computation
    6.4.4 Asymptotic theory

7 What Else is There to Learn?
  7.1 Introduction
  7.2 Sampling and experimental design
  7.3 Non-iid models
  7.4 High-dimensional models
  7.5 Nonparametric models
  7.6 Advanced asymptotic theory
  7.7 Computational methods
  7.8 Foundations of statistics

Chapter 1
Statistics and Sampling Distributions

1.1 Introduction

Statistics is closely related to probability theory, but the two fields have entirely different goals. Recall, from Stat 401, that a typical probability problem starts with some assumptions about the distribution of a random variable (e.g., that it's binomial), and the objective is to derive some properties (probabilities, expected values, etc.) of said random variable based on the stated assumptions. The statistics problem goes almost completely the other way around. Indeed, in statistics, a sample from a given population is observed, and the goal is to learn something about that population based on the sample. In other ...
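To make the contrast concrete, here is a minimal R sketch (R is the language used in the appendices of these notes; the binomial setup and all numbers below are illustrative assumptions, not taken from the notes):

# Probability direction: the distribution is fully specified, say
# X ~ Binomial(n = 10, p = 0.5), and we derive its properties.
n <- 10; p <- 0.5
pbinom(3, size = n, prob = p)   # a derived probability, P(X <= 3)
n * p                           # a derived expected value, E(X) = n*p

# Statistics direction: we observe a sample whose success probability is
# unknown, and use the data to learn about that population quantity.
x <- rbinom(25, size = n, prob = 0.3)   # 25 "observed" counts (simulated here)
mean(x) / n                             # data-based estimate of the unknown p

Re-running the last two lines gives a slightly different estimate each time; understanding that variability is exactly the subject of sampling distributions taken up later in this chapter.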