Chapter 4 Parameter Estimation
Recommended publications
A Recursive Formula for Moments of a Binomial Distribution (Árpád Bényi and Saverio M. Manago)
A Recursive Formula for Moments of a Binomial Distribution. Árpád Bényi ([email protected]), University of Massachusetts, Amherst, MA 01003, and Saverio M. Manago ([email protected]), Naval Postgraduate School, Monterey, CA 93943. While teaching a course in probability and statistics, one of the authors came across an apparently simple question about the computation of higher order moments of a random variable. The topic of moments of higher order is rarely emphasized when teaching a statistics course. The textbooks we came across in our classes, for example, treat this subject only briefly; see [3, pp. 265–267], [4, pp. 184–187], also [2, p. 206]. Most of the examples given in these books stop at the second moment, which of course suffices if one is only interested in finding, say, the dispersion (or variance) of a random variable X, D^2(X) = M_2(X) − M(X)^2. Nevertheless, moments of order higher than 2 are relevant in many classical statistical tests when one assumes conditions of normality. These assumptions may be checked by examining the skewness or kurtosis of a probability distribution function. The skewness, or the first shape parameter, corresponds to the third moment about the mean. It describes the symmetry of the tails of a probability distribution. The kurtosis, also known as the second shape parameter, corresponds to the fourth moment about the mean and measures the relative peakedness or flatness of a distribution. Significant skewness or kurtosis indicates that the data is not normal. However, we arrived at higher order moments unintentionally.
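To make the shape parameters above concrete, here is a minimal sketch (not from the paper; the binomial sample and helper function are my own illustration) that estimates the variance, skewness, and kurtosis of a sample from its central moments.

```python
# Illustrative sketch: sample central moments, skewness, and kurtosis.
import numpy as np

rng = np.random.default_rng(0)
x = rng.binomial(n=20, p=0.3, size=10_000)       # any sample would do

def central_moment(x, k):
    """k-th sample moment about the mean."""
    return np.mean((x - x.mean()) ** k)

m2 = central_moment(x, 2)                        # second central moment, i.e. the variance D^2(X)
skewness = central_moment(x, 3) / m2 ** 1.5      # first shape parameter
kurtosis = central_moment(x, 4) / m2 ** 2        # second shape parameter (about 3 if normal)

print(f"variance={m2:.3f}  skewness={skewness:.3f}  kurtosis={kurtosis:.3f}")
```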
Phase Transition Unbiased Estimation in High Dimensional Settings (arXiv)
Phase Transition Unbiased Estimation in High Dimensional Settings. Stéphane Guerrier, Mucyo Karemera, Samuel Orso & Maria-Pia Victoria-Feser, Research Center for Statistics, University of Geneva. Abstract: An important challenge in statistical analysis concerns the control of the finite sample bias of estimators. This problem is magnified in high dimensional settings where the number of variables p diverges with the sample size n. However, it is difficult to establish whether an estimator θ̂ of θ_0 is unbiased, and the asymptotic order of E[θ̂] − θ_0 is commonly used instead. We introduce a new property to assess the bias, called phase transition unbiasedness, which is weaker than unbiasedness but stronger than asymptotic results. An estimator satisfying this property is such that ‖E[θ̂] − θ_0‖_2 = 0 for all n greater than a finite sample size n^*. We propose a phase transition unbiased estimator by matching an initial estimator computed on the sample and on simulated data. It is computed using an algorithm which is shown to converge exponentially fast. The initial estimator is not required to be consistent and thus may be conveniently chosen for computational efficiency or for other properties. We demonstrate the consistency and the limiting distribution of the estimator in high dimension. Finally, we develop new estimators for logistic regression models, with and without random effects, that enjoy additional properties such as robustness to data contamination and to the problem of separability. Keywords: Finite sample bias, Iterative bootstrap, Two-step estimators, Indirect inference, Robust estimation, Logistic regression. (arXiv:1907.11541v3 [math.ST], 1 Nov 2019.) 1. Introduction. An important challenge in statistical analysis concerns the control of the (finite sample) bias of estimators.
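The matching idea behind the proposed estimator can be sketched in a toy one-dimensional problem. The code below is an illustrative reading of the iterative-bootstrap step, not the authors' implementation; the variance example, simulation sizes, and function names are assumptions made for the sketch.

```python
# Toy sketch of bias correction by matching an initial estimator on the data
# with its average over data simulated under the current parameter value.
import numpy as np

rng = np.random.default_rng(1)

def initial_estimator(x):
    return np.var(x)                      # biased "divide by n" variance estimator

def iterative_matching(x, n_sim=200, n_iter=50):
    pi_data = initial_estimator(x)        # initial estimate on the observed sample
    theta = pi_data
    n = len(x)
    for _ in range(n_iter):
        sims = rng.normal(0.0, np.sqrt(theta), size=(n_sim, n))
        pi_sim = np.mean([initial_estimator(s) for s in sims])
        theta += pi_data - pi_sim         # move theta until the two estimates match
    return theta

x = rng.normal(0.0, 2.0, size=20)
print("initial:", round(initial_estimator(x), 3), "matched:", round(iterative_matching(x), 3))
```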
Bias, Mean-Square Error, Relative Efficiency
3 Evaluating the Goodness of an Estimator: Bias, Mean-Square Error, Relative Efficiency. Consider a population parameter θ for which estimation is desired. For example, θ could be the population mean (traditionally called µ) or the population variance (traditionally called σ^2). Or it might be some other parameter of interest such as the population median, population mode, population standard deviation, population minimum, population maximum, population range, population kurtosis, or population skewness. As previously mentioned, we will regard parameters as numerical characteristics of the population of interest; as such, a parameter will be a fixed number, albeit unknown. In Stat 252, we will assume that our population has a distribution whose density function depends on the parameter of interest. Most of the examples that we will consider in Stat 252 will involve continuous distributions. Definition 3.1. An estimator θ̂ is a statistic (that is, it is a random variable) which after the experiment has been conducted and the data collected will be used to estimate θ. Since it is true that any statistic can be an estimator, you might ask why we introduce yet another word into our statistical vocabulary. Well, the answer is quite simple, really. When we use the word estimator to describe a particular statistic, we already have a statistical estimation problem in mind. For example, if θ is the population mean, then a natural estimator of θ is the sample mean. If θ is the population variance, then a natural estimator of θ is the sample variance. More specifically, suppose that Y_1, ..., Y_n are a random sample from a population whose distribution depends on the parameter θ. The following estimators occur frequently enough in practice that they have special notations.
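A short simulation makes the bias and mean-square error criteria discussed here tangible. This is an illustrative sketch, not part of the notes; it compares the usual unbiased sample variance with the divide-by-n version.

```python
# Illustrative simulation: bias and MSE of two variance estimators.
import numpy as np

rng = np.random.default_rng(2)
true_var, n, reps = 4.0, 10, 100_000

samples = rng.normal(0.0, np.sqrt(true_var), size=(reps, n))
s2_unbiased = samples.var(axis=1, ddof=1)     # divides by n - 1
s2_mle = samples.var(axis=1, ddof=0)          # divides by n (biased downward)

for name, est in [("divide by n-1", s2_unbiased), ("divide by n", s2_mle)]:
    bias = est.mean() - true_var
    mse = np.mean((est - true_var) ** 2)
    print(f"{name}: bias = {bias:+.4f}, MSE = {mse:.4f}")
```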
Section 7 Testing Hypotheses About Parameters of Normal Distribution. T-Tests and F-Tests
Section 7. Testing hypotheses about parameters of the normal distribution. T-tests and F-tests. We will postpone a more systematic approach to hypothesis testing until the following lectures; in this lecture we describe, in an ad hoc way, T-tests and F-tests about the parameters of the normal distribution, since they are based on ideas very similar to confidence intervals for parameters of the normal distribution, the topic we have just covered. Suppose that we are given an i.i.d. sample from a normal distribution N(µ, σ^2) with some unknown parameters µ and σ^2. We will need to decide between two hypotheses about these unknown parameters: a null hypothesis H_0 and an alternative hypothesis H_1. The hypotheses H_0 and H_1 will be one of the following pairs: H_0: µ = µ_0 vs. H_1: µ ≠ µ_0; H_0: µ ≥ µ_0 vs. H_1: µ < µ_0; H_0: µ ≤ µ_0 vs. H_1: µ > µ_0, where µ_0 is a given 'hypothesized' value of the parameter. We will also consider similar hypotheses about the parameter σ^2. We want to construct a decision rule δ: 𝒳^n → {H_0, H_1} that, given an i.i.d. sample (X_1, ..., X_n) ∈ 𝒳^n, either accepts H_0 or rejects H_0 (accepts H_1). The null hypothesis is usually the 'main' hypothesis in the sense that it is expected or presumed to be true, and we need a lot of evidence to the contrary to reject it. To quantify this, we pick a parameter α ∈ [0, 1], called the level of significance, and make sure that the decision rule δ rejects H_0 when H_0 is actually true with probability at most α, i.e.
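As a concrete companion to the decision rules above, the following sketch (my own illustration, not the lecture notes' code) carries out the two-sided T-test of H_0: µ = µ_0 at level α using the usual t statistic.

```python
# Illustrative two-sided T-test of H0: mu = mu0 at significance level alpha.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.normal(loc=5.4, scale=2.0, size=25)   # i.i.d. sample, mu and sigma^2 unknown
mu0, alpha = 5.0, 0.05

t_stat = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(len(x)))
t_crit = stats.t.ppf(1 - alpha / 2, df=len(x) - 1)

decision = "reject H0" if abs(t_stat) > t_crit else "accept H0"
print(f"T = {t_stat:.3f}, critical value = {t_crit:.3f} -> {decision}")
```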
Weak Instruments and Finite-Sample Bias
7 Weak instruments and finite-sample bias. In this chapter, we consider the effect of weak instruments on instrumental variable (IV) analyses. Weak instruments, which were introduced in Section 4.5.2, are those that do not explain a large proportion of the variation in the exposure, and so the statistical association between the IV and the exposure is not strong. This is of particular relevance in Mendelian randomization studies since the associations of genetic variants with exposures of interest are often weak. This chapter focuses on the impact of weak instruments on the bias and coverage of IV estimates. 7.1 Introduction. Although IV techniques can be used to give asymptotically unbiased estimates of causal effects in the presence of confounding, these estimates suffer from bias when evaluated in finite samples [Nelson and Startz, 1990]. A weak instrument (or a weak IV) is still a valid IV, in that it satisfies the IV assumptions, and estimates using the IV with an infinite sample size will be unbiased; but for any finite sample size, the average value of the IV estimator will be biased. This bias, known as weak instrument bias, is towards the observational confounded estimate. Its magnitude depends on the strength of association between the IV and the exposure, which is measured by the F statistic in the regression of the exposure on the IV [Bound et al., 1995]. In this chapter, we assume the context of 'one-sample' Mendelian randomization, in which evidence on the genetic variant, exposure, and outcome are taken on the same set of individuals, rather than subsample (Section 8.5.2) or two-sample (Section 9.8.2) Mendelian randomization, in which genetic associations with the exposure and outcome are estimated in different sets of individuals (overlapping sets in subsample, non-overlapping sets in two-sample Mendelian randomization).
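The first-stage F statistic mentioned above can be computed directly from a regression of the exposure on the IV. The sketch below is illustrative (the simulated genetic variant, effect size, and variable names are assumptions, not data from the book) and flags the conventional F < 10 rule of thumb for weak instruments.

```python
# Illustrative first-stage regression of the exposure on the IV and its F statistic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 500
g = rng.binomial(2, 0.3, size=n).astype(float)     # genetic variant used as the IV
x = 0.1 * g + rng.normal(size=n)                    # exposure, weakly associated with g

X = np.column_stack([np.ones(n), g])                # intercept + IV
beta = np.linalg.lstsq(X, x, rcond=None)[0]
resid = x - X @ beta
r2 = 1.0 - resid @ resid / np.sum((x - x.mean()) ** 2)
f_stat = r2 / (1.0 - r2) * (n - 2)                  # F with 1 and n - 2 degrees of freedom
p_val = stats.f.sf(f_stat, 1, n - 2)

print(f"first-stage F = {f_stat:.2f} (p = {p_val:.3g}); F < 10 is the usual weak-IV flag")
```

With a few hundred individuals and a variant explaining well under 1% of the variance in the exposure, the F statistic typically lands far below 10, which is exactly the weak-instrument regime this chapter analyzes.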
STAT 830 the Basics of Nonparametric Models The
STAT 830: The basics of nonparametric models. The Empirical Distribution Function – EDF. The most common interpretation of probability is that the probability of an event is the long run relative frequency of that event when the basic experiment is repeated over and over independently. So, for instance, if X is a random variable then P(X ≤ x) should be the fraction of X values which turn out to be no more than x in a long sequence of trials. In general an empirical probability or expected value is just such a fraction or average computed from the data. To make this precise, suppose we have a sample X_1, ..., X_n of iid real valued random variables. Then we make the following definitions: Definition: The empirical distribution function, or EDF, is F̂_n(x) = (1/n) Σ_{i=1}^n 1(X_i ≤ x). This is a cumulative distribution function. It is an estimate of F, the cdf of the Xs. People also speak of the empirical distribution of the sample: P̂(A) = (1/n) Σ_{i=1}^n 1(X_i ∈ A). This is the probability distribution whose cdf is F̂_n. Now we consider the qualities of F̂_n as an estimate, the standard error of the estimate, the estimated standard error, confidence intervals, simultaneous confidence intervals and so on. To begin with we describe the best known summaries of the quality of an estimator: bias, variance, mean squared error and root mean squared error. Bias, variance, MSE and RMSE. There are many ways to judge the quality of estimates of a parameter φ; all of them focus on the distribution of the estimation error φ̂ − φ.
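A direct implementation of the EDF definition above takes only a few lines. This is an illustrative sketch, not course code; the sample and function names are mine.

```python
# Illustrative implementation of the empirical distribution function.
import numpy as np

def edf(sample):
    """Return F_hat_n with F_hat_n(x) = (1/n) * #{i : X_i <= x}."""
    sample = np.asarray(sample)
    return lambda x: np.mean(sample <= x)

rng = np.random.default_rng(5)
xs = rng.normal(size=100)
F_hat = edf(xs)
print(F_hat(0.0))          # close to the true value F(0) = 0.5 for N(0, 1) data
```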
11. Parameter Estimation
11. Parameter Estimation. Chris Piech and Mehran Sahami, May 2017. We have learned many different distributions for random variables and all of those distributions had parameters: the numbers that you provide as input when you define a random variable. So far when we were working with random variables, we either were explicitly told the values of the parameters, or, we could divine the values by understanding the process that was generating the random variables. What if we don't know the values of the parameters and we can't estimate them from our own expert knowledge? What if instead of knowing the random variables, we have a lot of examples of data generated with the same underlying distribution? In this chapter we are going to learn formal ways of estimating parameters from data. These ideas are critical for artificial intelligence. Almost all modern machine learning algorithms work like this: (1) specify a probabilistic model that has parameters. (2) Learn the value of those parameters from data. Parameters. Before we dive into parameter estimation, first let's revisit the concept of parameters. Given a model, the parameters are the numbers that yield the actual distribution. In the case of a Bernoulli random variable, the single parameter was the value p. In the case of a Uniform random variable, the parameters are the a and b values that define the min and max value. Here is a list of random variables and the corresponding parameters. From now on, we are going to use the notation θ to be a vector of all the parameters:
Distribution and parameters:
Bernoulli(p): θ = p
Poisson(λ): θ = λ
Uniform(a, b): θ = (a, b)
Normal(µ, σ^2): θ = (µ, σ^2)
Y = mX + b: θ = (m, b)
In the real world often you don't know the "true" parameters, but you get to observe data.
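As a preview of estimating θ from observed data, the following sketch (my own illustration, with made-up data) recovers the parameters of two of the distributions in the list above by the obvious sample-based estimates.

```python
# Illustrative parameter estimation from (simulated) data for two distributions.
import numpy as np

rng = np.random.default_rng(6)

# Bernoulli(p): theta = p, estimated by the proportion of ones.
flips = rng.binomial(1, 0.37, size=5_000)
p_hat = flips.mean()

# Normal(mu, sigma^2): theta = (mu, sigma^2), estimated by sample mean and variance.
heights = rng.normal(170.0, 8.0, size=5_000)
mu_hat, sigma2_hat = heights.mean(), heights.var(ddof=1)

print(f"p_hat = {p_hat:.3f}, mu_hat = {mu_hat:.2f}, sigma2_hat = {sigma2_hat:.2f}")
```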
Bias in Parametric Estimation: Reduction and Useful Side-Effects
Bias in parametric estimation: reduction and useful side-effects. Ioannis Kosmidis, Department of Statistical Science, University College London. August 30, 2018. Abstract: The bias of an estimator is defined as the difference of its expected value from the parameter to be estimated, where the expectation is with respect to the model. Loosely speaking, small bias reflects the desire that if an experiment is repeated indefinitely then the average of all the resultant estimates will be close to the parameter value that is estimated. The current paper is a review of the still-expanding repository of methods that have been developed to reduce bias in the estimation of parametric models. The review provides a unifying framework where all those methods are seen as attempts to approximate the solution of a simple estimating equation. Of particular focus is the maximum likelihood estimator, which despite being asymptotically unbiased under the usual regularity conditions, has finite-sample bias that can result in significant loss of performance of standard inferential procedures. An informal comparison of the methods is made revealing some useful practical side-effects in the estimation of popular models in practice including: i) shrinkage of the estimators in binomial and multinomial regression models that guarantees finiteness even in cases of data separation where the maximum likelihood estimator is infinite, and ii) inferential benefits for models that require the estimation of dispersion or precision parameters. Keywords: jackknife/bootstrap, indirect inference, penalized likelihood, infinite estimates, separation in models with categorical responses. 1 Impact of bias in estimation. By its definition, bias necessarily depends on how the model is written in terms of its parameters and this dependence makes it not a strong statistical principle in terms of evaluating the performance of estimators; for example, unbiasedness of the familiar sample variance S^2 as an estimator of σ^2 does not deliver an unbiased estimator of σ itself.
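One classical bias-reduction device named in the keywords, the jackknife, is easy to sketch. The code below is an illustration under my own notation, not the paper's framework or estimating-equation machinery.

```python
# Illustrative jackknife bias correction of a plug-in estimator.
import numpy as np

def jackknife_corrected(x, estimator):
    """Return (plain estimate, jackknife bias-corrected estimate)."""
    x = np.asarray(x)
    n = len(x)
    theta_hat = estimator(x)
    loo = np.array([estimator(np.delete(x, i)) for i in range(n)])   # leave-one-out estimates
    bias_hat = (n - 1) * (loo.mean() - theta_hat)
    return theta_hat, theta_hat - bias_hat

rng = np.random.default_rng(7)
x = rng.exponential(scale=2.0, size=30)
plain, corrected = jackknife_corrected(x, lambda s: np.var(s))   # "divide by n" variance
print(f"plain = {plain:.3f}, jackknife-corrected = {corrected:.3f}")
```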
A Widely Applicable Bayesian Information Criterion
Journal of Machine Learning Research 14 (2013) 867–897. Submitted 8/12; Revised 2/13; Published 3/13. A Widely Applicable Bayesian Information Criterion. Sumio Watanabe ([email protected]), Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Mailbox G5-19, 4259 Nagatsuta, Midori-ku, Yokohama, Japan 226-8502. Editor: Manfred Opper. Abstract: A statistical model or a learning machine is called regular if the map taking a parameter to a probability distribution is one-to-one and if its Fisher information matrix is always positive definite. If otherwise, it is called singular. In regular statistical models, the Bayes free energy, which is defined by the minus logarithm of Bayes marginal likelihood, can be asymptotically approximated by the Schwarz Bayes information criterion (BIC), whereas in singular models such approximation does not hold. Recently, it was proved that the Bayes free energy of a singular model is asymptotically given by a generalized formula using a birational invariant, the real log canonical threshold (RLCT), instead of half the number of parameters in BIC. Theoretical values of RLCTs in several statistical models are now being discovered based on algebraic geometrical methodology. However, it has been difficult to estimate the Bayes free energy using only training samples, because an RLCT depends on an unknown true distribution. In the present paper, we define a widely applicable Bayesian information criterion (WBIC) by the average log likelihood function over the posterior distribution with the inverse temperature 1/log n, where n is the number of training samples. We mathematically prove that WBIC has the same asymptotic expansion as the Bayes free energy, even if a statistical model is singular for or unrealizable by a statistical model.
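The WBIC definition above can be checked numerically on a simple regular model. The sketch below is my own illustration (a one-parameter normal-mean model with a uniform grid prior, using the free-energy sign convention in which WBIC is the tempered-posterior average of the negative log likelihood); it is not code from the paper.

```python
# Rough numerical sketch of WBIC versus the exact Bayes free energy on a grid.
import numpy as np

rng = np.random.default_rng(8)
x = rng.normal(1.0, 1.0, size=50)           # data from N(mu = 1, sigma = 1), sigma known
n = len(x)
beta = 1.0 / np.log(n)                      # the inverse temperature 1 / log n

mu_grid = np.linspace(-5.0, 5.0, 2001)      # grid over the single parameter mu
prior = np.full(mu_grid.size, 1.0 / mu_grid.size)          # uniform prior on the grid
loglik = np.array([-0.5 * np.sum((x - m) ** 2) - 0.5 * n * np.log(2 * np.pi)
                   for m in mu_grid])

# Exact Bayes free energy on the grid: F = -log(marginal likelihood).
F = -(np.log(np.sum(prior * np.exp(loglik - loglik.max()))) + loglik.max())

# WBIC: posterior average of the negative log likelihood, with the posterior
# tempered at inverse temperature beta = 1 / log n.
log_post = beta * loglik + np.log(prior)
w = np.exp(log_post - log_post.max())
w /= w.sum()
wbic = np.sum(w * (-loglik))

print(f"Bayes free energy F = {F:.2f},  WBIC = {wbic:.2f}")
```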
Nearly Weighted Risk Minimal Unbiased Estimation
Journal of Econometrics 209 (2019) 18–34. Nearly weighted risk minimal unbiased estimation. Ulrich K. Müller (Economics Department, Princeton University, United States) and Yulong Wang (Economics Department, Syracuse University, United States). Abstract: Consider a small-sample parametric estimation problem, such as the estimation of the coefficient in a Gaussian AR(1). We develop a numerical algorithm that determines an estimator that is nearly (mean or median) unbiased, and among all such estimators, comes close to minimizing a weighted average risk criterion. We also apply our generic approach to the median unbiased estimation of the degree of time variation in a Gaussian local-level model, and to a quantile unbiased point forecast for a Gaussian AR(1) process. Article history: Received 26 July 2017; Received in revised form 7 August 2018; Accepted 27 November 2018; Available online 18 December 2018. JEL classification: C13, C22. Keywords: Mean bias, Median bias, Autoregression, Quantile unbiased forecast. © 2018 Elsevier B.V. All rights reserved. 1. Introduction. Competing estimators are typically evaluated by their bias and risk properties, such as their mean bias and mean-squared error, or their median bias. Often estimators have no known small sample optimality. What is more, if the estimation problem does not reduce to a Gaussian shift experiment even asymptotically, then in many cases, no optimality -
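The finite-sample bias that motivates the paper is easy to see by simulation. The sketch below (my own illustration, not the authors' algorithm) shows the downward mean and median bias of the least-squares estimate of a Gaussian AR(1) coefficient in a small sample.

```python
# Illustrative simulation of the finite-sample bias of the AR(1) OLS coefficient.
import numpy as np

rng = np.random.default_rng(9)
rho, T, reps = 0.9, 50, 5_000
est = np.empty(reps)
for r in range(reps):
    y = np.empty(T)
    y[0] = rng.normal() / np.sqrt(1 - rho ** 2)            # draw from the stationary law
    for t in range(1, T):
        y[t] = rho * y[t - 1] + rng.normal()
    est[r] = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)  # OLS slope without intercept

print(f"true rho = {rho}, mean estimate = {est.mean():.3f}, median = {np.median(est):.3f}")
```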
STAT 509 – Section 4.1 – Estimation
STAT 509 – Section 4.1 – Estimation. Parameter: A numerical characteristic of a population. Examples: Statistic: A quantity that we can calculate from sample data that summarizes a characteristic of that sample. Examples: Point Estimator: A statistic which is a single number meant to estimate a parameter. It would be nice if the average value of the estimator (over repeated sampling) equaled the target parameter. An estimator is called unbiased if the mean of its sampling distribution is equal to the parameter being estimated. Examples: Another nice property of an estimator: we want it to be as precise as possible. The standard deviation of a statistic's sampling distribution is called the standard error of the statistic. The standard error of the sample mean Ȳ is σ/√n. Note: As the sample size gets larger, the spread of the sampling distribution gets smaller. When the sample size is large, the sample mean varies less across samples. Evaluating an estimator: (1) Is it unbiased? (2) Does it have a small standard error? Interval Estimates • With a point estimate, we used a single number to estimate a parameter. • We can also use a set of numbers to serve as "reasonable" estimates for the parameter. Example: Assume we have a sample of size n from a normally distributed population. We know: T = (Ȳ − µ)/(s/√n) has a t-distribution with n − 1 degrees of freedom. (Exactly true when data are normal, approximately true when data non-normal but n is large.) So: 1 − α = P(−t_{n−1, α/2} ≤ (Ȳ − µ)/(s/√n) ≤ t_{n−1, α/2}), where t_{n−1, α/2} = the t-value with α/2 area to the right (can be found from Table 2). This formula is called a "confidence interval" for µ.
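A sketch of the resulting t-based confidence interval, with made-up sample values rather than anything from the course:

```python
# Illustrative t-based confidence interval for the population mean mu.
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
y = rng.normal(loc=12.0, scale=3.0, size=15)      # sample from a normal population
alpha = 0.05

ybar, s, n = y.mean(), y.std(ddof=1), len(y)
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)     # t-value with alpha/2 area to the right
half_width = t_crit * s / np.sqrt(n)
print(f"{100 * (1 - alpha):.0f}% CI for mu: ({ybar - half_width:.2f}, {ybar + half_width:.2f})")
```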
THE ONE-SAMPLE Z TEST
10 The One-Sample z Test. Only the Lonely. Difficulty Scale ☺ ☺ ☺ (not too hard—this is the first chapter of this kind, but you know more than enough to master it). WHAT YOU WILL LEARN IN THIS CHAPTER: • Deciding when the z test for one sample is appropriate to use • Computing the observed z value • Interpreting the z value • Understanding what the z value means • Understanding what effect size is and how to interpret it. INTRODUCTION TO THE ONE-SAMPLE z TEST. Lack of sleep can cause all kinds of problems, from grouchiness to fatigue and, in rare cases, even death. So, you can imagine health care professionals' interest in seeing that their patients get enough sleep. This is especially the case for patients who are ill and have a real need for the healing and rejuvenating qualities that sleep brings. Dr. Joseph Cappelleri and his colleagues looked at the sleep difficulties of patients with a particular illness, fibromyalgia, to evaluate the usefulness of the Medical Outcomes Study (MOS) Sleep Scale as a measure of sleep problems. Although other analyses were completed, including one that compared a treatment group and a control group with one another, the important analysis (for our discussion) was the comparison of participants' MOS scores with national MOS norms. Such a comparison between a sample's mean score (the MOS score for participants in this study) and a population's mean score (the norms) necessitates the use of a one-sample z test.
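The comparison described here reduces to a one-sample z test of a sample mean against known population norms. The sketch below uses entirely hypothetical numbers (not the MOS Sleep Scale values from the study) to show the computation.

```python
# Illustrative one-sample z test of a sample mean against population norms.
import math
from scipy import stats

sample_mean, n = 42.0, 120        # hypothetical sample summary
pop_mean, pop_sd = 50.0, 10.0     # hypothetical national norms

z = (sample_mean - pop_mean) / (pop_sd / math.sqrt(n))
p_value = 2 * stats.norm.sf(abs(z))               # two-tailed p-value
print(f"z = {z:.2f}, p = {p_value:.4g}")
```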