All-In-One Robust Estimator of the Gaussian Mean


Submitted to the Annals of Statistics

BY ARNAK S. DALALYAN$^1$ AND ARSHAK MINASYAN$^2$

$^1$ENSAE-CREST, [email protected]
$^2$Yerevan State University, YerevaNN, [email protected]

The goal of this paper is to show that a single robust estimator of the mean of a multivariate Gaussian distribution can enjoy five desirable properties. First, it is computationally tractable in the sense that it can be computed in a time which is at most polynomial in the dimension, the sample size and the logarithm of the inverse of the contamination rate. Second, it is equivariant under translations, uniform scaling and orthogonal transformations. Third, it has a high breakdown point equal to 0.5, and a nearly-minimax-rate breakdown point approximately equal to 0.28. Fourth, it is minimax rate optimal, up to a logarithmic factor, when the data consist of independent observations corrupted by adversarially chosen outliers. Fifth, it is asymptotically efficient when the rate of contamination tends to zero. The estimator is obtained by an iterative reweighting approach. Each sample point is assigned a weight that is iteratively updated by solving a convex optimization problem. We also establish a dimension-free non-asymptotic risk bound for the expected error of the proposed estimator. It is the first result of this kind in the literature and involves only the effective rank of the covariance matrix. Finally, we show that the obtained results can be extended to sub-Gaussian distributions, as well as to the cases of unknown rate of contamination or unknown covariance matrix.

CONTENTS

1 Introduction
2 Desirable properties of a robust estimator
3 Iterative reweighting approach
4 Relation to prior work and discussion
5 Formal statement of main building blocks
6 Sub-Gaussian distributions, high-probability bounds and adaptation
7 Empirical results
8 Postponed proofs

arXiv:2002.01432v2 [math.ST] 4 Mar 2021

1. Introduction. Robust estimation is one of the most fundamental problems in statistics. Its goal is to design efficient methods capable of processing data sets contaminated by outliers, so that these outliers have little influence on the final result. The notion of an outlier is hard to define for a single data point. It is also hard, inefficient and often impossible to clean data by removing the outliers. Instead, one can build methods that take as input the contaminated data set and provide as output an estimate which is not very sensitive to the contamination.

AMS 2000 subject classifications: Primary 62H12; secondary 62F35.
Keywords and phrases: Gaussian mean, robust estimation, breakdown point, minimax rate, computational tractability.

Recent advances in data acquisition and computational power have provoked a revival of interest in robust estimation and learning, with a focus on finite-sample results and computationally tractable procedures, in contrast to the more traditional studies analyzing the asymptotic properties of such statistical methods.

This paper builds on recent advances made in robust estimation and suggests a method that has attractive properties from both the asymptotic and the finite-sample points of view. Furthermore, it is computationally tractable and its statistical complexity depends optimally on the dimension. As a matter of fact, we even show that what really matters is the intrinsic dimension, defined in the Gaussian model as the effective rank of the covariance matrix. Note that in the framework of robust estimation, the high-dimensional setting is qualitatively different from the one-dimensional one. This qualitative difference appears at two levels. First, from a computational point of view, the running time of several robust methods scales poorly with the dimension.
Second, from a statistical point of view, while a simple "remove then average" strategy might be successful in low-dimensional settings, it can easily be seen to fail in the high-dimensional case. Indeed, assume that for some $\varepsilon \in (0, 1/2)$, $p \in \mathbb{N}$, and $n \in \mathbb{N}$, the data $X_1, \dots, X_n$ consist of $n(1-\varepsilon)$ points (inliers) drawn from a $p$-dimensional Gaussian distribution $\mathcal{N}_p(0, \mathbf{I}_p)$ (where $\mathbf{I}_p$ is the $p \times p$ identity matrix) and $\varepsilon n$ points (outliers) equal to a given vector $u$. Consider an idealized setting in which, for a given threshold $r > 0$, an oracle tells the user whether or not $X_i$ is within a distance $r$ of the true mean $0$. A simple strategy for robust mean estimation consists of removing all the points of Euclidean norm larger than $2\sqrt{p}$ and averaging all the remaining points. If the norm of $u$ is equal to $\sqrt{p}$, one can check that the distance between this estimator and the true mean $\mu = 0$ is of order $\sqrt{p/n} + \varepsilon\|u\|_2 = \sqrt{p/n} + \varepsilon\sqrt{p}$. This error rate is provably optimal in the low-dimensional setting $p = O(1)$, but suboptimal as compared to the optimal rate $\sqrt{p/n} + \varepsilon$ when the dimension $p$ is not constant. The reason for this suboptimality is that the individually harmless outliers, lying close to the bulk of the point cloud, have a strong joint impact on the quality of estimation. We postpone a review of the relevant prior work to Section 4 in order to ease comparison with our results, and proceed here with a summary of our contributions.
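To make the failure of this "remove then average" strategy concrete, here is a small simulation; `numpy` is assumed, and the particular values of $n$, $p$ and $\varepsilon$ are chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, eps = 5000, 400, 0.1

# Inliers from N_p(0, I_p); all outliers placed at a vector u with ||u||_2 = sqrt(p),
# so every outlier individually looks like a typical Gaussian sample.
n_out = int(eps * n)
inliers = rng.standard_normal((n - n_out, p))
u = np.full(p, 1.0)                 # ||u||_2 = sqrt(p)
outliers = np.tile(u, (n_out, 1))
X = np.vstack([inliers, outliers])

# "Remove then average": discard points with Euclidean norm above 2*sqrt(p).
kept = X[np.linalg.norm(X, axis=1) <= 2 * np.sqrt(p)]
err = np.linalg.norm(kept.mean(axis=0))   # the true mean is 0

# No outlier is removed (||u||_2 = sqrt(p) < 2*sqrt(p)), so the error is
# of order eps * sqrt(p), far above the optimal rate sqrt(p/n) + eps.
print(err, eps * np.sqrt(p))
```

The oracle-based removal step cannot help here: each outlier lies well inside the bulk of the point cloud, and only their joint contribution pushes the average away from the true mean.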
In the context of a data set subject to fully adversarial corruption, we introduce a new estimator of the Gaussian mean that enjoys the following properties (the precise meaning of these properties is given in Section 2):

• it is computable in polynomial time,
• it is equivariant with respect to similarity transformations (translations, uniform scaling and orthogonal transformations),
• it has a high (minimax) breakdown point: $\varepsilon^* = (5 - \sqrt{5})/10 \approx 0.28$,
• it is minimax rate optimal, up to a logarithmic factor,
• it is asymptotically efficient when the rate of contamination tends to zero,
• for inhomogeneous covariance matrices, it achieves a better sample complexity than all the other previously studied methods.

In order to keep the presentation simple, all the aforementioned results are established in the case where the inliers are drawn from a Gaussian distribution. We then show that the extension to sub-Gaussian distributions can be carried out along the same lines. Furthermore, we prove that, using Lepski's method, one can dispense with the knowledge of the contamination rate. More precisely, we establish that the rate $\sqrt{p/n} + \varepsilon\sqrt{\log(1/\varepsilon)}$ can be achieved without any information on $\varepsilon$ other than $\varepsilon < (5 - \sqrt{5})/10 \approx 0.28$. Finally, we prove that the same order of magnitude of the estimation error is achieved when the covariance matrix $\Sigma$ is unknown but isotropic (i.e., proportional to the identity matrix). When the covariance matrix is an arbitrary unknown matrix with bounded operator norm, our estimator has an error of order $\sqrt{p/n} + \sqrt{\varepsilon}$, which is the best known rate of estimation by a computationally tractable procedure in the case of unknown covariance matrices.

The rest of this paper is organized as follows. We complete this introduction by presenting the notation used throughout the paper.
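The iterative reweighting idea mentioned in the abstract can be sketched generically as follows. This is an illustrative heuristic under our own assumptions (the function name, the exponential down-weighting rule and the iteration count are ours), not the paper's convex program: the authors update the weights by solving a convex optimization problem, whereas the sketch below simply down-weights points that deviate most along the direction of largest weighted spread.

```python
import numpy as np

def iteratively_reweighted_mean(X, n_iter=20):
    """Generic iterative reweighting for robust mean estimation.

    At each step, points deviating most along the top eigenvector of the
    weighted covariance (the direction that outliers inflate) lose weight.
    Illustrative heuristic only; not the paper's estimator.
    """
    n, p = X.shape
    w = np.full(n, 1.0 / n)                 # start from uniform weights
    for _ in range(n_iter):
        mu = w @ X                          # current weighted mean
        R = X - mu
        C = R.T @ (R * w[:, None])          # weighted covariance matrix
        _, vecs = np.linalg.eigh(C)
        v = vecs[:, -1]                     # direction of largest spread
        scores = (R @ v) ** 2               # squared deviation along it
        # down-weight the most deviating points, then renormalize
        w = w * np.exp(-scores / (scores.max() + 1e-12))
        w = w / w.sum()
    return w @ X
```

On a contaminated sample, the weights of a tight outlier cluster decay geometrically across iterations, while inlier weights stay nearly uniform, so the weighted mean drifts back toward the inlier bulk.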
Section 2 describes the problem setting and provides the definitions of the properties of robust estimators such as rate optimality and the breakdown point. The iteratively reweighted mean estimator is introduced in Section 3. This section also contains the main facts characterizing the iteratively reweighted mean estimator along with their high-level proofs. A detailed discussion of the relation to prior work is included in Section 4. Section 5 is devoted to a formal statement of the main building blocks of the proofs. Extensions to the cases of sub-Gaussian distributions and unknown $\varepsilon$ or $\Sigma$ are examined in Section 6. Some empirical results illustrating our theoretical claims are reported in Section 7. Postponed proofs are gathered in Section 8 and in the appendix.

For any vector $v$, we denote by $\|v\|_2$ the standard Euclidean norm, by $\|v\|_1$ the sum of the absolute values of its entries, and by $\|v\|_\infty$ its largest entry in absolute value. The tensor product of $v$ with itself is denoted by $v^{\otimes 2} = vv^\top$. We denote by $\Delta^{n-1}$ and $\mathbb{S}^{n-1}$, respectively, the probability simplex and the unit sphere in $\mathbb{R}^n$. For any symmetric matrix $\mathbf{M}$, $\lambda_{\max}(\mathbf{M})$ is the largest eigenvalue of $\mathbf{M}$, while $\lambda_{\max,+}(\mathbf{M})$ is its positive part. The operator norm of $\mathbf{M}$ is denoted by $\|\mathbf{M}\|_{\mathrm{op}}$. We will often use the effective rank $r_{\mathbf{M}}$ defined as $\mathrm{Tr}(\mathbf{M})/\|\mathbf{M}\|_{\mathrm{op}}$, where $\mathrm{Tr}(\mathbf{M})$ is the trace of the matrix $\mathbf{M}$. For symmetric matrices $\mathbf{A}$ and $\mathbf{B}$ of the same size, we write $\mathbf{A} \succeq \mathbf{B}$ if the matrix $\mathbf{A} - \mathbf{B}$ is positive semidefinite. For a rectangular $p \times n$ matrix $\mathbf{A}$, we let $s_{\min}(\mathbf{A})$ and $s_{\max}(\mathbf{A})$ be the smallest and the largest singular values of $\mathbf{A}$, defined respectively as $s_{\min}(\mathbf{A}) = \inf_{v \in \mathbb{S}^{n-1}} \|\mathbf{A}v\|_2$ and $s_{\max}(\mathbf{A}) = \sup_{v \in \mathbb{S}^{n-1}} \|\mathbf{A}v\|_2$. The set of all $p \times p$ positive semidefinite matrices is denoted by $\mathcal{S}_+^p$.

2. Desirable properties of a robust estimator. We consider the setting in which the sample points are corrupted versions of independent and identically distributed random vectors drawn from a $p$-variate Gaussian distribution with mean $\mu^*$ and covariance matrix $\Sigma$.
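As a quick sanity check on the effective rank $r_{\mathbf{M}} = \mathrm{Tr}(\mathbf{M})/\|\mathbf{M}\|_{\mathrm{op}}$, here is a minimal `numpy` helper (the function name is our own):

```python
import numpy as np

def effective_rank(M):
    """Effective rank r_M = Tr(M) / ||M||_op of a symmetric PSD matrix M."""
    eigvals = np.linalg.eigvalsh(M)
    return eigvals.sum() / eigvals.max()

# Identity in dimension p: the effective rank equals the ambient dimension p.
p = 100
print(effective_rank(np.eye(p)))            # 100.0

# Strongly inhomogeneous spectrum: one large eigenvalue dominates, so the
# effective rank falls far below the ambient dimension.
M = np.diag([100.0] + [1.0] * (p - 1))
print(effective_rank(M))                    # (100 + 99) / 100 = 1.99
```

This is why risk bounds stated in terms of $r_\Sigma$ can be much sharper than dimension-dependent ones when the covariance spectrum is spiked.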
In what follows, we assume that the rate of contamination and the covariance matrix are known and can therefore be used for constructing an estimator of $\mu^*$. We present in Section 6 some additional results which are valid under relaxations of this assumption.

DEFINITION 1. We say that the distribution $P_n$ of the data $X_1, \dots, X_n$ is Gaussian with adversarial contamination, denoted by $P_n \in \mathrm{GAC}(\mu^*, \Sigma, \varepsilon)$ with $\varepsilon \in (0, 1/2)$ and $\Sigma \succ 0$, if there is a set of $n$ independent and identically distributed random vectors $Y_1, \dots, Y_n$ drawn from $\mathcal{N}_p(\mu^*, \Sigma)$ satisfying $|\{i : X_i \neq Y_i\}| \leq \varepsilon n$.

In what follows, the sample points $X_i$ with indices in the set $\mathcal{O} = \{i : X_i \neq Y_i\}$ are called outliers, while all the other sample points are called inliers.
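The contamination model of Definition 1 is easy to simulate. In the definition, the replaced points may be chosen arbitrarily, possibly depending on the whole sample; the sketch below uses a much simpler illustrative "adversary" (the function name and the particular outlier location are our own choices) that plants all outliers at a fixed point:

```python
import numpy as np

def sample_gac(n, mu, Sigma, eps, rng):
    """Draw a sample from the adversarial-contamination model of Definition 1.

    Y_1, ..., Y_n are i.i.d. N_p(mu, Sigma); at most eps * n of them are then
    replaced by arbitrary points. Here the "adversary" plants every outlier at
    mu + 3 * sqrt(diag(Sigma)); any other replacement would also be admissible.
    """
    Y = rng.multivariate_normal(mu, Sigma, size=n)
    X = Y.copy()
    n_out = int(eps * n)
    X[:n_out] = mu + 3.0 * np.sqrt(np.diag(Sigma))   # replaced points (outliers)
    return X, (np.arange(n) < n_out)                  # sample and outlier mask

rng = np.random.default_rng(0)
X, is_outlier = sample_gac(1000, np.zeros(5), np.eye(5), eps=0.1, rng=rng)
print(is_outlier.sum())   # 100, so |{i : X_i != Y_i}| <= eps * n holds
```

Note that the set $\mathcal{O}$ is known to the simulation but not to the estimator: a robust procedure only sees $X_1, \dots, X_n$.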