A Robust Univariate Mean Estimator Is All You Need


A Robust Univariate Mean Estimator is All You Need

Adarsh Prasad, Sivaraman Balakrishnan, Pradeep Ravikumar
Machine Learning Department and Department of Statistics and Data Science, Carnegie Mellon University

Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS) 2020, Palermo, Italy. PMLR: Volume 108. Copyright 2020 by the author(s).

Abstract

We study the problem of designing estimators when the data has heavy tails and is corrupted by outliers. In such an adversarial setup, we aim to design statistically optimal estimators for flexible non-parametric distribution classes such as distributions with bounded 2k-moments and symmetric distributions. Our primary workhorse is a conceptually simple reduction from multivariate estimation to univariate estimation. Using this reduction, we design estimators which are optimal in both heavy-tailed and contaminated settings. Our estimators achieve an optimal dimension-independent bias in the contaminated setting, while simultaneously achieving high-probability error guarantees with optimal sample complexity. These results provide some of the first such estimators for a broad range of problems, including mean estimation, sparse mean estimation, covariance estimation, sparse covariance estimation and sparse PCA.

1 Introduction

Modern data sets that arise in various branches of science and engineering are characterized by their ever-increasing scale and richness. This has been spurred in part by easier access to computer, internet and various sensor-based technologies that enable the automated acquisition of such heterogeneous datasets. On the flip side, these large and rich data sets are no longer carefully curated, are often collected in a decentralized, distributed fashion, and consequently are plagued with the complexities of heterogeneity, adversarial manipulations, and outliers. The analysis of these huge datasets is thus fraught with methodological challenges.

Understanding the fundamental challenges and trade-offs in handling such "dirty data" is precisely the premise of the field of robust statistics. Here, the aforementioned complexities are largely formalized under two different models of robustness: (1) the heavy-tailed model, in which the sampling distribution can have thick tails — for instance, only low-order moments of the distribution are assumed to be finite; and (2) the ε-contamination model, in which the sampling distribution is modeled as a well-behaved distribution contaminated by an ε fraction of arbitrary outliers. In each case, classical estimators of the distribution (based for instance on the maximum likelihood estimator) can behave considerably worse (potentially arbitrarily worse) than under standard settings where the data is better behaved, satisfying various regularity properties. In particular, these classical estimators can be extremely sensitive to the tails of the distribution or to the outliers, and the broad goal in robust statistics is to construct estimators that improve on these classical estimators by reducing their sensitivity to outliers.

Heavy-Tailed Model. Concretely, focusing on the fundamental problem of robust mean estimation, in the heavy-tailed model we observe n samples x_1, ..., x_n drawn independently from a distribution P, which is only assumed to have finite low-order moments (for instance, P only has finite variance). The goal of past work (Catoni, 2012; Minsker, 2015; Lugosi and Mendelson, 2017; Catoni and Giulini, 2017) has been to design an estimator θ̂_n of the true mean µ of P which has a small ℓ2-error with high probability. Formally, for a given δ > 0, we would like an estimator with minimal r_δ such that

    P(‖θ̂_n − µ‖_2 ≤ r_δ) ≥ 1 − δ.    (1)

As a benchmark for estimators in the heavy-tailed model, we observe that when P is a multivariate normal distribution (or more generally a sub-Gaussian distribution) with mean µ and covariance Σ, it can be shown (see Hanson and Wright, 1971) that the sample mean µ̂_n = (1/n) Σ_i x_i satisfies, with probability at least 1 − δ,

    ‖µ̂_n − µ‖_2 ≲ √(trace(Σ)/n) + √(‖Σ‖_2 log(1/δ)/n),    (2)

where ‖Σ‖_2 denotes the operator norm of the covariance matrix Σ. (Here and throughout the paper, ≲ denotes an inequality with universal constants dropped for conciseness.) This bound is referred to as a sub-Gaussian-style error bound. However, for heavy-tailed distributions, as for instance shown in Catoni (2012), the sample mean only satisfies the sub-optimal bound r_δ = Ω(√(d/(nδ))). Somewhat surprisingly, recent work (Lugosi and Mendelson, 2017) showed that the sub-Gaussian error bound is achievable while only assuming that P has finite variance, but by a carefully designed estimator. In the univariate setting, the classical median-of-means estimator (Alon et al., 1996; Nemirovski and Yudin, 1983; Jerrum et al., 1986) and Catoni's M-estimator (Catoni, 2012) achieve this surprising result, but designing such estimators in the multivariate setting has proved challenging. Estimators that achieve truly sub-Gaussian bounds, but which are computationally intractable, were proposed recently by Lugosi and Mendelson (2017) and subsequently Catoni and Giulini (2017). Hopkins (2018) and Cherapanamjeri et al. (2019) developed a sum-of-squares based relaxation of Lugosi and Mendelson (2017)'s estimator, thereby giving a polynomial-time algorithm which achieves optimal rates.

Huber's ε-Contamination Model. In this setting, instead of observing samples directly from the true distribution P, we observe samples drawn from P_ε, which for an arbitrary distribution Q is defined as the mixture

    P_ε = (1 − ε)P + εQ.    (3)

The distribution Q allows one to model arbitrary outliers, which may correspond to gross corruptions, or subtle deviations from the true model. There has been a lot of classical work studying estimators in the ε-contamination model under the umbrella of robust statistics (see for instance Hampel et al., 1986, and references therein). However, most of the estimators that come with strong guarantees are computationally intractable (Tukey, 1975), while others are statistically sub-optimal heuristics (Hastings et al., 1947). Recently, there has been substantial progress (Diakonikolas et al., 2016; Lai et al., 2016; Kothari et al., 2018; Charikar et al., 2017; Diakonikolas et al., 2017; Balakrishnan et al., 2017; Prasad et al., 2018; Diakonikolas et al., 2018) in designing provably robust estimators which are computationally tractable while achieving near-optimal contamination dependence (i.e., dependence on the fraction of outliers ε) for computing means and covariances. In the Huber model, using information-theoretic lower bounds (Chen et al., 2016; Lai et al., 2016; Hopkins and Li, 2018), it can be shown that any estimator must suffer a non-zero bias (the asymptotic error as the number of samples goes to infinity). For example, for the class of distributions with bounded variance, Σ ⪯ σ²I_p, the bias of any estimator is lower bounded by Ω(σ√ε). Surprisingly, the optimal bias that can be achieved is often independent of the data dimension. In other words, in many interesting cases optimally robust estimators in Huber's model can tolerate a constant fraction of outliers, independent of the dimension.

While the aforementioned recent estimators for mean estimation under Huber contamination have polynomial computational complexity, their corresponding sample complexities are only known to be polynomial in the dimension p. For example, Kothari et al. (2018) and Hopkins and Li (2018) designed estimators which achieve optimal bias for distributions with certifiably bounded 2k-moments, but their statistical sample complexity scales as O(p^k). Steinhardt et al. (2017) studied mean estimation and presented an estimator which has a sample complexity of Ω(p^1.5).

Despite their apparent similarity, developments of estimators that are robust in each of these models have remained relatively independent. Focusing on mean estimation, we notice subtle differences: in the heavy-tailed model our target is the mean of the sampling distribution, whereas in the Huber model our target is the mean of the decontaminated sampling distribution P. Beyond this distinction, it is also important to note that, as highlighted above, the natural focus in heavy-tailed mean estimation is on achieving strong, high-probability error guarantees, while in Huber's model the focus has been on achieving dimension-independent bias.

Contributions. In this work, we aim to design estimators which are statistically optimally robust in both models simultaneously, i.e., they achieve a dimension-independent asymptotic bias in the ε-contamination model and achieve high-probability deviation bounds similar to (2). Our main workhorse is a conceptually simple way of reducing multivariate estimation to the univariate setting. Then, by carefully solving mean estimation in the univariate setting, we are able to design optimal estimators for multivariate mean and covariance estimation for non-parametric distribution classes, both in the low-dimensional (n ≥ p) and high-dimensional (n < p) settings. We achieve these rates for non-parametric distribution classes such as distributions with bounded 2k-moments and the class of symmetric distributions.

2 Background and Setup

In this section, we formally define the two classes of distributions which we work with in this paper: (1) bounded-2k-moment distributions and (2) symmetric distributions. Below, P_sym denotes the class of H-symmetric distributions with unique center of H-symmetry θ. Moreover, suppose P_sym^{t0,κ} ⊂ P_sym is the class of distributions such that for any P ∈ P_sym^{t0,κ} the CDF of the univariate projection u^T P, given by F_{u^T P}, increases at least linearly around u^T θ. Formally, for all x_1 ∈ [med(u^T P), F_{u^T P}^{-1}(1/2 + t_0)] we have that

    F_{u^T P}(x_1) − 1/2 ≥ (1/κ_{u,P}) (x_1 − med(u^T P)),    (4)

and for all x_2 ∈ [F_{u^T P}^{-1}(1/2 − t_0), med(u^T P)] we have the analogous condition.
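The univariate median-of-means estimator cited above is simple to state: split the n samples into k equal-sized blocks, average each block, and return the median of the block averages. A minimal sketch follows; the block count k and the Student-t example are illustrative choices, not prescriptions from the paper (a common rule of thumb sets k on the order of log(1/δ)):

```python
import numpy as np

def median_of_means(x, k):
    """Univariate median-of-means: split x into k blocks,
    average each block, and return the median of the block means."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if not 1 <= k <= n:
        raise ValueError("number of blocks k must be in [1, n]")
    # Drop the last n % k samples so every block has equal size.
    blocks = x[: n - n % k].reshape(k, -1)
    return float(np.median(blocks.mean(axis=1)))

rng = np.random.default_rng(0)
# Heavy-tailed sample: Student's t with 2.5 degrees of freedom has finite
# variance but infinite higher moments; its true mean is 0.
sample = rng.standard_t(df=2.5, size=10_000)
print(median_of_means(sample, k=30))
```

A single extreme block can corrupt at most one of the k block means, so the median of the block means is insensitive to a few wild samples even though each block mean is not.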
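The contrast behind the mixture in (3) can be simulated directly: draw a (1 − ε) fraction of samples from a well-behaved P and an ε fraction from an adversarial Q, then compare the sample mean with a robust univariate estimator such as the median. The outlier location (1000) and ε = 0.05 below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, eps = 10_000, 0.05            # ε-fraction of arbitrary outliers
n_out = int(eps * n)

# Inliers from P = N(0, 1); outliers from an adversarial Q far away.
inliers = rng.normal(loc=0.0, scale=1.0, size=n - n_out)
outliers = np.full(n_out, 1_000.0)   # Q = point mass at 1000 (illustrative)
sample = rng.permutation(np.concatenate([inliers, outliers]))

# The sample mean is dragged toward Q by roughly eps * 1000 = 50,
# while the median shifts only by an O(eps) amount.
print("sample mean:", sample.mean())
print("median     :", np.median(sample))
```

This is the non-zero bias discussed above in miniature: no estimator can undo the ε fraction of corruption entirely, but a robust estimator keeps the error at O(ε) rather than letting Q move it arbitrarily far.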
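The paper's reduction works through one-dimensional projections u^T x of the data. As a toy illustration only — a naive coordinate-wise application, which does not by itself achieve the paper's dimension-independent multivariate guarantees — one can run a univariate robust estimator along each coordinate; all names and parameters here are illustrative:

```python
import numpy as np

def univariate_mom(x, k):
    """Median-of-means along a single coordinate."""
    x = np.asarray(x, dtype=float)
    blocks = x[: len(x) - len(x) % k].reshape(k, -1)
    return float(np.median(blocks.mean(axis=1)))

def coordinatewise_mom(X, k):
    """Toy multivariate reduction: apply the univariate estimator to each
    of the p coordinates of the n-by-p data matrix X. (The paper instead
    controls all one-dimensional projections u^T x, which is what yields
    its optimal multivariate rates.)"""
    X = np.asarray(X, dtype=float)
    return np.array([univariate_mom(col, k) for col in X.T])

rng = np.random.default_rng(2)
# n samples in p = 4 dimensions from a heavy-tailed product distribution,
# shifted so the true mean is (1, -2, 0, 3).
X = rng.standard_t(df=3.0, size=(5_000, 4)) + np.array([1.0, -2.0, 0.0, 3.0])
print(coordinatewise_mom(X, k=25))
```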