Transformations and Bayesian Estimation of Skewed and Heavy-Tailed Densities


Dissertation Presented in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy in the Graduate School of The Ohio State University

By Andrew Bean, B.A., M.S.
Graduate Program in Statistics, The Ohio State University, 2017

Dissertation Committee: Xinyi Xu, Co-Advisor; Steven N. MacEachern, Co-Advisor; Yoonkyung Lee; Matthew T. Pratola

© Copyright by Andrew Bean, 2017

Abstract

In data analysis applications characterized by large and possibly irregular data sets, nonparametric statistical techniques aim to ensure that, as the sample size grows, all unusual features of the data generating process can be captured. Good large-sample performance can be guaranteed in broad classes of problems. Yet within these broad classes, some problems may be substantially more difficult than others. This fact, long recognized in classical nonparametrics, also holds in the growing field of Bayesian nonparametrics, where flexible prior distributions are developed to allow for an infinite-dimensional set of possible truths.

This dissertation studies the Bayesian approach to the classic problem of nonparametric density estimation, in the presence of specific irregularities such as heavy tails and skew. The problem of estimating an unknown probability density is recognized as being harder when the density is skewed or heavy-tailed than when it is symmetric and light-tailed. It is a more challenging problem for classical kernel density estimators, where the expected squared-error loss is higher for heavier-tailed densities. It is also a more challenging problem in Bayesian density estimation, where heavy tails preclude the analytical treatment required to establish a large-sample convergence rate for the popular Dirichlet process (DP) mixture model.

Our proposed approach addresses these features by incorporating a low-dimensional parametric transformation of the sample, estimated from the data, with the aim of setting up an easier density estimation problem on the transformed scale. This strategy was proposed earlier in combination with kernel density estimators, and we illustrate its usefulness in the Bayesian context. Further, we develop a set of transformations estimated in a way that ensures the fastest proven convergence rate for the DP mixture is applicable to the transformed problem.

The transformation-density estimation technique makes advantageous use of a parametric pre-analysis to address specific irregularities in the data generating process. Since the parametric stage is low-dimensional, and governed by a faster convergence rate, the asymptotic performance of the model is enhanced without slowing down the overall convergence rate. We consider other settings where this recipe for semiparametric analysis, with parametric sub-analyses designed to address specific irregularities or to simplify the main nonparametric component of the analysis, might be beneficial.

Dedication

To my family.

Acknowledgments

This dissertation is the culmination of a six-year odyssey at Ohio State. I was "carried to Ohio in a swarm of bees," as the rock band The National put it. (Figuratively, of course, when it comes to the bees.) My studies have taken me far from my home state of Arizona. They have taken me far from friends and family, who, despite the distance, have continued to provide much-needed balance during my time in Ohio.

In the end, Columbus too proved to be a wonderful home away from home. This is mostly thanks to the people I have met here, including the faculty and staff in the Department of Statistics, and friends, roommates, and classmates throughout the years. I will miss them, and will look back on these years with fondness.

I would not have reached this point without the support of several people I want to thank individually. My advisors Xinyi Xu and Steve MacEachern are tremendous role models, both personally and as statisticians and researchers. They were unfailingly supportive and encouraging, even when my work did not proceed smoothly. It has been a privilege to work with them, and I look forward to continuing collaboration in the future. I thank Yoon Lee and Matt Pratola for serving on the committee for this dissertation. Their perspective and input on this work is invaluable.

I have loved teaching statistics at Ohio State (loved everything but the grading, that is), and I became a better teacher during my time here thanks to Michelle Everson, Jonathan Baker, Laura Kubatko, and others. I also thank the faculty and staff of the Mathematics and Computer Science Department at Colorado College; the wonderful teachers there inspired me to pursue my studies this far.

Most of all, I am grateful to my family, my parents Jeff and Sydney and my brother Owen, for their love and support as I worked towards this degree. I am privileged to have had parents and grandparents who gave me the opportunity to succeed at every level of my education. The only way to properly give thanks for this gift is to pass it on. I can only hope to do so with the same selflessness.

Lastly, to Yi: the greatest fortune I've had during my time in Ohio is to have found a partner like you. The daily ups and downs of doctoral studies, the halls of Cockins: neither are romantic, but both seemed that way as we navigated them together. I look forward to writing our next chapters together.

Vita

August 3, 1987: Born, San Francisco, CA, USA
2009: B.A. Mathematics, The Colorado College
2012: M.S. Statistics, The Ohio State University
2012-present: Graduate Teaching Associate, The Ohio State University

Publications

A. Bean, X. Xu, S. N. MacEachern, "Transformations and Bayesian Density Estimation". Electronic Journal of Statistics, 10(2):3355-3373, Nov. 2016.

Fields of Study

Major Field: Statistics

Table of Contents

1. Introduction and Theoretical Motivation
   1.1 Parametric and Nonparametric Asymptotics by Example
   1.2 Asymptotic Properties of Bayesian Posteriors
       1.2.1 Parametric Bayesian Models
       1.2.2 Nonparametric Bayesian Models
   1.3 Outline of the Thesis
2. Frequentist Transformation-Density Estimation
   2.1 Parzen-Type Kernel Density Estimators
       2.1.1 Asymptotic properties of the kernel density estimator
   2.2 Density Estimation with Transformations
       2.2.1 An L2 Criterion for Selecting Transformations
   2.3 Transformation Families
       2.3.1 Parametric transformation, nonparametric density estimation
       2.3.2 Nonparametric transformation, nonparametric density estimation
       2.3.3 Other Transformation Families
3. Density Estimation using Dirichlet Process Mixtures
   3.1 Nonparametric Prior Construction
       3.1.1 The Dirichlet Process
       3.1.2 Dirichlet process mixtures
   3.2 Posterior computation for DP mixtures
   3.3 Performance of Dirichlet process mixtures for density estimation
       3.3.1 Measurements of accuracy for density estimation
       3.3.2 Constructions for the DP mixture prior, and finite-sample performance
       3.3.3 Asymptotic properties of DP mixtures
4. Iterative Transformation and Bayesian Density Estimation
   4.1 Background
   4.2 Transformations
       4.2.1 Family of Transformations
       4.2.2 A Criterion for Estimating Transformation Parameters
       4.2.3 Iterative Transformation Selection
   4.3 Simulation Study
       4.3.1 Simulation Design
       4.3.2 Simulation Results
   4.4 An Application to BMI Modeling
   4.5 Discussion
5. Heavy-Tailed Density Estimation Using Transformations and DP Mixtures
   5.1 Motivation
   5.2 DP Mixture Asymptotics and Distribution Tails
       5.2.1 Convergence Rates for Sub-Gaussian Tails
       5.2.2 Characterization of Heavy-Tailed Distributions
   5.3 A Family of Transformations for Heavy-Tailed Densities
       5.3.1 Transformations to sub-Gaussian tails
       5.3.2 Skew-t cdf-inverse-cdf transformations
       5.3.3 Estimating Skew-t Transformation Parameters
   5.4 Simulations and Data Analysis
       5.4.1 Data Analysis
       5.4.2 Simulation Study
   5.5 Discussion
6. Extensions and Future Work
   6.1 Multivariate Density Estimation
       6.1.1 Transformations to Multivariate sub-Gaussianity
       6.1.2 Bayesian Analysis of Heavy-Tailed Time Series
       6.1.3 Median Regression with a Heavy-Tailed Error Distribution
   6.2 A Recipe for Efficient Semiparametric Analysis

Appendices
A. Asymptotic Expansion of MISE for Kernel Estimates
B. Estimating integrated squared second derivatives

List of Tables

4.1 Ohio Family Health Survey (2008) sample sizes, divided into training and holdout samples.
5.1 Unemployment data: Log predictive scores (5.36) for transformation / DP mixture (5.2) predictive densities.
5.2 Acidity data: Log predictive scores (5.36) for transformation / DP mixture (5.2) predictive densities.
5.3 Griffin (2010) predictive scores for the acidity data.

List of Figures

4.1 DPM density estimates (dashed lines) based on samples of size 100 for two examples of the two-piece distributions (the true densities are shown in solid black). The leftmost density is symmetric, but has t2 tails. The rightmost density has Gaussian tails, but is right-skewed.
4.2 Illustration of the Transformation-DPM technique. The heavy-tailed sample (left column, A1-A3) and skewed sample (right column, B1-B3) of figure 4.1 are transformed according to the symmetrizing and tail-shortening transformations of section 2. The DPM model is fit to the transformed samples in the bottom panels, then back-transformed to give the TDPM estimate on the original scale.
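The caption of figure 4.2 spells out the three steps of the transformation-density estimation recipe: transform the sample, fit the density model on the transformed scale, and back-transform. The sketch below is a minimal illustration of that pipeline, not the dissertation's implementation: a fitted Student-t cdf / normal-inverse-cdf map stands in for the estimated skew-t transformation of chapter 5, and a Gaussian KDE stands in for the DP mixture.

```python
# Minimal sketch of transformation-density estimation (cf. figure 4.2):
# fit a parametric transformation, estimate the density on the transformed
# scale, then map back via the change-of-variables identity
#   f_X(x) = f_Y(t(x)) * |t'(x)|.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = stats.t.rvs(df=2, size=500, random_state=rng)   # heavy-tailed sample

# Parametric stage: fit t degrees of freedom, location, scale by ML.
df_hat, loc_hat, scale_hat = stats.t.fit(x)

# Transform: y = Phi^{-1}(F_t(x)) has light, Gaussian-like tails when
# the fitted t captures the tail behavior of the data.
u = stats.t.cdf(x, df_hat, loc_hat, scale_hat)
y = stats.norm.ppf(u)

# Nonparametric stage on the easier, transformed scale (KDE here,
# a DP mixture in the dissertation).
kde = stats.gaussian_kde(y)

def density_original_scale(x0):
    """Back-transformed estimate f_X(x0) = f_Y(t(x0)) |t'(x0)|."""
    y0 = stats.norm.ppf(stats.t.cdf(x0, df_hat, loc_hat, scale_hat))
    jacobian = stats.t.pdf(x0, df_hat, loc_hat, scale_hat) / stats.norm.pdf(y0)
    return kde(y0) * jacobian

print(density_original_scale(np.array([0.0, 5.0])))
```

The change-of-variables factor is what makes the two-stage scheme coherent: any density estimate on the transformed scale induces a valid density estimate on the original scale.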
Recommended publications
  • Approximated Bayes and Empirical Bayes Confidence Intervals: The Known Variance Case
Ann. Inst. Statist. Math., Vol. 40, No. 4, 747-767 (1988)

APPROXIMATED BAYES AND EMPIRICAL BAYES CONFIDENCE INTERVALS: THE KNOWN VARIANCE CASE

A. J. van der Merwe, P. C. N. Groenewald and C. A. van der Merwe (Department of Mathematical Statistics, University of the Orange Free State, PO Box 339, Bloemfontein, Republic of South Africa). Received June 11, 1986; revised September 29, 1987.

Abstract: In this paper hierarchical Bayes and empirical Bayes results are used to obtain confidence intervals for population means in real problems. This is achieved by approximating the posterior distribution with a Pearson distribution. In the first example, hierarchical Bayes confidence intervals for the Efron and Morris (1975, J. Amer. Statist. Assoc., 70, 311-319) baseball data are obtained. The same methods are used in the second example to obtain confidence intervals for treatment effects, as well as the difference between treatment effects, in an analysis of variance experiment. In the third example, hierarchical Bayes intervals for treatment effects are obtained and compared with normal approximations in the unequal variance case.

Key words and phrases: hierarchical Bayes, empirical Bayes estimation, Stein estimator, multivariate normal mean, Pearson curves, confidence intervals, posterior distribution, unequal variance case, normal approximations.

1. Introduction

In the Bayesian approach to inference, a posterior distribution of the unknown parameters is produced as the normalized product of the likelihood and a prior distribution. Inferences about the unknown parameters are then based on the entire posterior distribution resulting from the one specific data set which has actually occurred. In most hierarchical and empirical Bayes cases these posterior distributions are difficult to derive and cannot be obtained in closed form.
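The abstract's key device is approximating a posterior by a Pearson curve and reading interval endpoints from it. The snippet below is a rough illustration of that idea, not the paper's procedure: it matches the first three moments of simulated posterior draws to a Pearson type III curve (scipy's `pearson3`; the paper selects among Pearson types more generally) and compares the resulting 95% interval with the exact quantiles.

```python
# Rough illustration (assumed toy example): approximate a posterior by a
# Pearson curve matched to its moments, then read off central
# credible-interval endpoints.
import numpy as np
from scipy import stats

# Stand-in "posterior draws": a Gamma(3, scale=2) sample.
draws = np.random.default_rng(1).gamma(shape=3.0, scale=2.0, size=10_000)

m, s = draws.mean(), draws.std(ddof=1)
g1 = stats.skew(draws)                     # third standardized moment

approx = stats.pearson3(g1, loc=m, scale=s)   # Pearson type III fit
lo, hi = approx.ppf([0.025, 0.975])
print(f"approximate 95% interval: ({lo:.2f}, {hi:.2f})")
print("exact gamma quantiles:   ",
      stats.gamma(3.0, scale=2.0).ppf([0.025, 0.975]).round(2))
```

Because the Pearson type III family contains the gamma distributions, the moment-matched approximation here is essentially exact; for genuinely non-Pearson posteriors it is only an approximation, which is the point of the paper's accuracy comparisons.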
  • 1 Estimation and Beyond in the Bayes Universe
ISyE8843A, Brani Vidakovic, Handout 7

1 Estimation and Beyond in the Bayes Universe

1.1 Estimation

No Bayes estimate can be unbiased, but Bayesians are not upset! No Bayes estimate with respect to the squared error loss can be unbiased, except in a trivial case when its Bayes risk is 0. Suppose that for a proper prior $\pi$ the Bayes estimator $\delta_\pi(X)$ is unbiased,
$$(\forall\theta)\quad E^{X|\theta}\,\delta_\pi(X) = \theta.$$
This implies that the Bayes risk is 0. The Bayes risk of $\delta_\pi(X)$ can be calculated as a repeated expectation in two ways,
$$r(\pi, \delta_\pi) = E^{\theta} E^{X|\theta} (\theta - \delta_\pi(X))^2 = E^{X} E^{\theta|X} (\theta - \delta_\pi(X))^2.$$
Thus, conveniently choosing either $E^{\theta} E^{X|\theta}$ or $E^{X} E^{\theta|X}$ and using the properties of conditional expectation, we have
$$\begin{aligned}
r(\pi, \delta_\pi) &= E^{\theta} E^{X|\theta} \theta^2 - E^{\theta} E^{X|\theta} \theta\,\delta_\pi(X) - E^{X} E^{\theta|X} \theta\,\delta_\pi(X) + E^{X} E^{\theta|X} \delta_\pi^2(X) \\
&= E^{\theta} E^{X|\theta} \theta^2 - E^{\theta} \theta\,[E^{X|\theta} \delta_\pi(X)] - E^{X} \delta_\pi(X)\,E^{\theta|X} \theta + E^{X} E^{\theta|X} \delta_\pi^2(X) \\
&= E^{\theta} E^{X|\theta} \theta^2 - E^{\theta} \theta\cdot\theta - E^{X} \delta_\pi(X)\,\delta_\pi(X) + E^{X} E^{\theta|X} \delta_\pi^2(X) = 0.
\end{aligned}$$
Bayesians are not upset. To check for its unbiasedness, the Bayes estimator is averaged with respect to the model measure $(X|\theta)$, and one of the Bayesian commandments is: Thou shall not average with respect to sample space, unless you have Bayesian design in mind. Even frequentists agree that insisting on unbiasedness can lead to bad estimators, and that in their quest to minimize the risk by trading off between variance and bias-squared, a small dosage of bias can help.
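A quick numerical companion to the result above (a toy check added here, not part of the handout): for $X \sim \mathrm{Binomial}(n, \theta)$ with a uniform Beta(1,1) prior, the Bayes estimator $(X+1)/(n+2)$ has bias $(1-2\theta)/(n+2)$, which vanishes only at $\theta = 1/2$; it is biased everywhere else, as the theorem predicts.

```python
# Exact bias of the Bayes estimator (X+1)/(n+2) under a Beta(1,1) prior,
# computed from the binomial pmf: nonzero except at theta = 1/2.
import numpy as np
from scipy.stats import binom

n = 10
x = np.arange(n + 1)
for th in np.linspace(0.05, 0.95, 7):
    e_delta = np.sum(binom.pmf(x, n, th) * (x + 1) / (n + 2))
    print(f"theta={th:.2f}  E[delta]={e_delta:.4f}  bias={e_delta - th:+.4f}")
```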
  • Bayesian Approach in Estimation of Shape Parameter of the Exponentiated Moment Exponential Distribution
Journal of Statistical Theory and Applications, Vol. 17, No. 2 (June 2018) 359-374

BAYESIAN APPROACH IN ESTIMATION OF SHAPE PARAMETER OF THE EXPONENTIATED MOMENT EXPONENTIAL DISTRIBUTION

Kawsar Fatima (Department of Statistics, University of Kashmir, Srinagar, India; [email protected]) and S. P. Ahmad (Department of Statistics, University of Kashmir, Srinagar, India; [email protected]). Received 1 November 2016; accepted 19 June 2017.

Abstract: In this paper, Bayes estimators of the unknown shape parameter of the exponentiated moment exponential distribution (EMED) have been derived using two informative (gamma and chi-square) priors and two non-informative (Jeffreys' and uniform) priors under different loss functions, namely the squared error loss function, the entropy loss function and the precautionary loss function. The maximum likelihood estimator (MLE) is also obtained, and two real-life data sets are used to illustrate the results derived.

Keywords: exponentiated moment exponential distribution; maximum likelihood estimator; Bayesian estimation; priors; loss functions.

2000 Mathematics Subject Classification: 22E46, 53C35, 57S20

1. Introduction

The exponentiated exponential distribution is a specific family of the exponentiated Weibull distribution. In analyzing several lifetime data situations, it has been observed that the two-parameter exponentiated exponential distribution can be used more effectively than the two-parameter gamma or Weibull distributions. When the shape parameter of the exponentiated exponential, gamma, or Weibull distribution equals one, each reduces to the one-parameter exponential distribution; hence, these three distributions are offshoots of the exponential distribution. Moment distributions have a vital role in mathematics and statistics, in particular probability theory, in research related to ecology, reliability, the biomedical field, econometrics, survey sampling and life-testing.
  • Bayes Estimator Recap - Example
Biostatistics 602 - Statistical Inference, Lecture 16: Evaluation of Bayes Estimators. Hyun Min Kang, March 14th, 2013.

Last lecture:
- What is a Bayes estimator?
- Is a Bayes estimator the best unbiased estimator?
- Compared to other estimators, what are the advantages of a Bayes estimator?
- What is a conjugate family?
- What are the conjugate families of the Binomial, Poisson, and Normal distributions?

Recap: Bayes estimator. With parameter $\theta$, prior distribution $\pi(\theta)$, and sampling distribution $X \mid \theta \sim f_X(x \mid \theta)$, the posterior distribution of $\theta$ given $x$ is the joint over the marginal,
$$\pi(\theta \mid x) = \frac{f_X(x \mid \theta)\,\pi(\theta)}{m(x)}, \qquad m(x) = \int f_X(x \mid \theta)\,\pi(\theta)\,d\theta \quad \text{(Bayes' rule)},$$
and the Bayes estimator of $\theta$ is
$$E(\theta \mid x) = \int_{\theta \in \Omega} \theta\,\pi(\theta \mid x)\,d\theta.$$

Recap: example. Let $X_1, \dots, X_n \overset{\text{i.i.d.}}{\sim} \mathrm{Bernoulli}(p)$ with prior $\pi(p) \sim \mathrm{Beta}(\alpha, \beta)$. The prior guess is $\hat p = \alpha/(\alpha+\beta)$, and the posterior distribution is $\pi(p \mid x) \sim \mathrm{Beta}(\sum x_i + \alpha,\; n - \sum x_i + \beta)$. The Bayes estimator is
$$\hat p = \frac{\alpha + \sum x_i}{\alpha + \beta + n} = \frac{\sum x_i}{n}\cdot\frac{n}{\alpha+\beta+n} + \frac{\alpha}{\alpha+\beta}\cdot\frac{\alpha+\beta}{\alpha+\beta+n},$$
a weighted average of the sample mean and the prior guess.

Loss function optimality. Loss function: let $L(\theta, \hat\theta)$ be a function of $\theta$ and $\hat\theta$.
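The weighted-average form of the Bayes estimator above is easy to verify numerically. The following sketch (toy data and hyperparameters chosen for illustration) computes the posterior and checks the identity.

```python
# Numeric check of the Beta-Bernoulli recap: the posterior mean equals a
# weighted average of the sample mean and the prior guess.
import numpy as np

rng = np.random.default_rng(2)
alpha, beta, n, p_true = 2.0, 3.0, 50, 0.7
x = rng.binomial(1, p_true, size=n)
s = x.sum()

post_mean = (alpha + s) / (alpha + beta + n)
weighted = ((s / n) * (n / (alpha + beta + n))
            + (alpha / (alpha + beta)) * ((alpha + beta) / (alpha + beta + n)))
assert np.isclose(post_mean, weighted)
print(f"posterior Beta({alpha + s:.0f}, {beta + n - s:.0f}); "
      f"Bayes estimate {post_mean:.3f}")
```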
  • CSC535: Probabilistic Graphical Models
CSC535: Probabilistic Graphical Models. Bayesian Probability and Statistics. Prof. Jason Pacheco.

Why graphical models? Data elements often have dependence arising from structure (examples: pose estimation, protein structure). Exploit structure to simplify representation and computation.

Why "probabilistic"? Stochastic processes have many sources of uncertainty: randomness in measurement, the state of nature, the process itself. PGMs let us represent and reason about these in structured ways.

What is probability? What does it mean that the probability of heads is 1/2? Two schools of thought. Frequentist perspective: the proportion of successes (heads) in repeated trials (coin tosses). Bayesian perspective: belief in outcomes based on assumptions about nature and the physics of coin flips. Neither is better or worse, but we can compare interpretations.

Administrivia: HW1 is due 11:59pm tonight. Submissions are accepted through Friday, at -0.5 pts per day late. The HW is only worth 4 pts, so the maximum score on Friday is 75%. The late policy applies only to this HW.

Frequentist and Bayesian modeling. Throughout, we distinguish the unknown (e.g., the coin bias) from the data. Frequentist (conditional model): the unknown is a non-random parameter, and we specify the sampling / data-generating distribution. Bayesian (generative model): the unknown is a random (latent) variable, which requires specifying a prior belief in addition to the likelihood.

Frequentist inference example: suppose we observe the outcome of N coin flips. What is the probability of heads (the coin bias)? The coin bias is not random (there is some true value), and uncertainty is reported ...
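The coin-flip example can be made concrete in a few lines. This sketch (an assumed toy illustration, not course code) reports the frequentist MLE next to a Bayesian posterior for the same flips, using a uniform Beta(1,1) prior for the bias.

```python
# Frequentist vs. Bayesian treatment of a coin's bias from N flips.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
flips = rng.binomial(1, 0.65, size=20)   # N = 20 flips, true bias 0.65
heads = flips.sum()

# Frequentist: theta is a fixed unknown; report a point estimate (MLE).
mle = heads / len(flips)

# Bayesian: theta is a latent random variable; with a Beta(1,1) prior
# the posterior is Beta(heads + 1, tails + 1).
posterior = stats.beta(heads + 1, len(flips) - heads + 1)

print(f"MLE: {mle:.3f}")
print(f"posterior mean: {posterior.mean():.3f}, "
      f"95% credible interval: {posterior.ppf([0.025, 0.975]).round(3)}")
```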
  • 9 Bayesian Inference
9 Bayesian inference

(Thomas Bayes, 1702-1761)

9.1 Subjective probability

This is probability regarded as degree of belief. A subjective probability of an event A is assessed as p if you are prepared to stake £pM to win £M, and equally prepared to accept a stake of £pM to win £M. In other words, the bet is fair and you are assumed to behave rationally.

9.1.1 Kolmogorov's axioms

How does subjective probability fit in with the fundamental axioms? Let $\mathcal{A}$ be the set of all subsets of a countable sample space $\Omega$. Then
(i) $P(A) \ge 0$ for every $A \in \mathcal{A}$;
(ii) $P(\Omega) = 1$;
(iii) if $\{A_\lambda : \lambda \in \Lambda\}$ is a countable set of mutually exclusive events belonging to $\mathcal{A}$, then
$$P\Big(\bigcup_{\lambda\in\Lambda} A_\lambda\Big) = \sum_{\lambda\in\Lambda} P(A_\lambda).$$

Obviously the subjective interpretation has no difficulty in conforming with (i) and (ii); (iii) is slightly less obvious. Suppose we have two events A and B such that $A \cap B = \emptyset$. Consider a stake of £$p_A$M to win £M if A occurs, and a stake of £$p_B$M to win £M if B occurs. The total stake for bets on A or B occurring is £$p_A$M + £$p_B$M to win £M if A or B occurs. Thus we have £$(p_A + p_B)$M to win £M, and so $P(A \cup B) = P(A) + P(B)$.

9.1.2 Conditional probability

Define $p_B$, $p_{AB}$, $p_{A|B}$ such that £$p_B$M is the fair stake for £M if B occurs; £$p_{AB}$M is the fair stake for £M if A and B occur; and £$p_{A|B}$M is the fair stake for £M if A occurs given B has occurred, otherwise the bet is off.
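The excerpt breaks off just as the conditional-probability argument begins. The standard conclusion of this betting (coherence) argument, stated here as a plausible continuation rather than a quotation of the notes, is that fairness of the three stakes forces

$$p_{AB} = p_{A|B}\, p_B, \qquad \text{equivalently} \qquad P(A \mid B) = \frac{P(A \cap B)}{P(B)}.$$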
  • Lecture 8 (October 15): Bayes Estimators and Average Risk Optimality
STATS 300A: Theory of Statistics, Fall 2015. Lecture 8, October 15. Lecturer: Lester Mackey. Scribes: Hongseok Namkoong, Phan Minh Nguyen. Warning: these notes may contain factual and/or typographic errors.

8.1 Bayes Estimators and Average Risk Optimality

8.1.1 Setting

We discuss the average risk optimality of estimators within the framework of Bayesian decision problems. As with the general decision problem setting, the Bayesian setup considers a model $\mathcal{P} = \{P_\theta : \theta \in \Omega\}$ for our data $X$, a loss function $L(\theta, d)$, and risk $R(\theta, \delta)$. In the frequentist approach, the parameter $\theta$ was considered to be an unknown deterministic quantity. In the Bayesian paradigm, we consider a measure $\Lambda$ over the parameter space which we call a prior. Assuming this measure defines a probability distribution, we interpret the parameter $\theta$ as an outcome of the random variable $\Theta \sim \Lambda$. So, in this setup both $X$ and $\theta$ are random. Conditioning on $\Theta = \theta$, we assume the data is generated by the distribution $P_\theta$. Now, the optimality goal for our decision problem of estimating $g(\theta)$ is the minimization of the average risk
$$r(\Lambda, \delta) = E[L(\Theta, \delta(X))] = E[E[L(\Theta, \delta(X)) \mid X]].$$
An estimator $\delta$ which minimizes this average risk is a Bayes estimator and is sometimes referred to as being Bayes. Note that the average risk is an expectation over both the random variables $\Theta$ and $X$. Then by using the tower property, we showed last time that it suffices to find an estimator $\delta$ which minimizes the posterior risk $E[L(\Theta, \delta(X)) \mid X = x]$ for almost every $x$.
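To make the average-risk story concrete, here is a small Monte Carlo sketch (a toy setup chosen for this note, not from the lecture): with $\Theta \sim \mathrm{Beta}(2,2)$ and $X \mid \theta \sim \mathrm{Binomial}(n, \theta)$, the posterior mean, which minimizes the posterior risk at every $x$, attains a smaller average risk under squared error loss than the MLE.

```python
# Monte Carlo estimate of average (Bayes) risk: posterior mean vs. MLE.
import numpy as np

rng = np.random.default_rng(4)
n, reps = 10, 200_000

theta = rng.beta(2, 2, size=reps)      # draw Theta ~ Lambda
x = rng.binomial(n, theta)             # draw X | Theta = theta

mle = x / n
bayes = (x + 2) / (n + 4)              # posterior mean under Beta(2,2)

print("average risk, MLE:  ", np.mean((mle - theta) ** 2))
print("average risk, Bayes:", np.mean((bayes - theta) ** 2))
```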
  • Unbiasedness and Bayes Estimators
Unbiasedness and Bayes Estimators

Siamak Noorbaloochi (Center for Chronic Disease Outcomes Research, Minneapolis VA Medical Center) and Glen Meeden (School of Statistics, University of Minnesota; research supported in part by NSF Grant DMS 9971331)

A simple geometric representation of Bayes and unbiased rules for squared error loss is provided. Some orthogonality relationships between them and the functions they are estimating are proved. Bayes estimators are shown to behave asymptotically like unbiased estimators.

Key words: unbiasedness, Bayes estimators, squared error loss, consistency.

1 Introduction

Let $X$ be a random variable, possibly vector valued, with a family of possible probability distributions indexed by the parameter $\theta \in \Theta$. Suppose $\gamma$, some real-valued function defined on $\Theta$, is to be estimated using $X$. An estimator $\delta$ is said to be unbiased for $\gamma$ if $E_\theta\,\delta(X) = \gamma(\theta)$ for all $\theta \in \Theta$. Lehmann (1951) proposed a generalization of this notion of unbiasedness which takes into account the loss function for the problem. Noorbaloochi and Meeden (1983) proposed a generalization of Lehmann's definition, but one which depends on a prior distribution $\pi$ for $\theta$. Assuming squared error loss, let the Bayes risk of an estimator $\delta$ for estimating $\gamma$ be denoted by $r(\delta, \gamma; \pi)$. Then under their definition, $\delta$ is unbiased for estimating $\gamma$ for the prior $\pi$ if
$$r(\delta, \gamma; \pi) = \inf_{\gamma_0} r(\delta, \gamma_0; \pi).$$
Under very weak assumptions it is easy to see that this definition reduces to the usual one.
  • Bayesian Cluster Enumeration Criterion for Unsupervised Learning
Bayesian Cluster Enumeration Criterion for Unsupervised Learning

Freweyni K. Teklehaymanot, Student Member, IEEE, Michael Muma, Member, IEEE, and Abdelhak M. Zoubir, Fellow, IEEE

Abstract: We derive a new Bayesian Information Criterion (BIC) by formulating the problem of estimating the number of clusters in an observed data set as maximization of the posterior probability of the candidate models. Given that some mild assumptions are satisfied, we provide a general BIC expression for a broad class of data distributions. This serves as a starting point when deriving the BIC for specific distributions. Along this line, we provide a closed-form BIC expression for multivariate Gaussian distributed variables. We show that incorporating the data structure of the clustering problem into the derivation of the BIC results in an expression whose penalty term is different from that of the original BIC. We propose a two-step cluster enumeration algorithm. First, a model-based unsupervised learning algorithm partitions the data according to a given set of candidate models.

The BIC was originally derived by Schwarz in [8] assuming that (i) the observations are independent and identically distributed (iid), (ii) they arise from an exponential family of distributions, and (iii) the candidate models are linear in parameters. Ignoring these rather restrictive assumptions, the BIC has been used in a much larger scope of model selection problems. A justification of the widespread applicability of the BIC was provided in [16] by generalizing Schwarz's derivation. In [16], the authors drop the first two assumptions made by Schwarz given that some regularity conditions are satisfied. The BIC is a generic criterion in the sense that it does not incorporate information regarding the specific ...
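As a concrete companion to the cluster-enumeration idea, the sketch below uses the classical BIC as implemented in scikit-learn's `GaussianMixture` (not the modified penalty derived in the paper) to choose the number of clusters: fit each candidate model and keep the minimizer.

```python
# BIC-based cluster enumeration with the classical criterion:
# fit candidate Gaussian mixtures, pick K minimizing BIC.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(100, 2))
               for c in (-3.0, 0.0, 3.0)])   # three well-separated clusters

candidates = range(1, 7)
bics = [GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
        for k in candidates]
best_k = list(candidates)[int(np.argmin(bics))]
print(dict(zip(candidates, np.round(bics, 1))), "-> chosen K:", best_k)
```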
  • Comparison Between Bayesian and Maximum Likelihood Estimation of Scale Parameter in Weibull Distribution with Known Shape Under Linex Loss Function
Journal of Scientific Research, Vol. 55, 2011: 163-172. Banaras Hindu University, Varanasi. ISSN: 0447-9483.

COMPARISON BETWEEN BAYESIAN AND MAXIMUM LIKELIHOOD ESTIMATION OF SCALE PARAMETER IN WEIBULL DISTRIBUTION WITH KNOWN SHAPE UNDER LINEX LOSS FUNCTION

B. N. Pandey, Nidhi Dwivedi and Pulastya Bandyopadhyay (Department of Statistics, Banaras Hindu University, Varanasi-221005; [email protected])

Abstract: The Weibull distribution is widely employed in modeling and analyzing lifetime data. The present paper considers the estimation of the scale parameter of the two-parameter Weibull distribution with known shape. Maximum likelihood estimation is discussed. The Bayes estimator is obtained using Jeffreys' prior under the LINEX loss function. The relative efficiencies of the estimators are calculated in small and large samples, for over-estimation and under-estimation, using simulated data sets. It is observed that the Bayes estimator fares better, especially for small sample sizes and when over-estimation is more critical than under-estimation.

INTRODUCTION

The Weibull distribution is one of the most widely used distributions for analyzing lifetime data. It is found to be useful in diverse fields ranging from engineering to medical sciences (see Lawless [4], Martz and Waller [6]). The Weibull family is a generalization of the exponential family and can model data exhibiting monotone hazard rate behavior, i.e. it can accommodate three types of failure rates, namely increasing, decreasing and constant. The probability density function of the Weibull distribution is given by
$$f(x \mid \alpha, \beta) = \frac{\beta}{\alpha^{\beta}}\, x^{\beta-1} \exp\!\left[-\left(\frac{x}{\alpha}\right)^{\beta}\right], \qquad x \ge 0,\; \alpha, \beta > 0, \tag{1}$$
where the parameter $\beta$ determines the shape of the distribution and $\alpha$ is the scale parameter.
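To illustrate the comparison the abstract describes, here is a hedged sketch under simplifications of my own (it is not the paper's computation): with known shape $\beta$ and Jeffreys' prior $\pi(\alpha) \propto 1/\alpha$, the posterior of $\eta = \alpha^{-\beta}$ is $\mathrm{Gamma}(n, \sum x_i^\beta)$, so the Bayes estimator under LINEX loss, $d^* = -\tfrac{1}{a}\log E[e^{-a\alpha} \mid x]$, can be evaluated by Monte Carlo and set against the closed-form MLE $(\sum x_i^\beta / n)^{1/\beta}$.

```python
# Toy MLE vs. LINEX-Bayes comparison for the Weibull scale, known shape.
# Assumptions (mine, not the paper's): Jeffreys prior on alpha gives
# eta = alpha^(-beta) | x ~ Gamma(n, rate = sum(x_i^beta)); the LINEX
# Bayes estimator is d* = -(1/a) * log E[exp(-a*alpha) | x].
import numpy as np

rng = np.random.default_rng(6)
beta_shape, alpha_true, n, a = 2.0, 1.5, 15, 1.0
x = alpha_true * rng.weibull(beta_shape, size=n)

t = np.sum(x ** beta_shape)
alpha_mle = (t / n) ** (1 / beta_shape)

eta = rng.gamma(shape=n, scale=1 / t, size=100_000)   # posterior draws
alpha_draws = eta ** (-1 / beta_shape)
alpha_linex = -np.log(np.mean(np.exp(-a * alpha_draws))) / a

print(f"MLE: {alpha_mle:.3f}, LINEX Bayes: {alpha_linex:.3f}, truth: {alpha_true}")
```

For $a > 0$ the LINEX loss penalizes over-estimation more heavily, which pulls the Bayes estimate below the posterior mean; this asymmetry is the setting in which the paper reports the Bayes estimator doing best.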
  • Bayes Estimators
Bayes estimators

Given $\theta$, let $X_1, \dots, X_n$ be iid $\sim f(x \mid \theta)$, with prior $g(\theta)$. The joint distribution is $f(x_1, \dots, x_n \mid \theta)\,g(\theta)$, and the posterior is $g(\theta \mid x_1, \dots, x_n)$. For an estimator $T = h(X_1, \dots, X_n)$ with loss $L(t, \theta)$, the risk is $R(T, \theta) = E_\theta\,L(T, \theta)$, and the Bayes risk of $T$ is
$$R(T, g) = \int_\Theta R(T, \theta)\,g(\theta)\,d\theta.$$
$T$ is a Bayes estimator if $R(T, g) = \inf_T R(T, g)$. To minimize the Bayes risk, we only need to minimize the conditional expected loss given each observed $x$.

Binomial model: $X \sim \mathrm{bin}(x \mid n, \theta)$ with prior $g(\theta) \sim \mathrm{unif}(0, 1)$. The posterior is $g(\theta \mid x) \sim \mathrm{beta}(x + 1, n - x + 1)$. Beta distribution:
$$f(x) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\, x^{\alpha-1}(1-x)^{\beta-1}, \quad 0 \le x \le 1, \qquad E(X) = \frac{\alpha}{\alpha+\beta}.$$
For squared error loss, the Bayes estimator is the posterior mean $\frac{x+1}{n+2}$. (If $X$ is a random variable and $c$ is chosen to minimize $E(X - c)^2$, then $c = E(X)$.)

Prior distributions: where do prior distributions come from? Prior knowledge about $\theta$; a population interpretation (a population of possible $\theta$ values); mathematical convenience (conjugate priors). A conjugate family is one in which the prior and the posterior distribution are in the same parametric family. Advantages: mathematically convenient; easy to interpret; can provide good approximations to many prior opinions (especially if we allow mixtures of distributions from the conjugate family). Disadvantage: may not be realistic.

Binomial model with a conjugate prior: $X \mid \theta \sim \mathrm{bin}(n, \theta)$, $\theta \sim \mathrm{beta}(\alpha, \beta)$, with the Beta density as above.
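A small numerical cross-check of the claim above (toy code, not from the notes): minimizing the posterior expected squared error over a grid of candidate estimates recovers the posterior mean $(x+1)/(n+2)$.

```python
# Grid search over candidate estimates c for posterior expected squared
# loss E[(theta - c)^2 | x]; the minimizer matches the posterior mean.
import numpy as np
from scipy import stats

n, x = 20, 14
posterior = stats.beta(x + 1, n - x + 1)   # posterior under unif(0,1) prior

grid = np.linspace(0.01, 0.99, 981)
theta_draws = posterior.rvs(size=200_000, random_state=0)
expected_loss = [np.mean((theta_draws - c) ** 2) for c in grid]

print("grid minimizer:", grid[int(np.argmin(expected_loss))])
print("posterior mean (x+1)/(n+2):", (x + 1) / (n + 2))
```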
  • Chapter 7: Estimation
Chapter 7: Estimation

Sections: 7.1 Statistical Inference. Bayesian methods: 7.2 Prior and Posterior Distributions, 7.3 Conjugate Prior Distributions, 7.4 Bayes Estimators. Frequentist methods: 7.5 Maximum Likelihood Estimators, 7.6 Properties of Maximum Likelihood Estimators (skipping pp. 434-441, the EM algorithm and sampling plans), 7.7 Sufficient Statistics. Sections 7.8 (Jointly Sufficient Statistics) and 7.9 (Improving an Estimator) are skipped.

7.1 Statistical Inference

We have seen statistical models in the form of probability distributions: $f(x \mid \theta)$. In this section the general notation for any parameter will be $\theta$, and the parameter space will be denoted by $\Omega$. For example: the lifetime of a Christmas light series follows the Expo($\theta$) distribution; the average of 63 poured drinks is approximately normal with mean $\theta$; the number of people that have a disease out of a group of $N$ people follows the Binomial($N, \theta$) distribution. In practice the value of the parameter $\theta$ is unknown.

Statistical inference: given the data we have observed, what can we say about $\theta$? That is, we observe random variables $X_1, \dots, X_n$ that we assume follow our statistical model, and then we want to draw probabilistic conclusions about the parameter $\theta$. For example: if I tested 5 Christmas light series from the same manufacturer and they lasted for 21, 103, 76, 88 and 96 days, and assuming that the lifetimes are independent and follow Expo($\theta$), what does this data set tell me about the failure rate $\theta$?

Another example: say I take a random sample of 100 people and test them all for a disease.
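The Christmas-light example invites a quick computation. The sketch below contrasts the MLE of the Expo($\theta$) failure rate with a conjugate Gamma posterior update on the same five lifetimes; the Gamma hyperparameters are a hypothetical choice of mine, not from the chapter.

```python
# Christmas-light lifetimes: MLE and a conjugate Gamma-Exponential update.
# For X_i ~ Expo(theta) with prior theta ~ Gamma(a, rate b), the posterior
# is Gamma(a + n, rate b + sum(x_i)).
import numpy as np
from scipy import stats

x = np.array([21, 103, 76, 88, 96])    # lifetimes in days, from the excerpt
n, total = len(x), x.sum()

theta_mle = n / total                   # MLE of the Expo(theta) rate

a, b = 1.0, 20.0                        # hypothetical prior hyperparameters
posterior = stats.gamma(a + n, scale=1 / (b + total))

print(f"MLE failure rate: {theta_mle:.4f} per day")
print(f"posterior mean: {posterior.mean():.4f}, "
      f"95% credible interval: {posterior.ppf([0.025, 0.975]).round(4)}")
```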