Bayes Estimator for Coefficient of Variation and Inverse Coefficient of Variation for the Normal Distribution
International Journal of Statistics and Systems, ISSN 0973-2675, Volume 12, Number 4 (2017), pp. 721-732
© Research India Publications, http://www.ripublication.com

Aruna Kalkur T., St. Aloysius College (Autonomous), Mangaluru, Karnataka, India
Aruna Rao, Professor (Retd.), Mangalore University, Karnataka, India

Abstract

In this paper, Bayes estimators of the Coefficient of Variation (CV) and the Inverse Coefficient of Variation (ICV) of a normal distribution are derived using five objective priors, namely the Left invariant prior, the Right invariant prior, Jeffrey's Rule prior, the Uniform prior and the Probability matching prior. The Mean Square Error (MSE) of these estimators is derived to the order of $O(1/n)$, and the expressions are new. Numerical results indicate that the Maximum Likelihood Estimators (MLEs) of CV and ICV have smaller MSE than the Bayes estimators. It is further shown that, within a certain class of estimators, the MLEs of CV and ICV are not admissible.

Key words: Bayes estimators, Objective priors, Mean Square Error, Admissibility.

1. Introduction

Estimation and confidence intervals for the Coefficient of Variation (CV) of a normal distribution are well addressed in the literature. The original work dates back to 1932, when McKay derived a confidence interval for the CV of a normal distribution using a transformed CV (see also the references cited in that paper). Singh (1983) emphasized that the Inverse Sample Coefficient of Variation (ISCV) can be used to derive a confidence interval for the CV of a distribution, and derived the first four moments of the ISCV. Sharma and Krishnan (1994) developed the asymptotic distribution of the ISCV without making any assumption about the population distribution. Although many papers have appeared on estimation and confidence intervals for the CV of a normal distribution, they are derived using the classical theory of inference. In recent years, Bayesian inference has been widely used in scientific investigation. When objective priors are used, the Bayes estimator performs well compared to the classical estimator in terms of mean square error. Few papers have appeared in the past on Bayes estimation of the CV of the normal distribution.

In this paper we discuss Bayes estimators of the CV and ICV of the normal distribution under the squared error loss function. We use objective priors so that the bias and Mean Square Error (MSE) of these estimators can be compared with those of the maximum likelihood estimators (MLEs). The objective priors used are the Right invariant prior, the Left invariant prior, Jeffrey's Rule prior, the Uniform prior and the Probability matching prior. Harvey and van der Merwe (2012) make a distinction between Jeffrey's prior and Jeffrey's Rule prior: both are proportional to the square root of the determinant of the Fisher information matrix, but in Jeffrey's prior $\sigma$ is treated as the parameter, whereas in Jeffrey's Rule prior $\sigma^2$ is treated as the parameter.
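A standard calculation makes the distinction concrete. For a single observation from $N(\mu, \sigma^2)$, the per-observation Fisher information matrix in the two parametrizations is

$$I(\mu,\sigma)=\begin{pmatrix}1/\sigma^{2} & 0\\ 0 & 2/\sigma^{2}\end{pmatrix}, \qquad \sqrt{\det I(\mu,\sigma)}=\frac{\sqrt{2}}{\sigma^{2}}\;\propto\;\frac{1}{\sigma^{2}},$$

$$I(\mu,\sigma^{2})=\begin{pmatrix}1/\sigma^{2} & 0\\ 0 & 1/(2\sigma^{4})\end{pmatrix}, \qquad \sqrt{\det I(\mu,\sigma^{2})}=\frac{1}{\sqrt{2}\,\sigma^{3}}\;\propto\;\frac{1}{\sigma^{3}}.$$

Thus Jeffrey's prior (with $\sigma$ as the parameter) is proportional to $1/\sigma^2$, and Jeffrey's Rule prior (with $\sigma^2$ as the parameter) is proportional to $1/\sigma^3$; up to constants these are the priors (2) and (3) listed in Section 2 below.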
The comparison between Bayes estimators and maximum likelihood and other classical estimators has been the focal point of investigation in many research papers. Some relevant references are the following. Pereira and Stern (2000) derived a full Bayesian test for the CV of the normal distribution. Kang and Kim (2003) proposed a Bayesian test procedure for the equality of the CVs of two normal distributions using the fractional Bayes factor. Pang et al. (2005) derived Bayesian credible intervals for the CVs of the three-parameter lognormal and Weibull distributions. D'Cunha and Rao (2014) compared the Bayesian credible interval with the confidence interval based on the MLE for the lognormal distribution.

This paper is divided into six sections. Section 2 presents the expressions for the Bayes estimators of CV under the five objective priors mentioned above; the bias and MSE of these estimators, to the order of $O(1)$ and $O(1/n)$ respectively, are also derived. Numerical values of the bias and MSE of the Bayes estimators of CV, along with those of the MLE, are presented in Section 3. Section 4 presents the expressions for the Bayes estimators of ICV, along with their bias and MSE to the order of $O(1)$ and $O(1/n)$ respectively. Numerical values of the bias and MSE of the Bayes estimators of ICV, along with those of the MLE, are presented in Section 5. The paper concludes in Section 6.

2. Bayes Estimators of Coefficient of Variation

Let $x_1, \ldots, x_n$ be i.i.d. $N(\mu, \sigma^2)$. The maximum likelihood estimator of the CV of the normal distribution is given by $\hat{\theta} = \hat{\sigma}/\hat{\mu}$, where $\hat{\mu} = \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$ and $\hat{\sigma}^2 = s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2$ denote the sample mean and the sample variance respectively.

Five priors are used for the estimation of CV and ICV. Let $\pi(\mu, \sigma)$ denote the prior density for $\mu$ and $\sigma$. The prior densities are the following.

a) Right invariant Jeffrey's prior:
$\pi(\mu, \sigma) = \frac{1}{\sigma}$    (1)

b) Left invariant prior:
$\pi(\mu, \sigma) = \frac{1}{\sigma^2}$    (2)

c) Jeffrey's Rule prior:
$\pi(\mu, \sigma) = \frac{1}{\sigma^3}$    (3)

d) Uniform prior:
$\pi(\mu, \sigma) = 1$    (4)

e) Probability matching prior:
$\pi(\mu, \sigma) = \sigma^2$    (5)

Since $\bar{x}$ and $s^2$ are independently distributed, after some simplification the posterior density of $\frac{1}{\sigma^2}$ is obtained as Gamma$\left(\frac{n+2}{2}, \frac{1}{2}(n-1)s^2\right)$ under the Right invariant prior, Gamma$\left(\frac{n+3}{2}, \frac{1}{2}(n-1)s^2\right)$ under the Left invariant prior, Gamma$\left(\frac{n+4}{2}, \frac{1}{2}(n-1)s^2\right)$ under Jeffrey's Rule prior, Gamma$\left(\frac{n+1}{2}, \frac{1}{2}(n-1)s^2\right)$ under the Uniform prior, and Gamma$\left(\frac{n}{2}, \frac{1}{2}(n-1)s^2\right)$ under the Probability matching prior. The Bayes estimator of CV is given by $E\left(\frac{\sigma}{\mu}\right)$, where the expectation is taken with respect to the posterior density $\pi(\mu, \sigma \mid \text{data})$.
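The following is a minimal simulation sketch (illustrative only; the data values are hypothetical and not from the paper) of how these posterior expectations can be approximated by Monte Carlo: $1/\sigma^2$ is drawn from the stated Gamma posterior, $\mu$ given $\sigma$ is drawn from $N(\bar{x}, \sigma^2/n)$ as in the joint posteriors displayed below, and $\sigma/\mu$ is averaged. The average is a numerical stand-in for $E(\sigma/\mu)$ and is meaningful only when $\bar{x}$ is well away from zero.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example data (values chosen for illustration only).
n, mu_true, sigma_true = 30, 10.0, 2.0
x = rng.normal(mu_true, sigma_true, size=n)
xbar, s2 = x.mean(), x.var(ddof=1)          # sample mean and s^2 with divisor n-1

# Posterior shape of 1/sigma^2 under each prior, as stated in Section 2;
# the rate parameter (n-1)s^2/2 is common to all five priors.
shapes = {
    "Right invariant":       (n + 2) / 2,
    "Left invariant":        (n + 3) / 2,
    "Jeffrey's Rule":        (n + 4) / 2,
    "Uniform":               (n + 1) / 2,
    "Probability matching":  n / 2,
}
rate = (n - 1) * s2 / 2

B = 200_000
for name, shape in shapes.items():
    precision = rng.gamma(shape, scale=1.0 / rate, size=B)  # draws of 1/sigma^2
    sigma = 1.0 / np.sqrt(precision)
    mu = rng.normal(xbar, sigma / np.sqrt(n))               # mu | sigma, data
    print(f"{name:22s} Monte Carlo Bayes estimate of CV: {np.mean(sigma / mu):.4f}")

print(f"{'MLE (s/xbar)':22s} {np.sqrt(s2) / xbar:.4f}")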
The joint posterior densities $\pi(\mu, \sigma \mid x_1, \ldots, x_n)$ are given by the following expressions. Under the Right invariant prior,

$$\pi(\mu, \sigma \mid x_1, \ldots, x_n) = \frac{1}{\sqrt{2\pi}\,\sigma/\sqrt{n}}\, e^{-\frac{1}{2}\frac{(\bar{x}-\mu)^2}{\sigma^2/n}}\; \frac{\left(\frac{(n-1)s^2}{2}\right)^{\frac{n+2}{2}}}{\Gamma\left(\frac{n+2}{2}\right)} \left(\frac{1}{\sigma^2}\right)^{\frac{n+2}{2}-1} e^{-\frac{1}{2}\frac{(n-1)s^2}{\sigma^2}}, \quad -\infty < \mu < \infty,\ \sigma > 0.    (6)$$

The posterior densities under the Left invariant prior (7), Jeffrey's Rule prior (8), Uniform prior (9) and Probability matching prior (10) have the same form, with $\frac{n+2}{2}$ in (6) replaced by $\frac{n+3}{2}$, $\frac{n+4}{2}$, $\frac{n+1}{2}$ and $\frac{n}{2}$ respectively.

The Bayes estimator of CV is $E(\theta \mid x_1, \ldots, x_n)$, where the expectation is taken with respect to the posterior density of $\mu$ and $\sigma$. For the Right invariant prior it is given by

$$\hat{\theta}_R = \iint \frac{\sigma}{\mu}\, \frac{1}{\sqrt{2\pi}\,\sigma/\sqrt{n}}\, e^{-\frac{1}{2}\frac{(\bar{x}-\mu)^2}{\sigma^2/n}}\; \frac{\left(\frac{(n-1)s^2}{2}\right)^{\frac{n+2}{2}}}{\Gamma\left(\frac{n+2}{2}\right)} \left(\frac{1}{\sigma^2}\right)^{\frac{n+2}{2}-1} e^{-\frac{1}{2}\frac{(n-1)s^2}{\sigma^2}} \, d\mu\, d\sigma    (11)$$

$$= \frac{\Gamma\left(\frac{n+2}{2}\right)\left(\frac{n-1}{2}\right)^{1/2}\left(\frac{s}{\bar{x}}\right)}{\Gamma\left(\frac{n+3}{2}\right)} = a_n \hat{\theta}, \quad \text{where } a_n = \frac{\Gamma\left(\frac{n+2}{2}\right)\left(\frac{n-1}{2}\right)^{1/2}}{\Gamma\left(\frac{n+3}{2}\right)} \text{ and } \hat{\theta} = \frac{s}{\bar{x}}.    (12)$$

The following theorem gives the Bayes estimators.

Theorem 1: The Bayes estimators of CV corresponding to the Right invariant, Left invariant, Jeffrey's Rule, Uniform and Probability matching priors respectively are the following:

$$\hat{\theta}_R = \frac{\Gamma\left(\frac{n+2}{2}\right)\left(\frac{n-1}{2}\right)^{1/2}\left(\frac{s}{\bar{x}}\right)}{\Gamma\left(\frac{n+3}{2}\right)}    (13)$$

$$\hat{\theta}_L = \frac{\Gamma\left(\frac{n+1}{2}\right)\left(\frac{n-1}{2}\right)^{1/2}\left(\frac{s}{\bar{x}}\right)}{\Gamma\left(\frac{n+2}{2}\right)}    (14)$$

$$\hat{\theta}_{JR} = \frac{\Gamma\left(\frac{n+3}{2}\right)\left(\frac{n-1}{2}\right)^{1/2}\left(\frac{s}{\bar{x}}\right)}{\Gamma\left(\frac{n+4}{2}\right)}    (15)$$

$$\hat{\theta}_U = \frac{\Gamma\left(\frac{n}{2}\right)\left(\frac{n-1}{2}\right)^{1/2}\left(\frac{s}{\bar{x}}\right)}{\Gamma\left(\frac{n+1}{2}\right)}    (16)$$

$$\hat{\theta}_{PM} = \frac{\Gamma\left(\frac{n-1}{2}\right)\left(\frac{n-1}{2}\right)^{1/2}\left(\frac{s}{\bar{x}}\right)}{\Gamma\left(\frac{n}{2}\right)}    (17)$$

Proof: Straightforward and omitted.
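Each estimator in (13)-(17) is a gamma-function ratio times the MLE $s/\bar{x}$. A minimal sketch of evaluating them numerically (with hypothetical sample summaries; log-gamma is used so that large $n$ does not overflow) is the following.

import numpy as np
from scipy.special import gammaln

def bayes_cv(s, xbar, n, a, b):
    """Evaluate Gamma(a) * ((n-1)/2)^(1/2) * (s/xbar) / Gamma(b) via log-gamma."""
    return np.exp(gammaln(a) - gammaln(b)) * np.sqrt((n - 1) / 2) * (s / xbar)

# (numerator, denominator) gamma arguments taken from equations (13)-(17).
gamma_args = {
    "Right invariant (13)":      lambda n: ((n + 2) / 2, (n + 3) / 2),
    "Left invariant (14)":       lambda n: ((n + 1) / 2, (n + 2) / 2),
    "Jeffrey's Rule (15)":       lambda n: ((n + 3) / 2, (n + 4) / 2),
    "Uniform (16)":              lambda n: (n / 2, (n + 1) / 2),
    "Probability matching (17)": lambda n: ((n - 1) / 2, n / 2),
}

n, s, xbar = 30, 2.1, 9.8   # hypothetical sample summaries, for illustration only
for name, args in gamma_args.items():
    a, b = args(n)
    print(f"{name:27s} theta_hat = {bayes_cv(s, xbar, n, a, b):.4f}")
print(f"{'MLE s/xbar':27s} theta_hat = {s / xbar:.4f}")

Since $\Gamma(a)/\Gamma(a + 1/2) \approx a^{-1/2}$ for large $a$, each multiplying constant behaves like $\left(\frac{n-1}{2}\right)^{1/2} a^{-1/2} \to 1$, so all five Bayes estimators converge to the MLE as $n \to \infty$.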