
International Journal of Statistics and Systems, ISSN 0973-2675, Volume 12, Number 4 (2017), pp. 721-732. © Research India Publications. http://www.ripublication.com

Bayes Estimator for Coefficient of Variation and Inverse Coefficient of Variation for the Normal Distribution

Aruna Kalkur T., St. Aloysius College (Autonomous), Mangaluru, Karnataka, India.

Aruna Rao, Professor (Retd.), Mangalore University, Karnataka, India.

Abstract

In this paper Bayes estimators of the Coefficient of Variation (CV) and the Inverse Coefficient of Variation (ICV) of a normal distribution are derived using five objective priors, namely the Left invariant, Right invariant, Jeffrey's Rule, Uniform and Probability matching priors. The Mean Square Error (MSE) of these estimators is derived to the order of $O(1/n)$, and the expressions are new. Numerical results indicate that the Maximum Likelihood Estimators (MLE) of CV and ICV have smaller MSE compared to the Bayes estimators. It is further shown that, within a certain class of estimators, the MLEs of CV and ICV are not admissible.

Key words: Bayes estimators, Objective priors, Mean Square Error, Admissibility.

1. Introduction

Estimation and inference for the Coefficient of Variation (CV) of a normal distribution are well addressed in the literature. The original work dates back to 1932, when McKay derived a confidence interval for the CV of a normal distribution using a transformed CV (also see the references cited in this paper). It was Singh (1993) who emphasized that the Inverse Sample Coefficient of Variation (ISCV) can be used to derive a confidence interval for the CV of a distribution; he derived the first four moments of the ISCV. Sharma and Krishna (1994) developed the asymptotic distribution of the ISCV without making an assumption about the population distribution. Although many papers have appeared on the estimation of, and confidence intervals for, the CV of a normal distribution, they are derived using the classical theory of inference. In recent years Bayesian inference is widely used in scientific investigation, and when objective priors are used the Bayes estimator often performs well compared to the classical estimator in terms of mean square error. Not many papers have appeared in the past regarding Bayes estimators of the CV of the normal distribution.

In this paper we discuss Bayes estimators for the CV and ICV of the normal distribution under the squared error loss function. We have used objective priors so that the bias and Mean Square Error (MSE) of these estimators can be compared with the maximum likelihood estimators (MLEs). The objective priors used are the Right invariant, Left invariant, Jeffrey's Rule, Uniform and Probability matching priors. Harvey and Van der Merwe (2012) make a distinction between Jeffrey's prior and Jeffrey's Rule prior. Both are proportional to the square root of the determinant of the Fisher information matrix; the distinction is that in Jeffrey's prior σ is treated as the parameter, whereas in Jeffrey's Rule prior σ² is treated as the parameter.

Comparisons between Bayes estimators, maximum likelihood estimators and other estimators of classical inference have been the focal point of investigation in many research papers. Some of the relevant references are the following. Pereira and Stern (2000) derived the Full Bayesian Significance Test for the CV of the normal distribution. Kang and Kim (2003) proposed a Bayesian test procedure for the equality of the CVs of two normal distributions using fractional Bayes factors. Pang et al. (2003) derived Bayesian credible intervals for the CVs of three-parameter Lognormal and Weibull distributions. D'Cunha and Rao (2014) compared the Bayesian credible interval with the confidence interval based on the MLE for the lognormal distribution.

This paper is divided into 6 sections. Section 2 presents the expressions for the Bayes estimators of CV using the five objective priors mentioned previously; the bias and MSE of these estimators, to the order of $O(1)$ and $O(1/n)$ respectively, are also derived. Numerical values of the bias and MSE of the Bayes estimators of CV, along with those of the MLE, are presented in Section 3. In Section 4 we present the expressions for the Bayes estimators of ICV, along with their bias and MSE to the order of $O(1)$ and $O(1/n)$ respectively.

Numerical values of the bias and MSE of the Bayes estimators of ICV, along with those of the MLE, are presented in Section 5. The paper concludes in Section 6.

2. Bayes Estimators of Coefficient of Variation

Let $x_1, \dots, x_n$ be i.i.d. $N(\mu, \sigma^2)$. The maximum likelihood estimator of the CV of the normal distribution is given by $\hat{\theta} = \hat{\sigma}/\hat{\mu}$, where $\hat{\mu} = \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$ and $\hat{\sigma}^2 = s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2$ denote the sample mean and sample variance of the normal distribution, respectively. Five priors are used for the estimation of CV and ICV. Let $\pi(\mu, \sigma)$ denote the prior density for $\mu$ and $\sigma$. The expressions for the prior densities are given below.

a) Right invariant Jeffrey's prior

$$\pi(\mu, \sigma) = \frac{1}{\sigma} \qquad (1)$$

b) Left invariant prior

$$\pi(\mu, \sigma) = \frac{1}{\sigma^2} \qquad (2)$$

c) Jeffrey's Rule prior

$$\pi(\mu, \sigma) = \frac{1}{\sigma^3} \qquad (3)$$

d) Uniform prior

$$\pi(\mu, \sigma) = 1 \qquad (4)$$

e) Probability matching prior

$$\pi(\mu, \sigma) = \sigma^2 \qquad (5)$$

Since $\bar{x}$ and $s^2$ are independently distributed, after some simplification we obtain the posterior distribution of $\frac{1}{\sigma^2}$ as Gamma$\left(\frac{n+2}{2}, \frac{1}{2}(n-1)s^2\right)$ under the right invariant prior, Gamma$\left(\frac{n+3}{2}, \frac{1}{2}(n-1)s^2\right)$ under the left invariant prior, Gamma$\left(\frac{n+4}{2}, \frac{1}{2}(n-1)s^2\right)$ under Jeffrey's Rule prior, Gamma$\left(\frac{n+1}{2}, \frac{1}{2}(n-1)s^2\right)$ under the Uniform prior, and Gamma$\left(\frac{n}{2}, \frac{1}{2}(n-1)s^2\right)$ under the Probability matching prior, where the second argument is the rate parameter. The Bayes estimator of CV is given by $E\left(\frac{\sigma}{\mu}\right)$, where the expectation is taken with respect to the posterior density $\pi(\mu, \sigma \mid \text{data})$.
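These posteriors are straightforward to sample. As an illustration, the sketch below (the function name is ours, not from the paper) approximates the Bayes estimate $E(\sigma/\mu \mid \text{data})$ by Monte Carlo: it draws $1/\sigma^2$ from the Gamma posterior above (rate parameterization, as stated) and then $\mu \mid \sigma$ from $N(\bar{x}, \sigma^2/n)$, which is the conditional posterior appearing in the densities (6)-(10) below.

```python
import numpy as np

def bayes_cv_mc(x, shape_offset=2, n_draws=200_000, seed=1):
    """Monte Carlo approximation of E(sigma/mu | data) for the CV.

    shape_offset selects the posterior shape (n + offset)/2: 2 for the
    right invariant prior, 3 left invariant, 4 Jeffrey's Rule, 1 uniform,
    0 probability matching (per the posteriors stated above).
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    xbar, s2 = x.mean(), x.var(ddof=1)
    # 1/sigma^2 | data ~ Gamma(shape=(n + offset)/2, rate=(n-1) s^2 / 2);
    # numpy's gamma sampler takes a scale argument, i.e. 1/rate.
    prec = rng.gamma(shape=(n + shape_offset) / 2,
                     scale=2.0 / ((n - 1) * s2), size=n_draws)
    sigma = 1.0 / np.sqrt(prec)
    # mu | sigma, data ~ N(xbar, sigma^2 / n)
    mu = rng.normal(loc=xbar, scale=sigma / np.sqrt(n))
    return np.mean(sigma / mu)

x = np.random.default_rng(0).normal(loc=10.0, scale=1.0, size=30)  # true CV = 0.1
print(bayes_cv_mc(x))  # should be close to the closed form a_n * s / xbar
```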

The posterior densities $\pi(\mu, \sigma \mid x_1, \dots, x_n)$ for the right invariant, left invariant, Jeffrey's Rule, Uniform and Probability matching priors, respectively, are given by the following expressions.

$$\pi(\mu, \sigma \mid x_1, \dots, x_n) = \frac{1}{\sqrt{2\pi}\,\sigma/\sqrt{n}}\, e^{-\frac{(\bar{x}-\mu)^2}{2\sigma^2/n}} \cdot \frac{\left(\frac{(n-1)s^2}{2}\right)^{\frac{n+2}{2}}}{\Gamma\left(\frac{n+2}{2}\right)} \left(\frac{1}{\sigma^2}\right)^{\frac{n+2}{2}-1} e^{-\frac{(n-1)s^2}{2\sigma^2}}, \quad -\infty < \mu < \infty,\ \sigma > 0 \qquad (6)$$

(using the Right invariant prior).

$$\pi(\mu, \sigma \mid x_1, \dots, x_n) = \frac{1}{\sqrt{2\pi}\,\sigma/\sqrt{n}}\, e^{-\frac{(\bar{x}-\mu)^2}{2\sigma^2/n}} \cdot \frac{\left(\frac{(n-1)s^2}{2}\right)^{\frac{n+3}{2}}}{\Gamma\left(\frac{n+3}{2}\right)} \left(\frac{1}{\sigma^2}\right)^{\frac{n+3}{2}-1} e^{-\frac{(n-1)s^2}{2\sigma^2}}, \quad -\infty < \mu < \infty,\ \sigma > 0 \qquad (7)$$

(using the Left invariant prior).

$$\pi(\mu, \sigma \mid x_1, \dots, x_n) = \frac{1}{\sqrt{2\pi}\,\sigma/\sqrt{n}}\, e^{-\frac{(\bar{x}-\mu)^2}{2\sigma^2/n}} \cdot \frac{\left(\frac{(n-1)s^2}{2}\right)^{\frac{n+4}{2}}}{\Gamma\left(\frac{n+4}{2}\right)} \left(\frac{1}{\sigma^2}\right)^{\frac{n+4}{2}-1} e^{-\frac{(n-1)s^2}{2\sigma^2}}, \quad -\infty < \mu < \infty,\ \sigma > 0 \qquad (8)$$

(using Jeffrey's Rule prior).

$$\pi(\mu, \sigma \mid x_1, \dots, x_n) = \frac{1}{\sqrt{2\pi}\,\sigma/\sqrt{n}}\, e^{-\frac{(\bar{x}-\mu)^2}{2\sigma^2/n}} \cdot \frac{\left(\frac{(n-1)s^2}{2}\right)^{\frac{n+1}{2}}}{\Gamma\left(\frac{n+1}{2}\right)} \left(\frac{1}{\sigma^2}\right)^{\frac{n+1}{2}-1} e^{-\frac{(n-1)s^2}{2\sigma^2}}, \quad -\infty < \mu < \infty,\ \sigma > 0 \qquad (9)$$

(using the Uniform prior).

$$\pi(\mu, \sigma \mid x_1, \dots, x_n) = \frac{1}{\sqrt{2\pi}\,\sigma/\sqrt{n}}\, e^{-\frac{(\bar{x}-\mu)^2}{2\sigma^2/n}} \cdot \frac{\left(\frac{(n-1)s^2}{2}\right)^{\frac{n}{2}}}{\Gamma\left(\frac{n}{2}\right)} \left(\frac{1}{\sigma^2}\right)^{\frac{n}{2}-1} e^{-\frac{(n-1)s^2}{2\sigma^2}}, \quad -\infty < \mu < \infty,\ \sigma > 0 \qquad (10)$$

(using the Probability matching prior).

The Bayes estimator of CV is 퐸(휃|푥1, … , 푥푛 ), where the expectation is taken with respect to the posterior density of 휇 and 휎. For the right invariant prior it is given by

$$\hat{\theta}_R = \iint \frac{\sigma}{\mu}\, \frac{1}{\sqrt{2\pi}\,\sigma/\sqrt{n}}\, e^{-\frac{(\bar{x}-\mu)^2}{2\sigma^2/n}}\, \frac{\left(\frac{(n-1)s^2}{2}\right)^{\frac{n+2}{2}}}{\Gamma\left(\frac{n+2}{2}\right)} \left(\frac{1}{\sigma^2}\right)^{\frac{n+2}{2}-1} e^{-\frac{(n-1)s^2}{2\sigma^2}}\, d\mu\, d\sigma \qquad (11)$$

$$= \frac{\Gamma\left(\frac{n+2}{2}\right)\left(\frac{n-1}{2}\right)^{1/2}}{\Gamma\left(\frac{n+3}{2}\right)} \left(\frac{s}{\bar{x}}\right)$$

$$= a_n \hat{\theta}, \quad \text{where } a_n = \frac{\Gamma\left(\frac{n+2}{2}\right)\left(\frac{n-1}{2}\right)^{1/2}}{\Gamma\left(\frac{n+3}{2}\right)} \text{ and } \hat{\theta} = \frac{s}{\bar{x}}. \qquad (12)$$

The following theorem gives the Bayes estimators.

Theorem 1: The Bayes estimators of CV corresponding to the Right Invariant prior, Left Invariant prior, Jeffrey's Rule prior, Uniform prior and Probability Matching prior are the following.

$$\hat{\theta}_R = \frac{\Gamma\left(\frac{n+2}{2}\right)\left(\frac{n-1}{2}\right)^{1/2}}{\Gamma\left(\frac{n+3}{2}\right)}\left(\frac{s}{\bar{x}}\right) \qquad (13)$$

$$\hat{\theta}_L = \frac{\Gamma\left(\frac{n+1}{2}\right)\left(\frac{n-1}{2}\right)^{1/2}}{\Gamma\left(\frac{n+2}{2}\right)}\left(\frac{s}{\bar{x}}\right) \qquad (14)$$

$$\hat{\theta}_{JR} = \frac{\Gamma\left(\frac{n+3}{2}\right)\left(\frac{n-1}{2}\right)^{1/2}}{\Gamma\left(\frac{n+4}{2}\right)}\left(\frac{s}{\bar{x}}\right) \qquad (15)$$

$$\hat{\theta}_U = \frac{\Gamma\left(\frac{n}{2}\right)\left(\frac{n-1}{2}\right)^{1/2}}{\Gamma\left(\frac{n+1}{2}\right)}\left(\frac{s}{\bar{x}}\right) \qquad (16)$$

and

$$\hat{\theta}_{PM} = \frac{\Gamma\left(\frac{n-1}{2}\right)\left(\frac{n-1}{2}\right)^{1/2}}{\Gamma\left(\frac{n}{2}\right)}\left(\frac{s}{\bar{x}}\right), \qquad (17)$$

respectively.

Proof: Straightforward and omitted.
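Each estimator in Theorem 1 is the ratio $s/\bar{x}$ scaled by a ratio of Gamma functions, so all five are one-liners to compute. A minimal sketch follows (the function and dictionary names are ours; log-gamma differences are used so the Gamma ratios stay numerically stable for moderate and large $n$):

```python
import numpy as np
from scipy.special import gammaln

# Offset k in Gamma((n+k)/2), read off equations (13)-(17): right invariant 2,
# left invariant 1, Jeffrey's Rule 3, uniform 0, probability matching -1.
K = {"right_invariant": 2, "left_invariant": 1, "jeffreys_rule": 3,
     "uniform": 0, "prob_matching": -1}

def a_n(n, k):
    # a_n = Gamma((n+k)/2) * sqrt((n-1)/2) / Gamma((n+k+1)/2)
    return np.exp(gammaln((n + k) / 2) - gammaln((n + k + 1) / 2)) * np.sqrt((n - 1) / 2)

def bayes_cv_estimators(x):
    n = len(x)
    theta_hat = x.std(ddof=1) / x.mean()  # s / xbar
    return {name: a_n(n, k) * theta_hat for name, k in K.items()}

x = np.random.default_rng(0).normal(10.0, 1.0, size=30)
print(bayes_cv_estimators(x))
```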


3. Bias and Mean Square Error of the Bayes Estimators of Coefficient of Variation

Theorem 2: Write each Bayes estimator of Theorem 1 as $a_n\hat{\theta}$, where $\hat{\theta} = s/\bar{x}$ and $a_n$ is the prior-specific constant below, and let $\theta = \sigma/\mu$. The bias and mean square error, to the order of $O(1)$ and $O(1/n)$ respectively, of the different estimators of CV are

$$\text{Bias} = (a_n - 1)\,\theta, \qquad \text{MSE} = \frac{\theta^4}{n} + \frac{\theta^2}{2n} + (a_n - 1)^2\theta^2,$$

with

| Prior | $a_n$ |
| 1. Right Invariant | $\Gamma\left(\frac{n+2}{2}\right)\left(\frac{n-1}{2}\right)^{1/2} / \Gamma\left(\frac{n+3}{2}\right)$ |
| 2. Left Invariant | $\Gamma\left(\frac{n+1}{2}\right)\left(\frac{n-1}{2}\right)^{1/2} / \Gamma\left(\frac{n+2}{2}\right)$ |
| 3. Jeffrey's Rule | $\Gamma\left(\frac{n+3}{2}\right)\left(\frac{n-1}{2}\right)^{1/2} / \Gamma\left(\frac{n+4}{2}\right)$ |
| 4. Uniform | $\Gamma\left(\frac{n}{2}\right)\left(\frac{n-1}{2}\right)^{1/2} / \Gamma\left(\frac{n+1}{2}\right)$ |
| 5. Probability Matching | $\Gamma\left(\frac{n-1}{2}\right)\left(\frac{n-1}{2}\right)^{1/2} / \Gamma\left(\frac{n}{2}\right)$ |
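Since bias and MSE share one functional form across priors, the theorem's expressions can be evaluated numerically in a few lines. A sketch (helper name ours), parameterized by the Gamma offset $k$ such that $a_n = \Gamma(\frac{n+k}{2})(\frac{n-1}{2})^{1/2}/\Gamma(\frac{n+k+1}{2})$:

```python
import numpy as np
from scipy.special import gammaln

def cv_bias_mse(theta, n, k):
    """Theorem 2 bias and MSE for the prior whose a_n uses Gamma((n+k)/2)."""
    a = np.exp(gammaln((n + k) / 2) - gammaln((n + k + 1) / 2)) * np.sqrt((n - 1) / 2)
    bias = (a - 1) * theta
    mse = theta**4 / n + theta**2 / (2 * n) + bias**2
    return a, bias, mse

# e.g. theta = 0.4, n = 10, k = 2 gives a ≈ 0.8842, bias ≈ -0.0463, MSE ≈ 0.0127
print(cv_bias_mse(0.4, 10, 2))
```

These values match entries appearing in Table 3.1 below.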

Proof: Given in the Appendix.

Consider the class of estimators $\{a\hat{\theta}\}$. This class includes the MLE $\hat{\theta}$ as well as the various Bayes estimators, for different choices of $a_n$. The following theorem gives the optimal estimator in this class.

Theorem 3: Among the class of estimators $\{a\hat{\theta}\}$ of the coefficient of variation $\theta$, where $\hat{\theta}$ denotes the maximum likelihood estimator of $\theta$, the estimator with minimum mean square error to the order of $O(1/n)$ is

$$\frac{\hat{\theta}}{\frac{1}{n}(\hat{\theta}^2 + 1) + 1}.$$

Proof: The expression for the MSE of $a\hat{\theta}$ is given by

$$\mathrm{MSE}(a\hat{\theta}) = a^2 V(\hat{\theta}) + \left(\mathrm{Bias}(a\hat{\theta})\right)^2 = a^2 V(\hat{\theta}) + \theta^2(a-1)^2.$$

Differentiating with respect to $a$ and equating to zero, we get

$$a = \frac{1}{\frac{1}{n}(\theta^2 + 1) + 1}. \qquad (18)$$

Substituting $\hat{\theta}$ for $\theta$ in $a$, we get the estimator $\frac{\hat{\theta}}{\frac{1}{n}(\hat{\theta}^2 + 1) + 1}$; this is the optimal estimator in the class $\{a\hat{\theta}\}$. As $n \to \infty$, $\frac{\hat{\theta}}{\frac{1}{n}(\hat{\theta}^2 + 1) + 1} \to \hat{\theta}$.
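In code, the Theorem 3 estimator is a one-line shrinkage of the MLE; a minimal sketch under the estimator definitions above (function name ours):

```python
import numpy as np

def optimal_cv_estimator(x):
    """Theorem 3 shrinkage estimator: theta_hat / ((theta_hat^2 + 1)/n + 1)."""
    n = len(x)
    theta_hat = x.std(ddof=1) / x.mean()
    return theta_hat / ((theta_hat**2 + 1) / n + 1)

x = np.random.default_rng(0).normal(10.0, 1.0, size=30)
print(optimal_cv_estimator(x))  # slightly below the raw s / xbar
```

The shrinkage factor is below 1 and tends to 1 as $n$ grows, so the correction matters most in small samples.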

Table 3.1 presents the numerical values of the bias and MSE of the estimators of CV. From the numerical results it follows that the maximum likelihood estimator has the smallest MSE compared to the Bayes estimators.

Table 3.1: Bias and Mean Square Error of the Bayes Estimators of CV

| Prior | $\theta$ | $A_n$ (n=10) | Bias (n=10) | MSE (n=10) | MSE($\hat{\theta}$) (n=10) | $A_n$ (n=30) | Bias (n=30) | MSE (n=30) | MSE($\hat{\theta}$) (n=30) |
| 1. Left Invariant | 0.1 | 0.8842 | -0.0116 | 6.4401×10⁻⁴ | 5.1×10⁻⁴ | 0.9594 | -0.0041 | 1.8645×10⁻⁴ | 1.70×10⁻⁴ |
| | 0.4 | 0.8842 | -0.0463 | 0.0127 | 0.0106 | 0.9594 | -0.0122 | 0.0037 | 0.0035 |
| | 0.7 | 0.8842 | -0.0810 | 0.0551 | 0.0485 | 0.9594 | -0.0284 | 0.0170 | 0.0162 |
| 2. Right Invariant | 0.1 | 0.9253 | -0.0075 | 5.6580×10⁻⁴ | 5.1×10⁻⁴ | 0.9750 | -0.0025 | 1.7623×10⁻⁴ | 1.70×10⁻⁴ |
| | 0.4 | 0.9253 | -0.0299 | 0.0115 | 0.0106 | 0.9750 | -0.0100 | 0.0036 | 0.0035 |
| | 0.7 | 0.9253 | -0.0523 | 0.0512 | 0.0485 | 0.9750 | -0.0175 | 0.0165 | 0.0162 |
| 3. Jeffrey's Rule | 0.1 | 0.8482 | -0.0152 | 7.4046×10⁻⁴ | 5.1×10⁻⁴ | 0.9446 | -0.0055 | 2.0073×10⁻⁴ | 1.70×10⁻⁴ |
| | 0.4 | 0.8482 | -0.0607 | 0.0142 | 0.0106 | 0.9446 | -0.0222 | 0.0040 | 0.0035 |
| | 0.7 | 0.8482 | -0.1063 | 0.0598 | 0.0485 | 0.9446 | -0.0388 | 0.0177 | 0.0162 |
| 4. Uniform | 0.1 | 0.9727 | -0.0027 | 5.1748×10⁻⁴ | 5.1×10⁻⁴ | 0.9914 | -8.58×10⁻⁴ | 1.7074×10⁻⁴ | 1.70×10⁻⁴ |
| | 0.4 | 0.9727 | -0.0109 | 0.0107 | 0.0106 | 0.9914 | -0.0034 | 0.0035 | 0.0035 |
| | 0.7 | 0.9727 | -0.0191 | 0.0489 | 0.0485 | 0.9914 | -0.0060 | 0.0162 | 0.0162 |
| 5. Probability Matching | 0.1 | 1.0281 | 0.0028 | 5.1790×10⁻⁴ | 5.1×10⁻⁴ | 1.0087 | 8.6562×10⁻⁴ | 1.7075×10⁻⁴ | 1.70×10⁻⁴ |
| | 0.4 | 1.0281 | 0.0112 | 0.0107 | 0.0106 | 1.0087 | 0.0035 | 0.0035 | 0.0035 |
| | 0.7 | 1.0281 | 0.0197 | 0.0489 | 0.0485 | 1.0087 | 0.0061 | 0.0162 | 0.0162 |
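The table entries can also be cross-checked by direct simulation. A sketch along these lines (function names ours) estimates the bias and MSE of any CV estimator at a given true $\theta$, here the MLE-type estimator $s/\bar{x}$ with $n = 10$ and $\mu = 1$:

```python
import numpy as np

def mc_bias_mse(estimator, theta, n, reps=200_000, seed=0):
    """Simulated bias and MSE of a CV estimator at true CV = theta (mu = 1)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(loc=1.0, scale=theta, size=(reps, n))
    est = estimator(x)
    return est.mean() - theta, np.mean((est - theta) ** 2)

# MLE-type estimator s / xbar applied row-wise to the simulated samples
mle = lambda x: x.std(axis=1, ddof=1) / x.mean(axis=1)
for theta in (0.1, 0.4, 0.7):
    print(theta, mc_bias_mse(mle, theta, n=10))
```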


4. Bayes Estimators of Inverse Coefficient of Variation

Let $x_1, \dots, x_n$ be i.i.d. $N(\mu, \sigma^2)$. The maximum likelihood estimator of the ICV of the normal distribution is given by $\hat{\theta}_1 = \hat{\mu}/\hat{\sigma}$.

The Bayes estimator of ICV is $E(\theta_1 \mid x_1, \dots, x_n)$, where the expectation is taken with respect to the posterior density of $\mu$ and $\sigma$. For the right invariant prior it is given by

$$\hat{\theta}_{1R} = \iint \frac{\mu}{\sigma}\, \frac{1}{\sqrt{2\pi}\,\sigma/\sqrt{n}}\, e^{-\frac{(\bar{x}-\mu)^2}{2\sigma^2/n}}\, \frac{\left(\frac{(n-1)s^2}{2}\right)^{\frac{n+2}{2}}}{\Gamma\left(\frac{n+2}{2}\right)} \left(\frac{1}{\sigma^2}\right)^{\frac{n+2}{2}-1} e^{-\frac{(n-1)s^2}{2\sigma^2}}\, d\mu\, d\sigma \qquad (19)$$

$$= \frac{\Gamma\left(\frac{n+3}{2}\right)}{\Gamma\left(\frac{n+2}{2}\right)\left(\frac{n-1}{2}\right)^{1/2}}\left(\frac{\bar{x}}{s}\right)$$

$$= b_n \hat{\theta}_1, \quad \text{where } b_n = \frac{\Gamma\left(\frac{n+3}{2}\right)}{\Gamma\left(\frac{n+2}{2}\right)\left(\frac{n-1}{2}\right)^{1/2}} \text{ and } \hat{\theta}_1 = \frac{\bar{x}}{s}.$$

Theorem 4: The Bayes estimators of ICV for different priors are given below. 1. Right invariant Jeffrey’s prior

$$\hat{\theta}_{1R} = \frac{\Gamma\left(\frac{n+3}{2}\right)}{\Gamma\left(\frac{n+2}{2}\right)\left(\frac{n-1}{2}\right)^{1/2}}\left(\frac{\bar{x}}{s}\right) \qquad (20)$$

2. Left Invariant Prior

$$\hat{\theta}_{1L} = \frac{\Gamma\left(\frac{n+2}{2}\right)}{\Gamma\left(\frac{n+1}{2}\right)\left(\frac{n-1}{2}\right)^{1/2}}\left(\frac{\bar{x}}{s}\right) \qquad (21)$$

3. Jeffrey’s Rule Prior

$$\hat{\theta}_{1JR} = \frac{\Gamma\left(\frac{n+4}{2}\right)}{\Gamma\left(\frac{n+3}{2}\right)\left(\frac{n-1}{2}\right)^{1/2}}\left(\frac{\bar{x}}{s}\right) \qquad (22)$$

4. Uniform Prior

$$\hat{\theta}_{1U} = \frac{\Gamma\left(\frac{n+1}{2}\right)}{\Gamma\left(\frac{n}{2}\right)\left(\frac{n-1}{2}\right)^{1/2}}\left(\frac{\bar{x}}{s}\right) \qquad (23)$$


5. Probability Matching Prior

$$\hat{\theta}_{1PM} = \frac{\Gamma\left(\frac{n}{2}\right)}{\Gamma\left(\frac{n-1}{2}\right)\left(\frac{n-1}{2}\right)^{1/2}}\left(\frac{\bar{x}}{s}\right) \qquad (24)$$

Proof: Straightforward and omitted.
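The $b_n$ constants of Theorem 4 mirror the $a_n$ of Theorem 1 with the Gamma ratio inverted, so the same offset mapping can be reused. A minimal sketch (names ours):

```python
import numpy as np
from scipy.special import gammaln

# Same prior-to-offset mapping as for the CV estimators; k follows (20)-(24).
K = {"right_invariant": 2, "left_invariant": 1, "jeffreys_rule": 3,
     "uniform": 0, "prob_matching": -1}

def b_n(n, k):
    # b_n = Gamma((n+k+1)/2) / (Gamma((n+k)/2) * sqrt((n-1)/2))
    return np.exp(gammaln((n + k + 1) / 2) - gammaln((n + k) / 2)) / np.sqrt((n - 1) / 2)

def bayes_icv_estimators(x):
    n = len(x)
    theta1_hat = x.mean() / x.std(ddof=1)  # xbar / s
    return {name: b_n(n, k) * theta1_hat for name, k in K.items()}

x = np.random.default_rng(0).normal(10.0, 1.0, size=30)  # true ICV = 10
print(bayes_icv_estimators(x))
```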

5. Bias and Mean Square Error of the Bayes Estimator of Inverse Coefficient of Variation

Theorem 5: Write each Bayes estimator of Theorem 4 as $b_n\hat{\theta}_1$, where $\hat{\theta}_1 = \bar{x}/s$ and $b_n$ is the prior-specific constant below, and let $\theta_1 = \mu/\sigma$. The bias and mean square error, to the order of $O(1)$ and $O(1/n)$ respectively, of the different estimators of ICV are

$$\text{Bias} = (b_n - 1)\,\theta_1, \qquad \text{MSE} = \frac{1}{n} + \frac{\theta_1^2}{2n} + (b_n - 1)^2\theta_1^2,$$

with

| Prior | $b_n$ |
| 1. Right Invariant | $\Gamma\left(\frac{n+3}{2}\right) / \left(\Gamma\left(\frac{n+2}{2}\right)\left(\frac{n-1}{2}\right)^{1/2}\right)$ |
| 2. Left Invariant | $\Gamma\left(\frac{n+2}{2}\right) / \left(\Gamma\left(\frac{n+1}{2}\right)\left(\frac{n-1}{2}\right)^{1/2}\right)$ |
| 3. Jeffrey's Rule | $\Gamma\left(\frac{n+4}{2}\right) / \left(\Gamma\left(\frac{n+3}{2}\right)\left(\frac{n-1}{2}\right)^{1/2}\right)$ |
| 4. Uniform | $\Gamma\left(\frac{n+1}{2}\right) / \left(\Gamma\left(\frac{n}{2}\right)\left(\frac{n-1}{2}\right)^{1/2}\right)$ |
| 5. Probability Matching | $\Gamma\left(\frac{n}{2}\right) / \left(\Gamma\left(\frac{n-1}{2}\right)\left(\frac{n-1}{2}\right)^{1/2}\right)$ |

Proof: Given in the Appendix.

Theorem 6: Among the class of estimators $\{b\hat{\theta}_1\}$ of the inverse coefficient of variation $\theta_1$, where $\hat{\theta}_1$ denotes the maximum likelihood estimator of $\theta_1$, the estimator with minimum mean square error to the order of $O(1/n)$ is

$$\frac{\hat{\theta}_1}{\frac{1}{n}\left(\frac{1}{\hat{\theta}_1^2} + \frac{1}{2}\right) + 1}.$$

Proof: The expression for the MSE of $b\hat{\theta}_1$ is given by

$$\mathrm{MSE}(b\hat{\theta}_1) = b^2 V(\hat{\theta}_1) + \theta_1^2(b-1)^2. \qquad (25)$$

Differentiating with respect to b and equating it to zero we get

$$b = \frac{\theta_1^2}{V(\hat{\theta}_1) + \theta_1^2} = \frac{1}{\frac{1}{n}\left(\frac{1}{\theta_1^2} + \frac{1}{2}\right) + 1}. \qquad (26)$$

Substituting $\hat{\theta}_1$ for $\theta_1$ in $b$, we get the estimator $\frac{\hat{\theta}_1}{\frac{1}{n}\left(\frac{1}{\hat{\theta}_1^2} + \frac{1}{2}\right) + 1}$, which is the optimal estimator in the class $\{b\hat{\theta}_1\}$. As $n \to \infty$, $b\hat{\theta}_1 \to \hat{\theta}_1$.
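As with the CV, the Theorem 6 estimator is a one-line shrinkage of the MLE; a sketch under the same assumptions as before (function name ours):

```python
import numpy as np

def optimal_icv_estimator(x):
    """Theorem 6 shrinkage estimator: t1 / ((1/t1^2 + 1/2)/n + 1), t1 = xbar/s."""
    n = len(x)
    t1 = x.mean() / x.std(ddof=1)
    return t1 / ((1.0 / t1**2 + 0.5) / n + 1.0)

x = np.random.default_rng(0).normal(10.0, 1.0, size=30)
print(optimal_icv_estimator(x))
```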

Table 5.1 presents the numerical values of the bias and MSE of the estimators of ICV. From the numerical results it follows that the maximum likelihood estimator has the smallest MSE compared to the Bayes estimators.

Table 5.1: Bias and Mean Square Error of the Bayes Estimators of ICV

| Prior | $\theta_1$ | $B_n$ (n=10) | Bias (n=10) | MSE (n=10) | MSE($\hat{\theta}_1$) (n=10) | $B_n$ (n=30) | Bias (n=30) | MSE (n=30) | MSE($\hat{\theta}_1$) (n=30) |
| 1. Left Invariant | 0.1 | 1.1309 | 0.0131 | 5.1002 | 5.1000 | 1.0423 | 0.0042 | 1.7000 | 1.7000 |
| | 0.4 | 1.1309 | 0.0524 | 0.4152 | 0.4125 | 1.0423 | 0.0169 | 0.1378 | 0.1375 |
| | 0.7 | 1.1309 | 0.0916 | 0.2104 | 0.2020 | 1.0423 | 0.0296 | 0.0682 | 0.0673 |
| 2. Right Invariant | 0.1 | 1.0807 | 0.0081 | 5.1001 | 5.1000 | 1.0256 | 0.0026 | 1.7000 | 1.7000 |
| | 0.4 | 1.0807 | 0.0323 | 0.4135 | 0.4125 | 1.0256 | 0.0102 | 0.1376 | 0.1375 |
| | 0.7 | 1.0807 | 0.0565 | 0.2052 | 0.2020 | 1.0256 | 0.0179 | 0.0677 | 0.0673 |
| 3. Jeffrey's Rule | 0.1 | 1.1790 | 0.0179 | 5.1003 | 5.1000 | 1.0587 | 0.0059 | 1.7000 | 1.7000 |
| | 0.4 | 1.1790 | 0.0716 | 0.4176 | 0.4125 | 1.0587 | 0.0235 | 0.1381 | 0.1375 |
| | 0.7 | 1.1790 | 0.1253 | 0.2177 | 0.2020 | 1.0587 | 0.0411 | 0.0690 | 0.0673 |
| 4. Uniform | 0.1 | 1.0281 | 0.0028 | 5.1000 | 5.1000 | 1.0087 | 8.656×10⁻⁴ | 1.7000 | 1.7000 |
| | 0.4 | 1.0281 | 0.0112 | 0.4126 | 0.4125 | 1.0087 | 0.0035 | 0.1375 | 0.1375 |
| | 0.7 | 1.0281 | 0.0197 | 0.2024 | 0.2020 | 1.0087 | 0.0061 | 0.0674 | 0.0673 |
| 5. Probability Matching | 0.1 | 0.9727 | -0.0027 | 5.1000 | 5.1000 | 0.9914 | -8.582×10⁻⁴ | 1.7000 | 1.7000 |
| | 0.4 | 0.9727 | -0.0109 | 0.4126 | 0.4125 | 0.9914 | -0.0034 | 0.1375 | 0.1375 |
| | 0.7 | 0.9727 | -0.0191 | 0.2024 | 0.2020 | 0.9914 | -0.0060 | 0.0674 | 0.0673 |


6. Conclusion

In this paper we derived five Bayes estimators each of the coefficient of variation (CV) and the inverse coefficient of variation (ICV). The bias and mean square error (MSE) of these estimators are derived to the order of $O(1)$ and $O(1/n)$ respectively. The numerical results indicate that the maximum likelihood estimator (MLE) of CV and of ICV has smaller MSE compared to the Bayes estimators of CV and ICV. Among the class of estimators $\{a\hat{\theta}\}$ of CV, the MLE $\hat{\theta}$ of CV is dominated by the estimator $\frac{\hat{\theta}}{\frac{1}{n}(\hat{\theta}^2+1)+1}$ to the order of $O(1/n)$. In a parallel result, the MLE $\hat{\theta}_1$ of ICV is dominated by the estimator $\frac{\hat{\theta}_1}{\frac{1}{n}\left(\frac{1}{\hat{\theta}_1^2}+\frac{1}{2}\right)+1}$ to the order of $O(1/n)$. Hence, within these classes, the MLEs of CV and ICV are not admissible.

APPENDIX

Proof of Theorem 2: Using the right invariant prior, the bias of the Bayes estimator of CV is given by

$$\mathrm{Bias}(\hat{\theta}_R, \theta) = E(\hat{\theta}_R) - \theta = \iint \frac{\sigma}{\mu}\,\pi_R(\mu, \sigma \mid x_1, \dots, x_n)\, d\mu\, d\sigma - \frac{\sigma}{\mu} = \frac{\Gamma\left(\frac{n+2}{2}\right)\left(\frac{n-1}{2}\right)^{1/2}}{\Gamma\left(\frac{n+3}{2}\right)}\,\theta - \theta,$$

where $\pi_R$ denotes the posterior density (6) and $\theta = \sigma/\mu$. The MSE is $\mathrm{MSE}(\hat{\theta}_R) = E\left((\hat{\theta}_R - \theta)^2\right) = V(\hat{\theta}_R) + \left(\mathrm{Bias}(\hat{\theta}_R, \theta)\right)^2$. Using the right invariant prior,

$$\mathrm{MSE}(\hat{\theta}_R) = \frac{\theta^4}{n} + \frac{\theta^2}{2n} + \left(\iint \frac{\sigma}{\mu}\,\pi_R\, d\mu\, d\sigma - \theta\right)^2 = \frac{\theta^4}{n} + \frac{\theta^2}{2n} + \left(\frac{\Gamma\left(\frac{n+2}{2}\right)\left(\frac{n-1}{2}\right)^{1/2}}{\Gamma\left(\frac{n+3}{2}\right)} - 1\right)^2\theta^2.$$


Proof of Theorem 5: Using the right invariant prior, the bias of the Bayes estimator of ICV is given by

$$\mathrm{Bias}(\hat{\theta}_{1R}, \theta_1) = \iint \frac{\mu}{\sigma}\,\pi_R(\mu, \sigma \mid x_1, \dots, x_n)\, d\mu\, d\sigma - \frac{\mu}{\sigma} = \frac{\Gamma\left(\frac{n+3}{2}\right)}{\Gamma\left(\frac{n+2}{2}\right)\left(\frac{n-1}{2}\right)^{1/2}}\,\theta_1 - \theta_1,$$

where $\theta_1 = \mu/\sigma$. The MSE of $\hat{\theta}_{1R}$ using the right invariant prior is given by

$$\mathrm{MSE}(\hat{\theta}_{1R}) = \frac{1}{n} + \frac{\theta_1^2}{2n} + \left(\iint \frac{\mu}{\sigma}\,\pi_R\, d\mu\, d\sigma - \theta_1\right)^2 = \frac{1}{n} + \frac{\theta_1^2}{2n} + \left(\frac{\Gamma\left(\frac{n+3}{2}\right)}{\Gamma\left(\frac{n+2}{2}\right)\left(\frac{n-1}{2}\right)^{1/2}} - 1\right)^2\theta_1^2.$$

Similarly the bias and MSEs of the other Bayes estimators of CV and ICV can be obtained.

References

[1] D'Cunha, J.G., & Rao, K.A. (2015). Application of Bayesian inference for the analysis of stock prices. Advances and Applications in Statistics, 6(1), 57-78.

[2] Harvey, J., & Van der Merwe, A.J. (2012). Bayesian confidence intervals for means and variances of lognormal and bivariate lognormal distributions. Journal of Statistical Planning and Inference, 142(6), 1294-1309.

[3] McKay, A.T. (1932). Distribution of the coefficient of variation and the extended 't' distribution. Journal of the Royal Statistical Society, 95, 696-698.

[4] Pang, W.K., Leung, P.K., Huang, W.K., & Liu, W. (2003). On Bayesian estimation of the coefficient of variation for three-parameter Weibull, Lognormal and Gamma distributions.

[5] Pereira, C.A.B., & Stern, J.M. (1999). Evidence and credibility: Full Bayesian significance test for precise hypotheses. Entropy, 1, 69-80.

[6] Singh, M. (1993). Behaviour of sample coefficient of variation drawn from several distributions. Sankhya, Series B, 55(1), 65-76.

[7] Sharma, K.K., & Krishna, H. (1994). Asymptotic sampling distribution of inverse coefficient of variation and its applications. IEEE Transactions on Reliability, 43(4), 630-633.