Bayes Estimator for Coefficient of Variation and Inverse Coefficient of Variation for the Normal Distribution


International Journal of Statistics and Systems, ISSN 0973-2675, Volume 12, Number 4 (2017), pp. 721-732
© Research India Publications, http://www.ripublication.com

Aruna Kalkur T., St. Aloysius College (Autonomous), Mangaluru, Karnataka, India
Aruna Rao, Professor (Retd.), Mangalore University, Karnataka, India

Abstract

In this paper, Bayes estimators of the Coefficient of Variation (CV) and the Inverse Coefficient of Variation (ICV) of a normal distribution are derived using five objective priors, namely the Left invariant prior, the Right invariant prior, Jeffreys' Rule prior, the Uniform prior and the Probability matching prior. The Mean Square Error (MSE) of these estimators is derived to the order of $O(1/n)$, and the expressions are new. Numerical results indicate that the Maximum Likelihood Estimators (MLEs) of the CV and ICV have smaller MSE than the Bayes estimators. It is further shown that, within a certain class of estimators, the MLEs of the CV and ICV are not admissible.

Key words: Bayes estimators, Objective priors, Mean Square Error, Admissibility.

1. Introduction

Estimation and confidence intervals for the Coefficient of Variation (CV) of a normal distribution are well addressed in the literature. The original work dates back to 1932, when McKay derived a confidence interval for the CV of a normal distribution using a transformed CV (see also the references cited in that paper). Singh (1983) emphasized that the Inverse Sample Coefficient of Variation (ISCV) can be used to derive a confidence interval for the CV of a distribution, and derived the first four moments of the ISCV. Sharma and Krishnan (1994) developed the asymptotic distribution of the ISCV without making an assumption about the population distribution.
Although many papers have appeared on the estimation of, and confidence intervals for, the CV of a normal distribution, they are derived using the classical theory of inference. In recent years Bayesian inference has been widely used in scientific investigation, and when objective priors are used the Bayes estimator performs well compared to the classical estimator in terms of mean square error. Few papers have appeared in the past on the Bayes estimator of the CV of the normal distribution. In this paper we discuss Bayes estimators of the CV and ICV of the normal distribution under the squared error loss function. We use objective priors so that the bias and Mean Square Error (MSE) of these estimators can be compared with those of the maximum likelihood estimators (MLEs). The objective priors used are the Right invariant prior, the Left invariant prior, Jeffreys' Rule prior, the Uniform prior and the Probability matching prior. Harvey and Merwe (2012) make a distinction between Jeffreys' prior and Jeffreys' Rule prior: both are proportional to the square root of the determinant of the Fisher information matrix, but in Jeffreys' prior $\sigma$ is treated as the parameter, whereas in Jeffreys' Rule prior $\sigma^2$ is treated as the parameter. Comparisons between the Bayes estimator, the maximum likelihood estimator and other classical estimators have been the focal point of investigation in many past research papers; some of the references are the following. Pereira and Stern (2000) derived a Full Bayesian Test for the CV of the normal distribution.
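To make that distinction concrete, the following is a standard calculation (my own addition, not reproduced from the paper). For a sample of size one from $N(\mu, \sigma^2)$, the Fisher information matrix under the two parametrizations gives

```latex
I(\mu,\sigma) = \begin{pmatrix} \dfrac{1}{\sigma^2} & 0 \\[4pt] 0 & \dfrac{2}{\sigma^2} \end{pmatrix}
\;\Longrightarrow\; \sqrt{\det I(\mu,\sigma)} \propto \frac{1}{\sigma^2},
\qquad
I(\mu,\sigma^2) = \begin{pmatrix} \dfrac{1}{\sigma^2} & 0 \\[4pt] 0 & \dfrac{1}{2\sigma^4} \end{pmatrix}
\;\Longrightarrow\; \sqrt{\det I(\mu,\sigma^2)} \propto \frac{1}{\sigma^3}.
```

Treating $\sigma$ as the parameter therefore yields the prior $\pi(\mu,\sigma) \propto 1/\sigma^2$, while treating $\sigma^2$ as the parameter yields Jeffreys' Rule prior $\pi(\mu,\sigma) \propto 1/\sigma^3$, matching priors (2) and (3) below.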
Kang and Kim (2003) proposed a Bayesian test procedure for the equality of the CVs of two normal distributions using the fractional Bayes factor. Pang et al. (2005) derived Bayesian credible intervals for the CVs of the three-parameter Lognormal and Weibull distributions. D'Cunha and Rao (2014) compared the Bayesian credible interval with the confidence interval based on the MLE for the lognormal distribution.

This paper is divided into six sections. Section 2 presents the expressions for the Bayes estimators of the CV using the five objective priors mentioned previously. The bias and MSE of the Bayes estimators of the CV, to the order of $O(1)$ and $O(1/n)$ respectively, are also derived. Numerical values of the bias and MSE of the Bayes estimators of the CV, along with those of the MLE, are presented in section 3. Section 4 presents the expressions for the Bayes estimators of the ICV, along with their bias and MSE to the order of $O(1)$ and $O(1/n)$ respectively. Numerical values of the bias and MSE of the Bayes estimators of the ICV, along with those of the MLE, are presented in section 5. The paper concludes in section 6.

2. Bayes Estimators of Coefficient of Variation

Let $x_1, \ldots, x_n$ be i.i.d. $N(\mu, \sigma^2)$. The maximum likelihood estimator of the CV of the normal distribution is given by $\hat{\theta} = \hat{\sigma}/\hat{\mu}$, where $\hat{\mu} = \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$ and $\hat{\sigma}^2 = s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2$ denote the sample mean and sample variance respectively. Five priors are used for the estimation of the CV and ICV. Let $\pi(\mu, \sigma)$ denote the prior density of $\mu$ and $\sigma$. The expressions for the prior densities are given below.
a) Right invariant Jeffreys prior:
\[ \pi(\mu, \sigma) = \frac{1}{\sigma} \tag{1} \]

b) Left invariant prior:
\[ \pi(\mu, \sigma) = \frac{1}{\sigma^2} \tag{2} \]

c) Jeffreys' Rule prior:
\[ \pi(\mu, \sigma) = \frac{1}{\sigma^3} \tag{3} \]

d) Uniform prior:
\[ \pi(\mu, \sigma) = 1 \tag{4} \]

e) Probability matching prior:
\[ \pi(\mu, \sigma) = \sigma^2 \tag{5} \]

Since the distributions of $\bar{x}$ and $s^2$ are independent, after some simplification the posterior density of $1/\sigma^2$ is Gamma$\left(\frac{n+2}{2}, \frac{1}{2}(n-1)s^2\right)$ under the Right invariant prior, Gamma$\left(\frac{n+3}{2}, \frac{1}{2}(n-1)s^2\right)$ under the Left invariant prior, Gamma$\left(\frac{n+4}{2}, \frac{1}{2}(n-1)s^2\right)$ under Jeffreys' Rule prior, Gamma$\left(\frac{n+1}{2}, \frac{1}{2}(n-1)s^2\right)$ under the Uniform prior, and Gamma$\left(\frac{n}{2}, \frac{1}{2}(n-1)s^2\right)$ under the Probability matching prior. The Bayes estimator of the CV is given by $E(\sigma/\mu)$, where the expectation is taken with respect to the posterior density $\pi(\mu, \sigma \mid \text{data})$. The posterior densities $\pi(\mu, \sigma \mid x_1, \ldots, x_n)$ for the Right invariant, Left invariant, Jeffreys' Rule, Uniform and Probability matching priors, respectively, are given by the following expressions.
Under the Right invariant prior,
\[
\pi(\mu, \sigma \mid x_1, \ldots, x_n)
= \frac{1}{\sqrt{2\pi}\,\sigma/\sqrt{n}}\,
  e^{-\frac{(\bar{x}-\mu)^2}{2\sigma^2/n}}
  \cdot
  \frac{\left(\frac{(n-1)s^2}{2}\right)^{\frac{n+2}{2}}}{\Gamma\!\left(\frac{n+2}{2}\right)}
  \left(\frac{1}{\sigma^2}\right)^{\frac{n+2}{2}-1}
  e^{-\frac{(n-1)s^2}{2\sigma^2}},
\quad -\infty < \mu < \infty,\ \sigma > 0. \tag{6}
\]

The posterior densities under the remaining priors have the same form, with the exponent $\frac{n+2}{2}$ replaced by $\frac{n+3}{2}$ under the Left invariant prior (7), $\frac{n+4}{2}$ under Jeffreys' Rule prior (8), $\frac{n+1}{2}$ under the Uniform prior (9), and $\frac{n}{2}$ under the Probability matching prior (10).

The Bayes estimator of the CV is $E(\theta \mid x_1, \ldots, x_n)$, where the expectation is taken with respect to the posterior density of $\mu$ and $\sigma$. For the Right invariant prior it is given by
\[
\hat{\theta}_R = \iint \frac{\sigma}{\mu}\, \pi(\mu, \sigma \mid x_1, \ldots, x_n)\, d\mu\, d\sigma \tag{11}
\]
\[
= \frac{\Gamma\!\left(\frac{n+2}{2}\right)\left(\frac{n-1}{2}\right)^{1/2}}{\Gamma\!\left(\frac{n+3}{2}\right)}\,\frac{s}{\bar{x}}
= a_n \hat{\theta},
\quad \text{where } a_n = \frac{\Gamma\!\left(\frac{n+2}{2}\right)\left(\frac{n-1}{2}\right)^{1/2}}{\Gamma\!\left(\frac{n+3}{2}\right)}
\text{ and } \hat{\theta} = \frac{s}{\bar{x}}. \tag{12}
\]

The following theorem gives the Bayes estimators.

Theorem 1: The Bayes estimators of the CV corresponding to the Right invariant prior, Left invariant prior, Jeffreys' Rule prior, Uniform prior and Probability matching prior are, respectively,
\[
\hat{\theta}_R = \frac{\Gamma\!\left(\frac{n+2}{2}\right)\left(\frac{n-1}{2}\right)^{1/2}}{\Gamma\!\left(\frac{n+3}{2}\right)}\,\frac{s}{\bar{x}}, \tag{13}
\]
\[
\hat{\theta}_L = \frac{\Gamma\!\left(\frac{n+1}{2}\right)\left(\frac{n-1}{2}\right)^{1/2}}{\Gamma\!\left(\frac{n+2}{2}\right)}\,\frac{s}{\bar{x}}, \tag{14}
\]
\[
\hat{\theta}_{JR} = \frac{\Gamma\!\left(\frac{n+3}{2}\right)\left(\frac{n-1}{2}\right)^{1/2}}{\Gamma\!\left(\frac{n+4}{2}\right)}\,\frac{s}{\bar{x}}, \tag{15}
\]
\[
\hat{\theta}_U = \frac{\Gamma\!\left(\frac{n}{2}\right)\left(\frac{n-1}{2}\right)^{1/2}}{\Gamma\!\left(\frac{n+1}{2}\right)}\,\frac{s}{\bar{x}}, \tag{16}
\]
\[
\hat{\theta}_{PM} = \frac{\Gamma\!\left(\frac{n-1}{2}\right)\left(\frac{n-1}{2}\right)^{1/2}}{\Gamma\!\left(\frac{n}{2}\right)}\,\frac{s}{\bar{x}}. \tag{17}
\]

Proof: Straightforward and omitted.
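Since the paper stops at the closed forms, a small numerical sketch may help. The code below is my own illustration (function names and the example data are mine, not from the paper); it evaluates the MLE $\hat{\theta} = s/\bar{x}$ and the five estimators of eqs. (13)-(17), using log-gamma for numerical stability. All five estimators share the factor $\left(\frac{n-1}{2}\right)^{1/2} s/\bar{x}$ and differ only in the ratio $\Gamma(k/2)/\Gamma((k+1)/2)$.

```python
import math

def mle_cv(x):
    """MLE of the CV: theta_hat = s / x_bar, with the (n-1)-denominator s^2."""
    n = len(x)
    x_bar = sum(x) / n
    s = math.sqrt(sum((xi - x_bar) ** 2 for xi in x) / (n - 1))
    return s / x_bar

# Numerator argument k in Gamma(k/2) for each prior (eqs. (13)-(17));
# the denominator is always Gamma((k+1)/2).
_K = {"right": lambda n: n + 2, "left": lambda n: n + 1,
      "jeffreys_rule": lambda n: n + 3, "uniform": lambda n: n,
      "matching": lambda n: n - 1}

def bayes_cv(x, prior="right"):
    """Bayes estimator of the CV under squared error loss (Theorem 1)."""
    n = len(x)
    k = _K[prior](n)
    # Gamma(k/2)/Gamma((k+1)/2) computed on the log scale avoids overflow
    gamma_ratio = math.exp(math.lgamma(k / 2) - math.lgamma((k + 1) / 2))
    return gamma_ratio * math.sqrt((n - 1) / 2) * mle_cv(x)

x = [9.8, 10.4, 10.1, 9.6, 10.3, 10.0]          # illustrative sample
estimates = {p: bayes_cv(x, p) for p in _K}
```

Because $\Gamma(k/2)/\Gamma((k+1)/2)$ decreases in $k$, the Probability matching prior gives the largest estimate and Jeffreys' Rule prior the smallest; the multiplier $a_n$ tends to 1, so all five estimators approach the MLE $s/\bar{x}$ as $n$ grows.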