Statistical Analysis


Table of contents

Statistical analysis
Measures of statistical central tendencies
Measures of variability
Aleatory uncertainties
Epistemic uncertainties
Measures of statistical dispersion or deviation
The range
Mean difference
The variance
The standard deviation
Coefficient of variation
Measures of uncertainty
Systems of events
Entropy
Random (stochastic) variables
Discontinuous (discrete) random variables
Moments of discrete random variables
Probability distributions of discrete random variables
Binomial distribution
Poisson distribution
Continuous random variables
Probability Density Function
Cumulative Distribution Function
Probability distributions of continuous random variables
Uniform distribution
Simpson's (triangular) distribution
Normal distribution
Lognormal distribution
Shifted exponential distribution
Gamma distribution
Shifted Rayleigh distribution
Type I Largest value (Gumbel) distribution
Type III Smallest values distribution (for ε = 0 known as the Weibull distribution)
Beta distribution
Type I Smallest values distribution
Combinations of random variables

Measures of statistical central tendencies

Measures of central tendency of a set of data x1, x2, ..., xN locate only the centre of a distribution of measures; other measures are often needed to describe the data. The mean is the measure most often used to describe the central tendency. "Mean" has two related meanings in statistics:
• the arithmetic mean
• the expected value of a random variable.
In mathematics and statistics, the arithmetic mean is often referred to as simply the mean or the average; the term "arithmetic mean" is preferred in mathematics and statistics. The arithmetic mean of a data set x1, x2, ..., xN is defined as follows:

\mu(x) = \bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i

Measures of variability

Statistics uses summary measures to describe the amount of variability, or spread, in a set of data x1, x2, ..., xN. Variability is the extent to which the data points in a statistical distribution or data set diverge from the average (mean) value, and also the extent to which the data points differ from each other. There are several commonly used measures of variability: the range, the mean difference, the variance and the standard deviation, as well as the combined measure of variability defined as the coefficient of variation with respect to the mean value.

Uncertainty represents a state of limited knowledge in which it is impossible to describe exactly the existing state, a future outcome, or more than one possible outcome. In statistics and probability theory, the uncertainty (doubt) represents the estimated amount or percentage by which an observed or calculated value may differ from the true value. Uncertainties can be distinguished as being either aleatory or epistemic.

Aleatory uncertainties

Aleatory (objective, external, irreducible) uncertainty arises from the natural, unpredictable variability of the wave and wind climate or of ship operations. This inherent randomness normally cannot be reduced, although knowledge of the phenomena may help in quantifying the uncertainty.

Epistemic uncertainties

Epistemic uncertainty is due to a lack of knowledge about the climate properties. The epistemic (subjective, internal, or modelling) uncertainty can be reduced with sufficient study, better measurement facilities, more observations or improved modelling; expert judgment may therefore be useful in its reduction.
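As a numerical illustration (not part of the original notes), a minimal Python sketch of the arithmetic mean; the sample values are hypothetical:

```python
# Minimal sketch: arithmetic mean of a data set x1, x2, ..., xN.
def arithmetic_mean(data):
    """Return mu(x) = (1/N) * sum of x_i."""
    return sum(data) / len(data)

x = [2.0, 3.5, 4.0, 5.5]   # hypothetical sample
print(arithmetic_mean(x))  # 3.75
```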
Measures of statistical dispersion or deviation

A measure of statistical dispersion or deviation is a real number that is zero if all the data are identical and increases as the data become more diverse; it cannot be less than zero. Most measures of dispersion have the same scale as the quantity being measured: if the measurements have units, such as metres or seconds, the measure of dispersion has the same units. Basic measures of dispersion include:
• Range
• Mean difference
• Variance
Additional measures are:
• Standard deviation – the square root of the variance
• Coefficient of variation – the standard deviation divided by the mean value
(See Excel example: GraduationRateofNavalArchitectureinZagreb.) The example presents the statistical properties of the enrolment and graduation rates of naval architecture students at the Faculty of Mechanical Engineering and Naval Architecture, University of Zagreb.

[Figure: "Studij brodogradnje" (naval architecture study) – number of students enrolled (Upisano) and graduated (Diplomiralo) per year (Godina).]

The range

In descriptive statistics, the range is the length of the smallest interval which contains all the data of a dataset x1, x2, ..., xN. The range is calculated by subtracting the smallest observation (sample minimum Smin) from the greatest (sample maximum Smax) and is an indicator of statistical dispersion:

R = S_{max} - S_{min}

The range, in the sense of the difference between the highest and lowest scores, is also called the crude range. The midrange point, i.e. the point halfway between the two extremes, is an indicator of the central tendency of the data. The range is not an appropriate measure for small samples.

The mean difference

In probability theory and statistics, the mean difference is used as a measure of how far the numbers of a dataset x1, x2, ..., xN are spread out from each other. It is one of several descriptors of a probability distribution, describing how far the numbers lie from the mean (expected value). For a random variable X = x1, x2, ..., xN with mean value μ, the mean difference of X is

MD(x) = \frac{1}{N}\sum_{i=1}^{N} |x_i - \mu|

and the relative mean difference is

RMD(x) = \frac{MD(x)}{\mu}

The variance

In probability theory and statistics, the variance is another indicator used as a measure of how far a set of numbers are spread out from each other. For a random variable X with expected value (mean) μ = E[X], the variance of X is

Var(x) = \sigma^2 = \frac{1}{N}\sum_{i=1}^{N} (x_i - \mu)^2 = \frac{1}{N}\sum_{i=1}^{N} x_i^2 - \mu^2

Proof:

Var(x) = \sigma^2 = \frac{1}{N}\sum_{i=1}^{N} (x_i - \mu)^2 = \frac{1}{N}\sum_{i=1}^{N} x_i^2 - \frac{2\mu}{N}\sum_{i=1}^{N} x_i + \mu^2 = \frac{1}{N}\sum_{i=1}^{N} x_i^2 - \mu^2

The standard deviation

The most widely used measure of variability or diversity in statistics and probability theory is the standard deviation. It shows how much variation or "dispersion" there is from the "average". The standard deviation is the square root of the variance:

\sigma(X) = \sqrt{Var(X)}

The standard deviation, unlike the variance, is expressed in the same units as the data.

The coefficient of variation

Other measures of dispersion are dimensionless (scale-free): they have no units even if the variable itself has units. In widest use is the coefficient of variation, defined as follows:

COV(X) = \frac{\sigma(X)}{\mu(X)}

For measurements with percentage as the unit, the coefficient of variation and the standard deviation will have percentage points as the unit.
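The dispersion measures above are easy to verify numerically. A minimal Python sketch follows (not from the original notes; the sample is hypothetical, and the population forms with divisor N are used, matching the formulas above):

```python
# Minimal sketch of the dispersion measures defined above (divisor N, as in the notes).
def dispersion_summary(data):
    n = len(data)
    mu = sum(data) / n                          # arithmetic mean
    rng = max(data) - min(data)                 # range R = Smax - Smin
    md = sum(abs(x - mu) for x in data) / n     # mean difference MD(x)
    var = sum((x - mu) ** 2 for x in data) / n  # variance sigma^2
    sd = var ** 0.5                             # standard deviation sigma
    cov = sd / mu                               # coefficient of variation
    return {"mean": mu, "range": rng, "mean difference": md,
            "variance": var, "standard deviation": sd, "COV": cov}

print(dispersion_summary([2.0, 3.5, 4.0, 5.5]))  # hypothetical sample
```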
Measures of uncertainty

Systems of events

Random events are in general considered as abstract concepts, and the relations among events are characterized axiomatically. The algebraic structure of the set of events turns out to be a Boolean algebra. The disjoint random events E_i with probabilities p_i = p(E_i), i = 1, 2, ..., N, configure a system S_N in the form of an N-element finite scheme:

S_N = \begin{pmatrix} E_1 & E_2 & \cdots & E_j & \cdots & E_N \\ p_1 = p(E_1) & p_2 = p(E_2) & \cdots & p_j = p(E_j) & \cdots & p_N = p(E_N) \end{pmatrix}

The probability of a system of events S_N is then in general p(S_N) = \sum_{i=1}^{N} p_i \le 1. For a complete distribution, p(S_N) = 1.

A system of N events E_1, E_2, ..., E_N is called a complete system of events if the following axioms hold:
(a) E_k \ne \emptyset, k = 1, 2, ..., N
(b) E_j \cap E_k = \emptyset for j \ne k
(c) E_1 + E_2 + \cdots + E_N = I
The "∅" in (a) and (b) denotes an impossible event and the "I" in (c) denotes the sure event. The fact that E_j and E_k are mutually exclusive is expressed in (b), and (c) states that at least one of the events E_k, k = 1, 2, ..., N, occurs.

Entropy

The uncertainty of a single stochastic event E with known probability p = p(E) ≠ 0 plays a fundamental role in information theory. To each probability can be assigned the equivalent number of events ν(E) = 1/p(E). The entropy of a single stochastic event E can be interpreted, according to Wiener (1948), either as a measure of the information yielded by the event or of how unexpected the event was, and can be defined as the logarithm of the equivalent number of events ν(E):

H(E) = \log_2 \nu(E) = \log_2 [1/p(E)] = -\log_2 p(E)

The unit of unexpectedness H(1/2) = 1 expresses, for example, how unexpected it is to get a tail when flipping a coin. More important than the unexpectedness of a single stochastic event are the uncertainties of systems of N events. The uncertainty of a complete system S of N events can be expressed as the weighted sum of the unexpectedness of all events by Shannon's entropy (Shannon and Weaver, 1949):

H_N(S) = \sum_{j=1}^{N} p_j \log \nu_j = \sum_{j=1}^{N} p_j \log (1/p_j) = -\sum_{j=1}^{N} p_j \log p_j

The uncertainty of an incomplete system S of N events can be defined as the limiting case of the Rényi entropy (1970) of order 1:

H_N^{R1}(S) = -\frac{1}{p(S)} \sum_{j=1}^{N} p_j \log p_j

The definition of the unit of uncertainty, according to Rényi (1970), is no more and no less arbitrary than the choice of the unit of some physical quantity. For example, if the logarithm applied is of base two, the unit of entropy is denoted as one "bit"; one bit is the uncertainty of a system of two equally probable events. If the natural logarithm is applied, the unit is denoted as one "nit". Outcomes with zero probability do not change the uncertainty; by convention, 0 log 0 = 0. Some characteristics of the probabilistic uncertainty measures and properties of the entropy are summarized next. The entropy H_N(S) is equal to zero when the state of the system S can be predicted with certainty, i.e. when no uncertainty exists at all. This occurs when one of the probabilities of the events p_i, i = 1, 2, ..., N, is equal to one, say p_k = 1, and all the other probabilities are equal to zero, p_j = 0 for j ≠ k.
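To make the entropy formulas concrete, a minimal Python sketch (not from the original notes) of Shannon's entropy for a complete system and the order-1 Rényi form for an incomplete system, using base-2 logarithms so the unit is the bit; the probability values are hypothetical:

```python
import math

def shannon_entropy(probs, base=2.0):
    """H_N(S) = -sum p_j log p_j; terms with p_j = 0 are skipped (0 log 0 = 0)."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

def renyi_order1_entropy(probs, base=2.0):
    """H_N^R1(S) = -(1/p(S)) sum p_j log p_j, for an incomplete system with p(S) < 1."""
    p_s = sum(probs)
    return -sum(p * math.log(p, base) for p in probs if p > 0) / p_s

print(shannon_entropy([0.5, 0.5]))         # 1.0 bit: two equally probable events
print(shannon_entropy([1.0, 0.0, 0.0]))    # 0.0: the outcome is certain, no uncertainty
print(renyi_order1_entropy([0.25, 0.25]))  # incomplete system with p(S) = 0.5
```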