1 Introduction: Statistical Intervals


Division of Mathematical Statistics
SF2955 Computer Intensive Methods
TOLERANCE INTERVALS WITH A COMPUTER ASSIGNMENT, 2011
Timo Koski

1 Introduction: Statistical Intervals

Many practical problems are phrased in terms of individual measurements rather than parameters of distributions. We take two such examples. The first one will require what is to be called a prediction interval, instead of a confidence interval.

A consumer who is considering buying a car should be far more interested in knowing whether a full tank on a particular automobile will suffice to carry her/him the 500 km to her/his destination than in learning that there is a 95% confidence interval for the mean mileage of the model, which can be used to project the average or total gasoline consumption for the manufactured fleet of such cars over their first 5000 kilometers of use.

A different situation appears in the following, which will require what is to be called a tolerance interval, instead of a confidence interval.

A design engineer is charged with the problem of determining how large a tank the car model really needs to guarantee that 99% of the cars produced will have a cruising range of 500 kilometers. What the engineer really needs is a tolerance interval for a fraction of 100 × β = 99% of the mileages of such automobiles.

Prediction and tolerance intervals address problems of inference for (future) measurements.

2 Definitions of Other Intervals than Confidence Intervals

We must distinguish between two different questions (I - II) concerning inference for future values. Let X_1, ..., X_n be i.i.d. with the (cumulative) distribution (function) F, and determine an interval [L, U], L < U, such that either (I) or (II) holds:

I. For at least a γ × 100% proportion of the time, the proportion β × 100% of future observations X_{n+1}, ..., X_{n+m} will fall in [L, U].

II. The probability that X_{n+1} falls in [L, U] is at least γ.

The question posed under I is that of tolerance intervals. Tolerance intervals are meant to locate the bulk of an underlying distribution. Question II is that of designing (average) prediction intervals for an individual observation.

2.1 I: Definition of Tolerance Interval and an Example

Let X_1, ..., X_n be i.i.d. with the (cumulative) distribution (function) F. We have two statistics L = L(X_1, ..., X_n) and U = U(X_1, ..., X_n) such that

\[
L(X_1, \dots, X_n) \le U(X_1, \dots, X_n). \tag{2.1}
\]

Then we have the concept due to Walter Shewhart [1].

Definition 2.1. Let β, γ ∈ (0, 1). Assume that L and U are such that

\[
P\bigl(F(U) - F(L) \ge \beta\bigr) \ge \gamma. \tag{2.2}
\]

Then the interval [L, U] is a β-content and γ-confidence tolerance interval.

[1] W. Shewhart: Economic Control of Quality of Manufactured Product. Van Nostrand Company, Inc., New York, 1931; republished in 1980 as the 50th Anniversary Commemorative Reissue (ASQC Quality Press, Milwaukee, 1981).

If possible, we should replace the last inequality in (2.2) with an equality. In words, we are proposing an interval within which, for at least γ × 100% of the time, the proportion β × 100% of future observations X_{n+1}, ..., X_{n+m} falls.
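To make the β-content and γ-confidence statement (2.2) concrete, here is a small simulation sketch, added to this excerpt as an illustration rather than taken from it. It takes F to be a known normal distribution, uses a candidate interval of the form X̄ ± kS with an arbitrary trial value of k (a form that anticipates Section 3.1), and estimates the left-hand side of (2.2) by Monte Carlo; the parameter values and the use of numpy/scipy are illustrative choices.

```python
import numpy as np
from scipy import stats

# Monte Carlo illustration of Definition 2.1: estimate P(F(U) - F(L) >= beta)
# for the candidate interval [L, U] = [Xbar - k*S, Xbar + k*S].
# m, sigma, n, k and beta are illustrative choices, not values from the notes.
rng = np.random.default_rng(1)
m, sigma = 0.0, 1.0           # F = N(m, sigma^2), known here so F(U) - F(L) can be evaluated
n, k, beta = 20, 2.5, 0.90    # sample size, trial k, required content
n_rep = 20_000

hits = 0
for _ in range(n_rep):
    x = rng.normal(m, sigma, size=n)
    L = x.mean() - k * x.std(ddof=1)
    U = x.mean() + k * x.std(ddof=1)
    content = stats.norm.cdf(U, m, sigma) - stats.norm.cdf(L, m, sigma)  # F(U) - F(L)
    hits += content >= beta

print(f"estimated P(F(U) - F(L) >= beta) = {hits / n_rep:.3f}")
```

If the printed probability is at least γ, the chosen k gives a β-content and γ-confidence tolerance interval for this n; Section 3.1 is about choosing k so that (2.2) holds with (approximate) equality.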
Let us contrast this with the notion of a confidence interval. The method of confidence intervals creates intervals that cover a real-valued population parameter of a distribution (e.g., the mean or the variance) with some probability (giving a degree of confidence for a given interval). The bounds of a tolerance interval, by contrast, depict a range of possible data values that represents a specified percentage of the population. In very simplified terms, a confidence interval characterizes what is known about a single quantity given a set of data, whereas a tolerance interval characterizes what is known about values across a collection of items. We shall try to clarify this.

In general, the distribution function F is not known in the contexts where a tolerance interval is desired. In such a case we should be familiar with the result in the following example.

Example 2.1 (The non-parametric tolerance interval) [2]
Let F have a probability density, and let X_(1), X_(2), ..., X_(n) be the order statistics. Set

\[
W = F\bigl(X_{(k_1+k_2)}\bigr) - F\bigl(X_{(k_1)}\bigr).
\]

Then W ∼ Beta(k_2, n − k_2 + 1). For given β > 0 and γ > 0 we find n, k_1 and k_2 such that

\[
P(W \ge \beta) \ge \gamma;
\]

then [X_{(k_1)}, X_{(k_1+k_2)}] is a β-content and γ-confidence tolerance interval. Or, the problem is solved! However, it has been found that tolerance intervals obtained by the non-parametric method tend to be wider than intervals designed specifically for, e.g., scale or location parameter families.

[2] See S.S. Wilks: Mathematical Statistics, John Wiley & Sons, New York, 1962, pp. 334-335.

2.2 II: Definition of Prediction or Average Tolerance Intervals

The prediction interval can be generically written as

\[
P\bigl(X_{n+1} \in [L, U]\bigr) \ge \gamma, \tag{2.3}
\]

or

\[
P\bigl(L(X_1, \dots, X_n) \le X_{n+1} \le U(X_1, \dots, X_n)\bigr) \ge \gamma,
\]

or

\[
E_F\bigl(F(U(X_1, \dots, X_n)) - F(L(X_1, \dots, X_n))\bigr) \ge \gamma.
\]

Somebody has called prediction and tolerance intervals the 'most slippery of all concepts'. A prediction interval does not say that a fraction γ of future observations will fall within [L, U]. It is the expected probability content of [L, U] that is at least γ, and many samples will lead to intervals covering less than 100 × γ% of the underlying distribution. Let us note that the 100 × γ% prediction confidence relates to the whole process of generating X_1, ..., X_n and X_{n+1}.

3 The Tolerance Interval for a Normal Distribution

3.1 k-Factor Tolerance Interval

Let F ↔ N(m, σ²), and let us assume that m and σ are unknown and that X_1, ..., X_n are i.i.d. ∼ N(m, σ²). We set

\[
\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i, \qquad
S^2 = \frac{1}{n-1}\sum_{i=1}^{n} \bigl(X_i - \bar{X}\bigr)^2 .
\]

Then, as can be expected, we take the two statistics L and U to be

\[
L = L(X_1, \dots, X_n) = \bar{X} - kS, \qquad
U = U(X_1, \dots, X_n) = \bar{X} + kS.
\]

Here the values of k are to be chosen such that the tolerance limits L and U satisfy (2.2) for given β and γ. Such k's are known as k-factors, and [X̄ − kS, X̄ + kS] is known as the k-factor tolerance interval. Let us next check that such a k can be found, and how this might be done.

We first write down F(U) − F(L) in (2.2), conditioned on X̄ = x̄ (and S = s), whereby it is convenient to introduce the auxiliary notation

\[
A(k, \bar{x}) = \frac{1}{\sigma\sqrt{2\pi}} \int_{\bar{x}-ks}^{\bar{x}+ks} e^{-\frac{(t-m)^2}{2\sigma^2}}\, dt
= F(U) - F(L) \;\Big|\; \bar{X} = \bar{x},\; S = s.
\]

Then we set

\[
P_n(k, \beta \mid \bar{X}) = P\bigl(A(k, \bar{X}) \ge \beta \mid \bar{X}\bigr),
\]

the conditional probability that A(k, X̄) ≥ β given X̄. Iterated expectation then gives

\[
P_n(k, \beta) = E\bigl[P_n(k, \beta \mid \bar{X})\bigr].
\]

Thus P_n(k, β) is the probability that the interval [X̄ − kS, X̄ + kS] includes at least 100 × β% of the outcomes of a random variable with the distribution N(m, σ²). For each n, β ∈ [0, 1] and γ ∈ [0, 1] there exists a k such that

\[
P_n(k, \beta) = \gamma. \tag{3.1}
\]

Clearly eqn. (3.1) is the current instance of (2.2), and it must be solved w.r.t. k by computational means to get the β-content and γ-confidence k-factor tolerance interval. There are known algorithms, such as the Wald-Wolfowitz method, for the approximate solution of (3.1). A. Wald and J. Wolfowitz showed (in 1946) that

\[
k \approx r \times u, \tag{3.2}
\]

where r is a function of n and β determined from

\[
\frac{1}{\sqrt{2\pi}} \int_{\frac{1}{\sqrt{n}} - r}^{\frac{1}{\sqrt{n}} + r} e^{-t^2/2}\, dt = \beta,
\]

e.g., by the Newton-Raphson method, and u is defined by

\[
u = \sqrt{\frac{f}{\chi^2_{1-\gamma}(f)}},
\]

where f = n − 1 and χ²_{1−γ}(f) is the 1 − γ percentile of the χ² distribution with f degrees of freedom. k-values computed according to (3.2) have been tabulated [3].

[3] E.g., in D.B. Owen: Handbook of Statistical Tables. Addison-Wesley, Palo Alto, 1962. There one finds a table for r and a table for u for some choices of β (= P in the table) and γ. Disturbingly, these values are said to be unreliable.
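The following sketch, added here as an illustration, implements the Wald-Wolfowitz approximation (3.2) with numpy/scipy: r is obtained by solving the integral equation above with a bracketing root-finder (instead of Newton-Raphson), u from the χ² percentile, and the resulting k is then checked against a brute-force Monte Carlo estimate of P_n(k, β) in (3.1). The sample size and the values of β and γ are illustrative.

```python
import numpy as np
from scipy import stats, optimize

def ww_k_factor(n, beta, gamma):
    """Approximate k-factor k ~ r * u from (3.2) (Wald-Wolfowitz, 1946)."""
    c = 1.0 / np.sqrt(n)
    # r solves Phi(1/sqrt(n) + r) - Phi(1/sqrt(n) - r) = beta
    r = optimize.brentq(
        lambda r: stats.norm.cdf(c + r) - stats.norm.cdf(c - r) - beta,
        1e-9, 20.0,
    )
    f = n - 1
    # u = sqrt(f / chi2_{1-gamma}(f)), with chi2_{1-gamma}(f) the (1 - gamma) percentile
    u = np.sqrt(f / stats.chi2.ppf(1.0 - gamma, f))
    return r * u

def coverage_prob(k, n, beta, n_rep=50_000, seed=0):
    """Monte Carlo estimate of P_n(k, beta) in (3.1).

    F(Xbar + kS) - F(Xbar - kS) has the same distribution for every N(m, sigma^2),
    so m = 0, sigma = 1 is used without loss of generality.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_rep, n))
    xbar, s = x.mean(axis=1), x.std(axis=1, ddof=1)
    content = stats.norm.cdf(xbar + k * s) - stats.norm.cdf(xbar - k * s)
    return np.mean(content >= beta)

n, beta, gamma = 20, 0.90, 0.95                      # illustrative values
k = ww_k_factor(n, beta, gamma)
print(f"approximate k-factor: {k:.3f}")              # roughly 2.31 for these values
print(f"estimated P_n(k, beta): {coverage_prob(k, n, beta):.3f}")  # should be near gamma
```

The same Monte Carlo estimate, combined with a root search in k, also solves (3.1) directly without the approximation, which is very much in the spirit of a computer-intensive approach.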
3.2 A Prediction Interval

We invoke a piece of standard elementary theory of confidence intervals to create a prediction interval by means of a trick. Let

\[
X_1, X_2, \dots, X_{n_1} \sim N(\mu_1, \sigma^2), \qquad
Y_1, Y_2, \dots, Y_{n_2} \sim N(\mu_2, \sigma^2),
\]

where all X's and Y's are independent. We want to find a confidence interval for μ_1 − μ_2, which we estimate by X̄ − Ȳ. It follows that

\[
\frac{(\bar{X} - \bar{Y}) - (\mu_1 - \mu_2)}{\sigma\sqrt{\frac{1}{n_1} + \frac{1}{n_2}}}
\]

has a N(0, 1) distribution. Let us assume that σ is unknown. We estimate σ² by

\[
s^2 = \frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2},
\]

where s_1² and s_2² are the familiar unbiased estimates of σ² based on the X's and the Y's, respectively. Then it follows that

\[
\frac{(\bar{X} - \bar{Y}) - (\mu_1 - \mu_2)}{S\sqrt{\frac{1}{n_1} + \frac{1}{n_2}}} \tag{3.3}
\]

has a t-distribution with n_1 + n_2 − 2 degrees of freedom.
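The excerpt ends here, but the pivot (3.3) can already be put to work. The sketch below, an added illustration with arbitrary parameter values, constructs the γ-level confidence interval for μ_1 − μ_2 from (3.3) and checks its coverage by simulation.

```python
import numpy as np
from scipy import stats

def ci_mu_diff(x, y, gamma):
    """gamma-level confidence interval for mu1 - mu2 based on the pooled-t pivot (3.3)."""
    n1, n2 = len(x), len(y)
    s2 = ((n1 - 1) * x.var(ddof=1) + (n2 - 1) * y.var(ddof=1)) / (n1 + n2 - 2)
    half = stats.t.ppf((1 + gamma) / 2, n1 + n2 - 2) * np.sqrt(s2 * (1 / n1 + 1 / n2))
    diff = x.mean() - y.mean()
    return diff - half, diff + half

# Coverage check by simulation; mu1, mu2, sigma, n1, n2 are illustrative values.
rng = np.random.default_rng(2)
mu1, mu2, sigma = 5.0, 3.0, 2.0
n1, n2, gamma = 12, 8, 0.95
n_rep = 20_000

covered = 0
for _ in range(n_rep):
    x = rng.normal(mu1, sigma, n1)
    y = rng.normal(mu2, sigma, n2)
    lo, hi = ci_mu_diff(x, y, gamma)
    covered += lo <= mu1 - mu2 <= hi

print(f"empirical coverage: {covered / n_rep:.3f}")   # should be close to gamma
```

Presumably the 'trick' the notes then use is to treat a single future observation X_{n+1} as a second sample of size n_2 = 1 from the same N(m, σ²) as X_1, ..., X_n: the pooled variance reduces to S², the pivot has a t-distribution with n − 1 degrees of freedom, and X̄ ± t_{n−1,(1+γ)/2} S √(1 + 1/n) becomes a prediction interval in the sense of (2.3).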