Basic Statistical Concepts: Statistical Population


Statistical Population
• The entire underlying set of observations from which samples are drawn.
  – Philosophical meaning: all observations that could ever be taken within the range of inference
    • e.g. all barnacle populations that have ever existed, exist now or will ever exist
  – Practical meaning: all observations within a reasonable range of inference
    • e.g. barnacle populations on that stretch of coast

Statistical Sample
• A representative subset of a population.
  – What counts as being representative?
    • Unbiased and, hopefully, precise

Strategies
• Define the survey objectives: what is the goal of the survey or experiment? What are your hypotheses?
• Define the population parameters to estimate (e.g. number of individuals, growth, color, etc.).
• Implement a sampling strategy:
  – measure every individual (think of the implications in terms of cost, time and practicality, especially if the measurement is destructive), or
  – measure a representative portion of the population (a sample).

Sampling
• Goal:
  – Every unit, and every combination of units, in the population of interest has an equal chance of selection.
    • This is a fundamental assumption in all estimation procedures.
• How:
  – There are many ways if the underlying distribution is not uniform.
    » In the absence of information about the underlying distribution, the only safe strategy is random sampling.
    » Costs: random sampling is sometimes difficult and may introduce its own source of bias (if sample size is low). Much more about this later.

Sampling Objectives
• To obtain an unbiased estimate of a population mean.
• To assess the precision of that estimate (i.e. calculate the standard error of the mean).
• To obtain as precise an estimate of the parameters as possible for the time, effort and money spent.

Measures of location
• Population mean (μ): the average value.
• Sample mean (ȳ) estimates μ.
• Population median: the middle value.
• Sample median estimates the population median.
• In a normal distribution the mean = median (= mode); this is not guaranteed in other distributions.
[Figure: distributions of Y with the mean and median marked; they coincide in a symmetric distribution and separate in a skewed one.]

Measures of dispersion
• Population variance (σ²): the average squared deviation from the mean.
• Sample variance (s²) estimates the population variance:
  $s^2 = \sum_i (x_i - \bar{x})^2 / (n - 1)$
• Standard deviation (s): the square root of the variance; same units as the original variable.

Measures (statistics) of dispersion
• Population sum of squares: $\sum_i (x_i - \mu)^2$
• Sample sum of squares: $SS = \sum_i (x_i - \bar{x})^2$
• Population variance: $\sigma^2 = \sum_i (x_i - \mu)^2 / n$ (units are squared; denominator is n)
• Sample variance: $s^2 = \sum_i (x_i - \bar{x})^2 / (n - 1)$ (units are squared; denominator is n − 1)
• Sample standard deviation: $s = \sqrt{\sum_i (x_i - \bar{x})^2 / (n - 1)}$ (units are not squared)

More statistics of dispersion
• Standard error of the mean: $s_{\bar{x}} = \sqrt{s^2/n} = s/\sqrt{n}$
  – This is also the standard deviation of the sample means.
• Coefficient of variation: $CV = s / \bar{x}$
  – A measure of variation independent of units, expressed as a percentage of the mean.
• Covariance: $s_{xy} = \sum_i (x_i - \bar{x})(y_i - \bar{y}) / (n - 1)$
  – A measure of how two variables covary; its range is −∞ to +∞.
  – Its value depends in part on the range of the data: bigger numbers yield bigger values of covariance.

Types of estimates
• Point estimate
  – A single-value estimate of the parameter, e.g. ȳ is a point estimate of μ and s is a point estimate of σ.
• Interval estimate
  – A range within which the parameter lies, known with some degree of confidence, e.g. a 95% confidence interval is an interval estimate of μ.
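The dispersion formulas above are simple enough to compute directly; the short sketch below spells them out in plain Python so the n − 1 denominators and the covariance sum are explicit. The two data vectors are made up purely for illustration and are not from the lecture.

```python
# Minimal sketch of the dispersion statistics defined above (made-up data).
import math

x = [12.1, 9.8, 11.4, 10.7, 13.0]   # hypothetical measurements
y = [2.3, 1.9, 2.1, 2.0, 2.6]       # second hypothetical variable, for the covariance

n = len(x)
x_bar = sum(x) / n                                   # sample mean
ss = sum((xi - x_bar) ** 2 for xi in x)              # sample sum of squares, SS
s2 = ss / (n - 1)                                    # sample variance (denominator n - 1)
s = math.sqrt(s2)                                    # sample standard deviation
sem = s / math.sqrt(n)                               # standard error of the mean
cv = 100 * s / x_bar                                 # coefficient of variation, % of mean

y_bar = sum(y) / n
s_xy = sum((xi - x_bar) * (yi - y_bar)
           for xi, yi in zip(x, y)) / (n - 1)        # sample covariance

print(f"mean={x_bar:.2f}  s2={s2:.3f}  s={s:.3f}  SEM={sem:.3f}  CV={cv:.1f}%  cov={s_xy:.3f}")
```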
Sampling distribution
• The frequency (or probability) distribution of a statistic (e.g. the sample mean):
  – take many samples (each of size n) from the population
  – calculate all the sample means
  – plot the frequency distribution of the sample means: the sampling distribution.
[Figure: several samples drawn from a population with true mean 25; each sample gives a different sample mean (e.g. 21.5, 25.8), and the collection of sample means (21.5, 22.3, 23.0, 23.9, 24.9, 25.1, 25.8, 26.5, 27.8, 29.9, ...) forms the sampling distribution of the mean.]

Sampling distribution of the mean
• The sampling distribution of the sample mean approaches a normal distribution as n gets larger (the Central Limit Theorem).
• The mean of this sampling distribution is μ, the mean of the original population.
[Figure: histogram of the estimates of the mean from a large number of samples, roughly normal and centred on the population mean.]
• The standard deviation of this sampling distribution is approximated by $s/\sqrt{n}$, the standard deviation of any given sample divided by the square root of the sample size: the standard error of the mean.

Standard deviation can be calculated for any distribution
• The standard deviation of the distribution of sample means can be calculated in the same way as for a single sample:
  $s_{\bar{x}} = \sqrt{\sum_i (\bar{x}_i - \bar{\bar{x}})^2 / (N - 1)}$
  where $\bar{\bar{x}}$ is the mean of the sample means and N is the number of means in the distribution.
• However, to do so would require an immense sampling effort, so an approximation from a single sample is used instead:
  $s_{\bar{x}} \approx \mathrm{SEM} = s/\sqrt{n}$
  where s is the sample standard deviation and n is the number of replicates in the sample.
[Figure: sampling distribution of the mean with the central 95% spanning roughly ±2 SEM around the mean and 2.5% in each tail.]

Standard error of the mean
• The standard deviation of the sampling distribution, estimated from a single sample as $s/\sqrt{n}$.
• Measures the precision of the sample mean: how close the sample mean is likely to be to the true population mean.
• If the SE is low:
  – repeated samples would produce similar sample means
  – therefore, any single sample mean is likely to be close to the population mean.
• If the SE is high:
  – repeated samples would produce very different sample means
  – therefore, any single sample mean may not be close to the population mean.
[Figure: effect of the standard error on the estimate of μ (df assumed large): with SEM = 2 the ±2 SEM interval around the mean is narrow; with SEM = 5 it is much wider.]

Worked example
Lovett et al. (2000) measured the concentration of SO₄²⁻ (sulfate) in 39 North American forested streams (Quinn & Keough 2002, Box 2.2). The first few streams and the summary statistics:

  Stream         SO₄²⁻ (µmol L⁻¹)
  Santa Cruz     50.6
  Colgate        55.4
  Halsey         56.5
  Batavia Hill   57.5

  Statistic        Value
  Sample mean      61.92
  Sample median    62.10
  Sample variance  27.46
  Sample SD         5.24
  SE of mean        0.84
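The Central Limit Theorem claims above (the sampling distribution is roughly normal, centred on μ, with standard deviation approximated by $s/\sqrt{n}$) are easy to check by simulation. The sketch below assumes NumPy is available and uses an arbitrary skewed (exponential) parent population with mean 25 and a sample size of 39 to echo the worked example; none of these choices come from the lecture itself.

```python
# Minimal simulation of the sampling distribution of the mean
# (assumed parent population and sample counts, chosen only for illustration).
import numpy as np

rng = np.random.default_rng(42)
n = 39                                                  # sample size, as in the worked example
samples = rng.exponential(scale=25.0, size=(5000, n))   # 5000 samples of size n from a skewed population
sample_means = samples.mean(axis=1)                     # one mean per sample: the sampling distribution

print("mean of the sample means :", round(sample_means.mean(), 2))       # close to the population mean, 25
print("SD of the sample means   :", round(sample_means.std(ddof=1), 2))  # the 'true' standard error
print("s / sqrt(n) (one sample) :", round(samples[0].std(ddof=1) / np.sqrt(n), 2))  # single-sample SEM approximation
```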
Interval estimate
• How confident are we in a single sample estimate of μ, i.e. how close do we think our sample mean is to the unknown population mean?
• Remember that μ is a fixed, but unknown, value.
• An interval (range of values) within which we are 95% (for example) sure that μ occurs is called a confidence interval.

Distribution of sample means
[Figure: sampling distribution of ȳ with the central 95% and 99% regions marked.]
• Calculate the proportion of sample means within a range of values.
• Transform the distribution of means to a distribution with mean = 0 and standard deviation = 1: the t statistic,
  $t = \dfrac{\bar{y} - \mu}{s/\sqrt{n}}$
[Figure: the null distribution of t, bell-shaped and centred on 0, plotted from −5 to +5.]

t statistic – interpretation and units
• The deviation between the sample mean and the population mean is expressed in terms of standard errors (i.e. standard deviations of the sampling distribution).
• Hence t values are in units of standard errors.
• For example, t = 2 indicates that the deviation (ȳ − μ) is equal to 2 × the standard error.

The t statistic
• This t statistic follows a t distribution, which has a mathematical formula.
• It is the same as the normal distribution for n > 30; otherwise it is flatter and more spread out than the normal distribution.
• There are different t distributions for different sample sizes below 30 (strictly, for different df, which is n − 1).
[Figure: null distributions of t for N = 30 and N = 3; the N = 3 curve is flatter with heavier tails.]

Two-tailed t values
Probabilities of $t = \dfrac{\bar{y} - \mu}{s/\sqrt{n}}$ occurring outside the range $-t_{df}$ to $+t_{df}$ (columns give the two-tailed probability P):

  df           P=.01    P=.02    P=.05    P=.10    P=.20
  1            63.66    31.82    12.71    6.314    3.078
  2            9.925    6.965    4.303    2.920    1.886
  3            5.841    4.541    3.182    2.353    1.638
  4            4.604    3.747    2.776    2.132    1.533
  5            4.032    3.365    2.571    2.015    1.476
  10           3.169    2.764    2.228    1.812    1.372
  15           2.947    2.602    2.132    1.753    1.341
  20           2.845    2.528    2.086    1.725    1.325
  25           2.787    2.485    2.060    1.708    1.316
  z (normal)   2.575    2.326    1.960    1.645    1.282

[Figure: t distribution for 4 df; the central 95% lies between −2.78 and +2.78.]

One- and two-tailed t values (df = 4)
The same table can be read with paired one-tailed/two-tailed probabilities as the column headings (.005/.01, .01/.02, .025/.05, .05/.10, .10/.20). For 4 df, the central 95% of the two-tailed distribution lies between −2.78 and +2.78, whereas a one-tailed 95% region lies entirely above −2.132 (or, for the other tail, entirely below +2.132).
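These slides stop just short of the confidence-interval formula itself, but the tabulated t values are used exactly as sketched below: the critical t for the chosen df converts the standard error into an interval around the sample mean. The summary statistics are those from the worked example above; SciPy is assumed to be available only to look up the t quantile (reading the value for 38 df from a printed table would do the same job).

```python
# Minimal sketch of a t-based 95% confidence interval for a mean
# (worked-example summary statistics; scipy assumed available for the quantile).
import math
from scipy import stats

n, mean, s = 39, 61.92, 5.24
sem = s / math.sqrt(n)                      # standard error of the mean, ~0.84
t_crit = stats.t.ppf(0.975, df=n - 1)       # two-tailed 95% critical value, df = n - 1

half_width = t_crit * sem
print(f"t({n - 1}) = {t_crit:.3f}")
print(f"95% CI for the mean: {mean - half_width:.2f} to {mean + half_width:.2f}")
```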