Coverage and Nonresponse Errors of Sampling Frames for Mail, Telephone, Face-To-Face, and Web Surveys by Frances Chumney
Total Pages: 16
File Type: PDF; Size: 1020 KB
Recommended publications
ESOMAR/GRBN Guideline for Online Sample Quality
ESOMAR/GRBN Guideline for Online Sample Quality. ESOMAR, the World Association for Social, Opinion and Market Research, is the essential organisation for encouraging, advancing and elevating market research: www.esomar.org. GRBN, the Global Research Business Network, connects 38 research associations and over 3500 research businesses on five continents: www.grbn.org. © 2015 ESOMAR and GRBN. Issued February 2015. This Guideline is drafted in English and the English text is the definitive version. The text may be copied, distributed and transmitted under the condition that appropriate attribution is made and the following notice is included: "© 2015 ESOMAR and GRBN". Contents: 1 Introduction and scope; 2 Definitions; 3 Key requirements: 3.1 The claimed identity of each research participant should be validated; 3.2 Providers must ensure that no research participant completes the same survey more than once; 3.3 Research participant engagement should be measured and reported on; 3.4 The identity and personal …
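Requirement 3.2 in the outline above (no research participant completes the same survey more than once) lends itself to an automated check by a sample provider. The sketch below is a minimal illustration in Python, not part of the ESOMAR/GRBN guideline itself; the record layout and field names (survey_id, panelist_id, device_fingerprint) are hypothetical.

```python
# Illustrative de-duplication check for survey completes (hypothetical field names).

def flag_duplicate_completes(completes):
    """Return records that repeat a (survey_id, panelist_id) pair already seen."""
    seen = set()
    duplicates = []
    for record in completes:
        key = (record["survey_id"], record["panelist_id"])
        if key in seen:
            duplicates.append(record)
        else:
            seen.add(key)
    return duplicates

completes = [
    {"survey_id": "S1", "panelist_id": "P001", "device_fingerprint": "abc"},
    {"survey_id": "S1", "panelist_id": "P002", "device_fingerprint": "def"},
    {"survey_id": "S1", "panelist_id": "P001", "device_fingerprint": "abc"},  # repeat complete
]
print(flag_duplicate_completes(completes))  # flags the third record
```

In practice providers combine several signals (panel ID, digital fingerprinting, survey metadata), so a single-key check like this is only the simplest building block.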
Sampling Design & Weighting
Appendix A: Sampling Design & Weighting. In the original National Science Foundation grant, support was given for a modified probability sample. Samples for the 1972 through 1974 surveys followed this design. This modified probability design, described below, introduces the quota element at the block level. The NSF renewal grant, awarded for the 1975-1977 surveys, provided funds for a full probability sample design, a design which is acknowledged to be superior. Thus, having the wherewithal to shift to a full probability sample with predesignated respondents, the 1975 and 1976 studies were conducted with a transitional sample design, viz., one-half full probability and one-half block quota. The sample was divided into two parts for several reasons: 1) to provide data for possibly interesting methodological comparisons; and 2) on the chance that there were differences over time, to make it possible to attribute those differences either to shifts in sample design or to changes in response patterns. For example, if the percentage of respondents who indicated that they were "very happy" increased by 10 percent between 1974 and 1976, it would be possible to determine whether the increase was due to the change in sample design or to an actual increase in happiness (see the illustrative sketch below). There is considerable controversy and ambiguity about the merits of these two samples. Textbook tests of significance assume full rather than modified probability samples, and simple random rather than clustered random samples. In general, the question of what to do with a mixture of samples is no more easily solved than the question of what to do with the "pure" types.
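The reasoning in the excerpt above can be made concrete with a small, hypothetical calculation: if the two half-samples fielded in the same year (full probability versus block quota) give similar estimates of the share answering "very happy", a between-year shift is more plausibly a real change than a design artifact. The figures and the two-proportion z statistic below are purely illustrative, not GSS results.

```python
# Hypothetical illustration of the transitional split-sample comparison.
from math import sqrt

def two_proportion_z(p1, n1, p2, n2):
    """z statistic for the difference between two independent proportions."""
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Invented 1976 half-sample estimates of the "very happy" share.
z_design = two_proportion_z(0.34, 750, 0.36, 750)  # full probability vs. block quota
print(f"z for the design comparison: {z_design:.2f}")
# A small |z| here would suggest the design switch alone does not explain
# a 10-point shift observed between survey years.
```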
5.1 Survey Frame Methodology
Regional Course on Statistical Business Registers: Data Sources, Maintenance and Quality Assurance. Perak, Malaysia, 21-25 May 2018. 4.1 Survey frame methodology. Review: for sampling purposes, a snapshot of the live register at a particular point in time is needed. The collection of active statistical units in the snapshot is referred to as a frozen frame. A sampling frame for a survey is a subset of the frozen frame that includes the units and characteristics needed for the survey. A single frozen frame should be used for all surveys in a given reference period. Creating sampling frames (specifications): three main things need to be specified to draw appropriate sampling frames: the target population (which units?), the variables of interest, and the reference period. Choice of statistical unit: enterprises are typically the most appropriate units to use for financial data; establishments or kind-of-activity units are typically the most appropriate for production data; establishments or local units should be used if regional disaggregation is necessary. Typically a single type of unit is used for each survey, but there are exceptions where target populations include multiple unit types. Enterprise groups are useful for financial analyses and for studying company strategies, but they are not normally the target populations for surveys because they are too diverse and unstable. Surveys of employment: the sampling frames for these include all active units that are employers.
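As a rough illustration of the frame logic summarized above, the sketch below filters a frozen-frame snapshot down to a sampling frame for an employment survey by specifying the target population, the variables of interest, and the reference period. It is a schematic example only; the field names (is_active, is_employer, reference_period, and so on) are assumptions, not part of the course material.

```python
# Schematic sketch: derive a sampling frame from a frozen-frame snapshot
# (hypothetical field names).

def build_sampling_frame(frozen_frame, reference_period):
    """Keep only the units and characteristics needed for an employment survey."""
    return [
        {k: unit[k] for k in ("unit_id", "unit_type", "employment", "region")}
        for unit in frozen_frame
        if unit["reference_period"] == reference_period
        and unit["is_active"]
        and unit["is_employer"]  # employment surveys cover all active employers
    ]

frozen_frame = [
    {"unit_id": 1, "unit_type": "establishment", "employment": 12, "region": "North",
     "is_active": True, "is_employer": True, "reference_period": "2018Q2"},
    {"unit_id": 2, "unit_type": "establishment", "employment": 0, "region": "South",
     "is_active": True, "is_employer": False, "reference_period": "2018Q2"},
]
print(build_sampling_frame(frozen_frame, "2018Q2"))  # only unit 1 qualifies
```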
1988: Coverage Error in Establishment Surveys
Coverage Error in Establishment Surveys. Carl A. Konschnik, U.S. Bureau of the Census.

I. Definition of Coverage Error

Coverage error, which includes both undercoverage and overcoverage, is defined as "the error in an estimate that results from (1) failure to include all units belonging to the defined population or failure to include specified units in the conduct of the survey (undercoverage), and (2) inclusion of some units erroneously either because of a defective frame or because of inclusion of unspecified units or inclusion of specified units more than once in the actual survey (overcoverage)" (Office of Federal Statistical Policy and Standards, 1978). Coverage errors are closely related to but clearly distinct from content errors, which are defined as the "errors of observation or objective measurement, of recording, of imputation, or of other processing which results in associating a wrong value of the characteristic with a specified unit" (Office of Federal Statistical Policy and Standards, 1978).

[…] in the survey planning stage results in a sampled population which is too far removed from the target population. Since estimates based on data drawn from the sampled population apply properly only to the sampled population, interest in the target population dictates that the sampled population be as close as practicable to the target population. Nevertheless, in the following discussion of the sources, measurement, and control of coverage error, only deficiencies relative to the sampled population are included. Thus, when speaking of defective frames, only those deficiencies are discussed which arise when the population which is sampled differs from the population intended to be sampled (the sampled population).

Coverage Error Source Categories

We will now look briefly at the two categories of coverage error--defective frames and …
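The undercoverage/overcoverage distinction in Konschnik's definition can be made concrete with a toy comparison of a frame against a fully known target population. The sketch below is illustrative only; in real establishment surveys the target population is never fully observed, so these rates have to be estimated (for example from area samples or administrative sources).

```python
# Toy illustration of undercoverage and overcoverage rates.

target_population = {"A", "B", "C", "D", "E"}        # units that should be covered
frame_units       = {"B", "C", "D", "E", "F", "G"}   # units actually on the frame

undercovered = target_population - frame_units       # missed units -> undercoverage
overcovered  = frame_units - target_population       # erroneous inclusions -> overcoverage

undercoverage_rate = len(undercovered) / len(target_population)
overcoverage_rate  = len(overcovered) / len(frame_units)

print(f"undercoverage: {undercoverage_rate:.0%}, overcoverage: {overcoverage_rate:.0%}")
# -> undercoverage: 20%, overcoverage: 33%
```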
American Community Survey Design and Methodology (January 2014), Chapter 15: Improving Data Quality by Reducing Non-Sampling Error
American Community Survey Design and Methodology (January 2014). Chapter 15: Improving Data Quality by Reducing Non-Sampling Error. Version 2.0, January 30, 2014. Table of contents: 15.1 Overview; 15.2 Coverage Error; 15.3 Nonresponse Error; 15.4 Measurement Error; 15.5 Processing Error; 15.6 Census Bureau Statistical Quality Standards; 15.7 References.
Nonresponse in Survey Research: Proceedings of the Eighth International Workshop on Household Survey Nonresponse, 24-26 September 1997
ZUMA (Zentrum für Umfragen, Methoden und Analysen), No. 4: Nonresponse in Survey Research. Proceedings of the Eighth International Workshop on Household Survey Nonresponse, 24-26 September 1997. Edited by Achim Koch and Rolf Porst. Copyright © 1998 by ZUMA, Mannheim, Germany. All rights reserved. No part of this book may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without permission in writing from the publisher. Editors: Achim Koch and Rolf Porst. Publisher: Zentrum für Umfragen, Methoden und Analysen (ZUMA). ZUMA is a member of the Gesellschaft Sozialwissenschaftlicher Infrastruktureinrichtungen e.V. (GESIS). ZUMA Board Chair: Prof. Dr. Max Kaase. Director: Prof. Dr. Peter Ph. Mohler. P.O. Box 12 21 55, D-68072 Mannheim, Germany. Phone: +49-621-1246-0. Fax: +49-621-1246-100. Internet: http://www.social-science-gesis.de/. Printed by Druck & Kopie hanel, Mannheim. ISBN 3-924220-15-8. Contents: Preface and Acknowledgements; Current Issues in Household Survey Nonresponse at Statistics Canada (Larry Swain and David Dolson); Assessment of Efforts to Reduce Nonresponse Bias: 1996 Survey of Income and Program Participation (SIPP) (Preston Jay Waite, Vicki J. Huggins and Stephen P. Mack); The Impact of Nonresponse on the Unemployment Rate in the Current Population Survey (CPS) (Clyde Tucker and Brian A. Harris-Kojetin); An Evaluation of Unit Nonresponse Bias in the Italian Households Budget Survey (Claudio Ceccarelli, Giuliana Coccia and Fabio Crescenzi); Nonresponse in the 1996 Income Survey (Supplement to the Microcensus) (Eva Havasi and Adam Marton); The Stability of Nonresponse Rates According to Socio-Demographic Categories (Metka Zaletel and Vasja Vehovar); Understanding Household Survey Nonresponse Through Geo-demographic Coding Schemes (John King); Response Distributions when TDE is Introduced (Håkan L. …)
Coverage Bias in European Telephone Surveys: Developments of Landline and Mobile Phone Coverage Across Countries and Over Time
www.ssoar.info. Coverage Bias in European Telephone Surveys: Developments of Landline and Mobile Phone Coverage across Countries and over Time. Anja Mohorko, Edith de Leeuw, and Joop Hox, Department of Methodology and Statistics, Utrecht University. Published version, journal article, 25.01.2013. Suggested citation: Mohorko, A., de Leeuw, E., & Hox, J. (2013). Coverage Bias in European Telephone Surveys: Developments of Landline and Mobile Phone Coverage across Countries and over Time. Survey Methods: Insights from the Field, 1-13. https://doi.org/10.13094/SMIF-2013-00002. Also retrievable from http://surveyinsights.org/?p=828. Terms of use: this document is made available under a CC BY-NC-ND licence (Attribution-NonCommercial-NoDerivatives). For more information see https://creativecommons.org/licenses/by-nc-nd/4.0. This version is citable under https://nbn-resolving.org/urn:nbn:de:0168-ssoar-333873. Abstract: With the decrease of landline phones in the last decade, telephone survey methodologists face a new challenge to overcome coverage bias.
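The coverage bias that motivates this article is usually written, in the total survey error literature, as the product of the noncoverage rate and the difference between the covered and noncovered subpopulations. The notation below is a standard textbook form rather than the authors' own derivation.

```latex
% Bias of the covered-population mean as an estimator of the full-population mean:
\operatorname{Bias}(\bar{y}_C) \;=\; \bar{Y}_C - \bar{Y}
  \;=\; \frac{N_{NC}}{N}\,\bigl(\bar{Y}_C - \bar{Y}_{NC}\bigr)
```

Here N_NC/N is the noncoverage rate (for example, the share of mobile-only households when sampling from a landline frame), so the bias grows both with shrinking landline coverage and with how much the noncovered group differs on the survey variable.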
A Total Survey Error Perspective on Comparative Surveys
A Total Survey Error Perspective on Comparative Surveys. October 2, 2014, ITSEW, Washington DC. Beth-Ellen Pennell, University of Michigan; Lars Lyberg, Stockholm University; Peter Mohler, University of Mannheim; Kristen Cibelli Hibben, University of Michigan (presenting author); Gelaye Worku, Stockholm University. © 2014 by the Regents of the University of Michigan. Presentation outline: 1. Background; 2. Integrated total survey error model, covering the contribution to errors of representation (coverage, sampling, and nonresponse errors) and measurement (validity and measurement errors), with an expanded view of measurement error; 3. Conclusion. Background: 1. The scope, scale, and number of multicultural, multiregional or multinational (3M) surveys continue to grow (examples: the European Social Survey, 20+ countries; the World Values Survey, 75+ countries; the Gallup World Poll, 160 countries). 2. The purpose is comparability: an estimate for one country should be reasonably comparable to another. 3. However, comparability in 3M surveys is affected by various error sources that we are used to considering, plus some additional ones. Background, continued: 1. 3M surveys have an error typology that differs from that of monocultural surveys: error sources in 3M surveys are typically amplified, and 3M surveys often require additional operations such as adaptation and translation. 2. Few attempts have been made to adapt the TSE framework to 3M surveys (except see Smith's (2011) refined TSE model). 3. The proposed TSE typology integrates error sources and the associated methodological and operational challenges that may limit comparability. Total Survey Error (Groves et al., 2004). An Integrated TSE Model: some general points: 1. …
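One common, simplified way to formalize the TSE perspective outlined above is to write the mean squared error of an estimate as squared bias plus variance, with contributions from the representation side (coverage, sampling, nonresponse) and the measurement side. The decomposition below is a generic textbook rendering offered for orientation, not the specific integrated model of this presentation.

```latex
% Generic (simplified) total survey error decomposition for an estimate \hat{\theta}:
\operatorname{MSE}(\hat{\theta})
  = \bigl(B_{\mathrm{cov}} + B_{\mathrm{nr}} + B_{\mathrm{meas}} + B_{\mathrm{proc}}\bigr)^{2}
  + \operatorname{Var}_{\mathrm{samp}} + \operatorname{Var}_{\mathrm{meas}} + \operatorname{Var}_{\mathrm{proc}}
```

In a 3M setting each component can differ in size across countries, and it is these country-level differences that drive the comparison error the presentation goes on to discuss.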
Use of Sampling in the Census
Regional Workshop on the Operational Guidelines of the WCA 2020. Amman, Jordan, 1-4 April 2019. Use of sampling in the census, Technical Session 5.2. Oleg Cara, Statistician, Agricultural Censuses Team, FAO Statistics Division (ESS). Contents: complete enumeration censuses versus sample-based censuses; uses of sampling at other census stages; sample designs based on list frames; sample designs based on area frames; sample designs based on multiple frames; choice of sample design; country examples. Background: as mentioned in the WCA 2020, Vol. 1 (Chapter 4, paragraph 4.34), when deciding whether to conduct a census by complete or sample enumeration, in addition to efficiency considerations (precision versus costs), one should take into account: the desired level of aggregation for census data; the use of the census as a frame for ongoing sampling surveys; the data content of the census; and the capacity to deal with sampling methods and subsequent statistical analysis based on samples. Complete enumeration versus sample enumeration census. Advantages of complete enumeration: 1. reliable census results for the smallest administrative and geographic units and on rare events (such as crops/livestock types); 2. provides a reliable frame for the organization of subsequent regular infra-annual and annual sample surveys, and in terms of frames it is much less demanding in respect of the holdings' characteristics; 3. requires fewer highly qualified statistical personnel with expert knowledge of sampling methods than a census conducted on a sample basis. Advantages of sample enumeration: 1. is generally less costly than a complete enumeration; 2. contributes to decreasing the overall response burden; 3. requires a smaller number of enumerators and supervisors than a census conducted by complete enumeration, so non-sampling errors can be expected to be lower because of the employment of better trained enumerators and supervisors.
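The precision-versus-cost trade-off summarized above can be illustrated with a standard sample-size calculation for a proportion, including a finite population correction. The numbers are invented for illustration and are not taken from the workshop material.

```python
# Standard sample size for estimating a proportion at ~95% confidence,
# with finite population correction (illustrative numbers only).
from math import ceil

def sample_size(N, margin_of_error=0.05, z=1.96, p=0.5):
    """Required n to estimate proportion p within the margin of error."""
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2  # infinite-population size
    return ceil(n0 / (1 + (n0 - 1) / N))                # finite population correction

print(sample_size(200_000))  # ~384 holdings suffice for a national-level proportion
print(sample_size(500))      # ~218 needed inside a single small administrative unit
```

The second call shows why sample enumeration loses its cost advantage when reliable results are needed for the smallest administrative units or for rare items, which is exactly the case for complete enumeration made in the slides.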
Ethical Considerations in the Total Survey Error Context
Ethical Considerations in the Total Survey Error Context. Julie de Jong. 3MCII International Conference, July 26-29, 2016, Chicago, Illinois. © 2014 by the Regents of the University of Michigan. Acknowledgements: invited paper in Advances in Comparative Survey Methods: Multicultural, Multinational and Multiregional Contexts (3MC), edited by T.P. Johnson, B.E. Pennell, I.A.L. Stoop, & B. Dorer. New York, NY: John Wiley & Sons. Special thanks to Kirsten Alcser, Christopher Antoun, Ashley Bowers, Judi Clemens, Christina Lien, Steve Pennell, and Kristen Cibelli Hibben, who contributed substantially to earlier iterations of the chapter "Ethical Considerations" in the Cross-Cultural Survey Guidelines from which this chapter draws material. Overview: legal and professional regulations and socio-political contexts can lead to differences in how ethical protocols are realized across countries. Differences in the implementation of ethical standards contribute to the sources of total survey error (TSE). TSE describes the statistical properties of survey estimates by incorporating a variety of error sources and is used in the survey design stage, in the data collection process, and in evaluation and post-survey adjustments (Groves & Lyberg, 2010). The TSE framework is generally used to design and evaluate a single-country survey; the framework has been extended to 3MC surveys and the impact of TSE on overall comparison error (Smith 2011). Objectives …
Flash Eurobarometer 422: Cross-Border Cooperation in the EU
Flash Eurobarometer 422: Cross-Border Cooperation in the EU. Summary. Fieldwork: June 2015. Publication: September 2015. This survey has been requested by the European Commission, Directorate-General for Regional and Urban Policy, and co-ordinated by the Directorate-General for Communication. This document does not represent the point of view of the European Commission; the interpretations and opinions contained in it are solely those of the authors. Flash Eurobarometer 422, TNS Political & Social. Project title: "Cross-border cooperation in the EU". Linguistic version: EN. Catalogue number: KN-04-15-608-EN-N. ISBN 978-92-79-50789-2. DOI 10.2776/964957. © European Union, 2015. Conducted by TNS Political & Social at the request of the European Commission, Directorate-General Regional and Urban Policy; survey co-ordinated by the European Commission, Directorate-General for Communication (DG COMM "Strategy, Corporate Communication Actions and Eurobarometer" Unit). Table of contents: Introduction; I. Awareness of EU regional policy-funded cross-border cooperation activities; II. Going abroad to other countries; III. Social trust of the EU population living in border regions covered by the Interreg cross-border cooperation programmes …
Standards and Guidelines for Reporting on Coverage
Standards and Guidelines for Reporting on Coverage. Martin Provost, Statistics Canada, R. H. Coats Building, 15th Floor, Tunney's Pasture, Ottawa ONT, Canada, K1A 0T6, [email protected]. Key words: quality indicators, coverage, ideal, target, survey and frame populations. 1. Introduction. The level of coverage provided by a survey or a statistical program has a major impact on the overall quality of the survey process. It is therefore important that users of the survey data be informed of the coverage quality provided by the survey and of any potential problem associated with identified defects in terms of coverage. Those measures of coverage add to the set of quality indicators that together provide a complete picture that facilitates an adequate use of the survey data and an optimization of the survey resources to allow for its constant improvement. But how should one define coverage (based on what units, etc.)? And how should survey managers report on it? This paper outlines the content and describes the procedures, issues and topics which will have to be covered in standards and guidelines for reporting on coverage at Statistics Canada, to standardize the terminology, the definitions and the measures relating to these coverage issues. These standards and guidelines have been proposed for inclusion in Statistics Canada's Quality Assurance Framework as a supplement to this agency's Policy on Informing Users of Data Quality and Methodology. 2. Statistics Canada's Quality Assurance Framework. Statistics Canada (STC) has always invested considerable effort in achieving an adequate level of quality for all of its products and services.