Estimating General Parameters from Non-Probability Surveys Using Propensity Score Adjustment
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
-
Choosing the Sample
CHAPTER IV: CHOOSING THE SAMPLE

This chapter is written for survey coordinators and technical resource persons. It will enable you to:
• Understand the basic concepts of sampling.
• Calculate the required sample size for national and subnational estimates.
• Determine the number of clusters to be used.
• Choose a sampling scheme.

UNDERSTANDING THE BASIC CONCEPTS OF SAMPLING

In the context of multiple-indicator surveys, sampling is a process for selecting respondents from a population. In our case, the respondents will usually be the mothers, or caretakers, of children in each household visited, who will answer all of the questions in the Child Health modules. The Water and Sanitation and Salt Iodization modules refer to the whole household and may be answered by any adult. Questions in these modules are asked even where there are no children under the age of 15 years.

In principle, our survey could cover all households in the population. If all mothers being interviewed could provide perfect answers, we could measure all indicators with complete accuracy. However, interviewing all mothers would be time-consuming, expensive and wasteful. It is therefore necessary to interview a sample of these women to obtain estimates of the actual indicators. The difference between the estimate and the actual indicator is called sampling error. Sampling errors are caused by the fact that a sample, and not the entire population, is surveyed. Sampling error can be minimized by taking certain precautions:
• Choose your sample of respondents in an unbiased way.
• Select a large enough sample for your estimates to be precise (see the sketch below).
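As a rough companion to the sample-size discussion above, here is a minimal Python sketch of the standard proportion-based formula, inflated by a design effect for clustering. The chapter's own formula and notation may differ; the function name and default values here are illustrative only.

```python
import math

def required_sample_size(p, margin, deff=1.0, z=1.96, response_rate=1.0):
    # n for a simple random sample: z^2 * p * (1 - p) / margin^2,
    # then inflated by the design effect (deff) to allow for clustering
    # and deflated by the expected response rate.
    n_srs = (z ** 2) * p * (1 - p) / margin ** 2
    return math.ceil(n_srs * deff / response_rate)

# Example: estimate a 50% prevalence within +/- 5 percentage points at
# ~95% confidence, with a design effect of 2 and 90% expected response.
print(required_sample_size(0.5, 0.05, deff=2.0, response_rate=0.9))  # 854
```
-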
A More Representative “Best Representative Value” for Daily Total Column Ozone Reporting
Atmos. Meas. Tech., 10, 4697–4704, 2017. https://doi.org/10.5194/amt-10-4697-2017. © Author(s) 2017. This work is distributed under the Creative Commons Attribution 4.0 License.

A more representative “best representative value” for daily total column ozone reporting. Andrew R. D. Smedley, John S. Rimmer, and Ann R. Webb. Centre for Atmospheric Science, University of Manchester, Manchester, M13 9PL, UK. Correspondence to: Andrew R. D. Smedley ([email protected]). Received: 8 June 2017. Discussion started: 13 July 2017. Revised: 20 October 2017. Accepted: 20 October 2017. Published: 5 December 2017.

Abstract. Long-term trends of total column ozone (TCO), assessments of stratospheric ozone recovery, and satellite validation are underpinned by a reliance on daily “best representative values” from Brewer spectrophotometers and other ground-based ozone instruments. In turn, reporting of these daily total column ozone values to the World Ozone and Ultraviolet Radiation Data Centre (WOUDC) has traditionally been predicated upon a simple choice between direct sun (DS) and zenith sky (ZS) observations. For mid- and high-latitude monitoring sites impacted by cloud cover we discuss the potential deficiencies of this approach in terms of its rejection of otherwise valid observations and capability to evenly sample throughout the day.

1 Introduction. Global ground-based monitoring of total column ozone (TCO) relies on the international network of Brewer spectrophotometers since they were first developed in the 1980s (Kerr et al., 1981), which has expanded the number of sites and measurement possibilities from their still-operating predecessor instrument, the Dobson spectrophotometer. Together these networks provide validation of satellite-retrieved total column ozone as well as instantaneous point measurements that have value for near-real-time low-ozone alerts, particularly when sited near population centres, as inputs to
-
Computing Effect Sizes for Clustered Randomized Trials
Computing Effect Sizes for Clustered Randomized Trials. Terri Pigott, C2 Methods Editor & Co-Chair, Professor, Loyola University Chicago, [email protected]. The Campbell Collaboration, www.campbellcollaboration.org

Computing effect sizes in clustered trials
• In an experimental study, we are interested in the difference in performance between the treatment and control group.
• In this case, we use the standardized mean difference, given by

$$ d = \frac{\bar{Y}^T - \bar{Y}^C}{s_p} $$

where $\bar{Y}^T$ is the treatment group mean, $\bar{Y}^C$ is the control group mean, and $s_p$ is the pooled sample standard deviation.

Variance of the standardized mean difference

$$ S^2(d) = \frac{N^T + N^C}{N^T N^C} + \frac{d^2}{2(N^T + N^C)} $$

where $N^T$ is the sample size for the treatment group, and $N^C$ is the sample size for the control group.

[Slide: treatment observations $Y_1^T, \ldots, Y_{N^T}^T$ and control observations $Y_1^C, \ldots, Y_{N^C}^C$, their overall means, the within-group variances $S^2_{Trt}$ and $S^2_{Cntl}$, and the pooled variance $S^2_{pooled}$.]

In cluster randomized trials, SMD more complex
• In cluster randomized trials, we have clusters such as schools or clinics randomized to treatment and control.
• We have at least two means: mean performance for each cluster, and the overall group mean.
• We also have several components of variance: the within-cluster variance, the variance between cluster means, and the total variance.
• Next slide is an illustration (a computational sketch follows below).

[Slide: treatment clusters $1, \ldots, m^T$ and control clusters $1, \ldots, m^C$ with their cluster-level observations.]
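The two display formulas above translate directly into code. A minimal sketch of the non-clustered case (not the Campbell Collaboration's own software; the example numbers are invented), before any cluster adjustment involving the intraclass correlation:

```python
import math

def smd(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    # d = (mean_T - mean_C) / s_p, with s_p the pooled sample SD.
    s_p = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                    / (n_t + n_c - 2))
    return (mean_t - mean_c) / s_p

def var_smd(d, n_t, n_c):
    # S^2(d) = (N_T + N_C) / (N_T * N_C) + d^2 / (2 * (N_T + N_C))
    return (n_t + n_c) / (n_t * n_c) + d ** 2 / (2 * (n_t + n_c))

d = smd(105.0, 100.0, 15.0, 14.0, n_t=50, n_c=50)
print(round(d, 3), round(var_smd(d, 50, 50), 4))
```
-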
The Effect of Changing Scores for Multi-way Tables with Open-ended Ordered Categories
The Effect of Changing Scores for Multi-way Tables with Open-ended Ordered Categories. Ayfer Ezgi Yilmaz and Tulay Saracbasi.

Abstract: Log-linear models are used to analyze contingency tables. If the variables are ordinal or interval, the selection of score values matters, because the score values affect both the model significance and the parameter estimates. Sometimes an interval variable contains an open-ended category as its first or last category. When a variable has open-ended classes, estimates of the lowermost and/or uppermost values of the distribution must be handled carefully. In that case, the unknown values of the first and last classes can be estimated first, and then the score values can be calculated. In previous studies, the unknown boundaries were estimated using the interquartile range (IQR). In this study, we suggest the interdecile range (IDR), the interpercentile range (IPR), and the mid-distance range (MDR) as alternatives to IQR to detect the effects of score values on model parameters.

Keywords: Contingency tables, Log-linear models, Interval measurement, Open-ended categories, Scores.
2000 AMS Classification: 62H17

1. Introduction

Categorical variables, which have a measurement scale consisting of a set of categories, are important in many fields, often in the medical, social, and behavioral sciences. The tables that represent these variables are called contingency tables, and log-linear model equations are applied to analyze them. Interaction, row-effects, and association parameters are strictly important for interpreting the tables. In the presence of an ordinal variable, score values should be considered. Just as row-effects parameters are used for nominal–ordinal tables, the association parameter is suggested for ordinal–ordinal tables. (A small sketch of the boundary-estimation step follows.)
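A minimal sketch of the boundary-estimation step described in the abstract, under the simplest reading of the rule: close the open-ended last class at its lower bound plus the chosen range measure, then take class midpoints as scores. The paper's exact estimators for IQR/IDR/IPR/MDR may differ; the function name and numbers are illustrative.

```python
def open_ended_scores(boundaries, range_measure):
    # boundaries: lower bounds of each ordered class, e.g. [0, 10, 20, 30]
    # for the classes 0-10, 10-20, 20-30, 30+.
    # The unknown upper bound of the open-ended last class is estimated as
    # its lower bound plus a range measure (IQR here; IDR, IPR, or MDR
    # would simply swap in a different value).
    closed = boundaries + [boundaries[-1] + range_measure]
    return [(lo + hi) / 2 for lo, hi in zip(closed, closed[1:])]

print(open_ended_scores([0, 10, 20, 30], range_measure=15))
# [5.0, 15.0, 25.0, 37.5] -- midpoint scores for the four classes
```
-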
Measures of Dispersion
MEASURES OF DISPERSION

• While measures of central tendency indicate what value of a variable is (in one sense or other) “average” or “central” or “typical” in a set of data, measures of dispersion (or variability or spread) indicate (in one sense or other) the extent to which the observed values are “spread out” around that center, that is, how “far apart” observed values typically are from each other and therefore from some average value (in particular, the mean). Thus:
  – if all cases have identical observed values (and thereby are also identical to [any] average value), dispersion is zero;
  – if most cases have observed values that are quite “close together” (and thereby are also quite “close” to the average value), dispersion is low (but greater than zero); and
  – if many cases have observed values that are quite “far away” from many others (or from the average value), dispersion is high.
• A measure of dispersion provides a summary statistic that indicates the magnitude of such dispersion and, like a measure of central tendency, is a univariate statistic.

[Slides: importance of the magnitude of dispersion around the average; dispersion around the mean test score; Baltimore and Seattle have about the same mean daily temperature (about 65 degrees) but very different dispersions around that mean; dispersion (inequality) around average household income; hypothetical ideological dispersion; dispersion in percent Democratic in congressional districts (CDs).]

• Because dispersion is concerned with how “close together” or “far apart” observed values are (i.e., with the magnitude of the intervals between them), measures of dispersion are defined only for interval (or ratio) variables, or, in any case, variables we are willing to treat as interval (like IDEOLOGY in the preceding charts). (A short computational sketch follows.)
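A short computational sketch of the ideas above using Python's standard library; the temperature figures are invented to echo the Baltimore/Seattle example (similar means, different spreads):

```python
import statistics

baltimore = [62, 65, 68, 64, 66, 63, 67]   # hypothetical daily temperatures
seattle   = [65, 65, 64, 66, 65, 66, 64]   # same mean, tighter spread

for city in (baltimore, seattle):
    print(statistics.mean(city),       # central tendency: 65.0 for both
          statistics.pstdev(city),     # dispersion: larger for Baltimore
          max(city) - min(city))       # range, the crudest dispersion measure

print(statistics.pstdev([65] * 7))     # identical values -> dispersion is 0.0
```
-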
Evaluating Probability Sampling Strategies for Estimating Redd Counts: an Example with Chinook Salmon (Oncorhynchus tshawytscha)
Evaluating probability sampling strategies for estimating redd counts: an example with Chinook salmon (Oncorhynchus tshawytscha). Jean-Yves Courbois, Stephen L. Katz, Daniel J. Isaak, E. Ashley Steel, Russell F. Thurow, A. Michelle Wargo Rub, Tony Olsen, and Chris E. Jordan.

Abstract: Precise, unbiased estimates of population size are an essential tool for fisheries management. For a wide variety of salmonid fishes, redd counts from a sample of reaches are commonly used to monitor annual trends in abundance. Using a 9-year time series of georeferenced censuses of Chinook salmon (Oncorhynchus tshawytscha) redds from central Idaho, USA, we evaluated a wide range of common sampling strategies for estimating the total abundance of redds. We evaluated two sampling-unit sizes (200 and 1000 m reaches), three sample proportions (0.05, 0.10, and 0.29), and six sampling strategies (index sampling, simple random sampling, systematic sampling, stratified sampling, adaptive cluster sampling, and a spatially balanced design). We evaluated the strategies based on their accuracy (confidence interval coverage), precision (relative standard error), and cost (based on travel time). Accuracy increased with increasing number of redds, increasing sample size, and smaller sampling units. The total number of redds in the watershed and budgetary constraints influenced which strategies were most precise and effective. For years with very few redds (<0.15 redds·km⁻¹), a stratified sampling strategy and inexpensive strategies were most efficient, whereas for years with more redds (0.15–2.9 redds·km⁻¹), either of two more expensive systematic strategies was most precise. (A toy simulation of the precision criterion follows.)
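A toy simulation of the paper's precision criterion (relative standard error) for one of the strategies, simple random sampling of reaches. This is an illustration on invented, spatially patchy data, not the authors' code:

```python
import random
import statistics

random.seed(1)
# Invented watershed: 500 reaches with spatially patchy redd counts.
reaches = [max(0, round(random.gauss(3, 4))) if i % 50 < 10 else 0
           for i in range(500)]

def srs_total_estimate(units, n):
    # Expand the mean of a simple random sample of n reaches to a total.
    return len(units) * statistics.mean(random.sample(units, n))

estimates = [srs_total_estimate(reaches, 50) for _ in range(2000)]
rse = statistics.stdev(estimates) / statistics.mean(estimates)
print(sum(reaches), round(statistics.mean(estimates), 1), round(rse, 3))
```

Swapping systematic or stratified selection in place of `random.sample` is the analogous way to compare the relative precision of the other strategies.
-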
Cluster Sampling
Day 5: Sampling – Clustering

SAMPLE / POPULATION. Sampling is estimating the characteristics of the whole population using information collected from a sample group. The sampling process comprises several stages:
• Defining the population of concern
• Specifying a sampling frame, a set of items or events possible to measure
• Specifying a sampling method for selecting items or events from the frame
• Determining the sample size
• Implementing the sampling plan
• Sampling and data collecting

Simple random sampling. In a simple random sample (SRS) of a given size, all such subsets of the frame are given an equal probability. In particular, the variance between individual results within the sample is a good indicator of variance in the overall population, which makes it relatively easy to estimate the accuracy of results. SRS can be vulnerable to sampling error because the randomness of the selection may result in a sample that doesn't reflect the makeup of the population.

Systematic sampling. Systematic sampling (also known as interval sampling) relies on arranging the study population according to some ordering scheme and then selecting elements at regular intervals through that ordered list. Systematic sampling involves a random start and then proceeds with the selection of every kth element from then onwards, where k = (population size / sample size). It is important that the starting point is not automatically the first in the list, but is instead randomly chosen from within the first to the kth element in the list. (A short sketch is given below.)

Stratified sampling. When the population embraces a number of distinct categories, the frame can be organized by these categories into separate "strata." Each stratum is then sampled as an independent sub-population, out of which individual elements can be randomly selected.

Cluster sampling. Sometimes it is more cost-effective to select respondents in groups ("clusters").

Other schemes: quota sampling, minimax sampling, accidental sampling, voluntary sampling, ….
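A short sketch of the systematic (interval) sampling rule just described; the frame and sizes are invented for illustration:

```python
import random

def systematic_sample(frame, n):
    # Interval k = population size // sample size; random start within the
    # first k elements, then every kth element from there onwards.
    k = len(frame) // n
    start = random.randrange(k)
    return frame[start::k][:n]

households = list(range(1, 1001))         # ordered frame of 1000 units
print(systematic_sample(households, 20))  # every 50th unit, random start
```
-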
Two-Way Models for Gravity
Two-Way Models for Gravity. Koen Jochmans.

Abstract: Empirical models for dyadic interactions between n agents often feature agent-specific parameters. Fixed-effect estimators of such models generally have bias of order n⁻¹, which is nonnegligible relative to their standard error. Therefore, confidence sets based on the asymptotic distribution have incorrect coverage. This paper looks at models with multiplicative unobservables and fixed effects. We derive moment conditions that are free of fixed effects and use them to set up estimators that are n-consistent, asymptotically normally distributed, and asymptotically unbiased. We provide Monte Carlo evidence for a range of models. We estimate a gravity equation as an empirical illustration.

I. Introduction

Empirical models for dyadic interactions between n agents frequently contain agent-specific fixed effects. The inclusion of such effects captures unobserved characteristics that are heterogeneous across agents.

…likelihood (Gouriéroux, Monfort, & Trognon, 1984a). As an empirical application, we estimate a gravity equation in levels (as advocated by Santos Silva & Tenreyro, 2006), controlling for multilateral resistance terms (a sketch of this kind of regression follows below). Related work by Fernández-Val and Weidner (2016) on likelihood-based estimation of two-way models shows that (under regularity conditions) the bias of the fixed-effect estimator of two-way models in general is O(n⁻¹) and needs to be corrected in order to perform asymptotically valid inference. Our approach is different, as we work with moment conditions that are free of fixed effects, implying the associated estimators to be asymptotically unbiased. Also, the class of models that Fernández-Val and Weidner (2016) consider and the one under study here are different, and they are not
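As a sketch of the kind of gravity equation in levels mentioned above, here is Poisson pseudo-maximum likelihood with exporter and importer dummies, in the spirit of Santos Silva & Tenreyro (2006). The paper's own moment-based estimator is different, and all data here are simulated:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 20  # hypothetical number of countries
df = pd.DataFrame([(i, j) for i in range(n) for j in range(n) if i != j],
                  columns=["exp", "imp"])
df["log_dist"] = rng.uniform(1, 3, len(df))
a, b = rng.normal(size=n), rng.normal(size=n)      # two-way fixed effects
df["trade"] = rng.poisson(np.exp(a[df["exp"]] + b[df["imp"]]
                                 - 1.0 * df["log_dist"]))  # flows in levels

# PPML: Poisson regression of trade in levels on dummies plus log distance.
X = pd.get_dummies(df[["exp", "imp"]].astype(str), drop_first=True)
X["log_dist"] = df["log_dist"]
X = sm.add_constant(X).astype(float)
ppml = sm.GLM(df["trade"], X, family=sm.families.Poisson()).fit()
print(round(ppml.params["log_dist"], 2))  # close to the true elasticity of -1
```
-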
Introduction to Survey Sampling and Analysis Procedures (Chapter)
SAS/STAT® 9.3 User’s Guide: Introduction to Survey Sampling and Analysis Procedures (Chapter). SAS® Documentation.

This document is an individual chapter from SAS/STAT® 9.3 User’s Guide. The correct bibliographic citation for the complete manual is as follows: SAS Institute Inc. 2011. SAS/STAT® 9.3 User’s Guide. Cary, NC: SAS Institute Inc. Copyright © 2011, SAS Institute Inc., Cary, NC, USA. All rights reserved. 1st electronic book, July 2011. For more information about SAS Publishing's books and electronic products, visit support.sas.com/publishing or call 1-800-727-3228.
-
Analyses in Support of Rebasing & Updating the Medicare Home Health Payment Rates
Analyses in Support of Rebasing & Updating the Medicare Home Health Payment Rates – CY 2014 Home Health Prospective Payment System Final Rule. November 27, 2013. Prepared for: Centers for Medicare and Medicaid Services, Chronic Care Policy Group, Division of Home Health & Hospice. Prepared by: Brant Morefield, T.J. Christian, and Henry Goldberg, Abt Associates Inc., 55 Wheeler Street, Cambridge, MA 02138.

Table of Contents
1. Introduction and Overview
2. Data
   2.1 Claims Data
       2.1.1 Data Acquisition
       2.1.2 Processing
   2.2 Cost Report Data
       2.2.1 Data Acquisition
       2.2.2 Processing
-
Sampling and Household Listing Manual [DHSM4]
Sampling and Household Listing Manual: Demographic and Health Surveys Methodology

This document is part of the Demographic and Health Survey’s DHS Toolkit of methodology for the MEASURE DHS Phase III project, implemented from 2008–2013. This publication was produced for review by the United States Agency for International Development (USAID). It was prepared by MEASURE DHS/ICF International.

Demographic and Health Survey Sampling and Household Listing Manual. ICF International, Calverton, Maryland, USA. September 2012.

MEASURE DHS is a five-year project to assist institutions in collecting and analyzing data needed to plan, monitor, and evaluate population, health, and nutrition programs. MEASURE DHS is funded by the U.S. Agency for International Development (USAID). The project is implemented by ICF International in Calverton, Maryland, in partnership with the Johns Hopkins Bloomberg School of Public Health/Center for Communication Programs, the Program for Appropriate Technology in Health (PATH), Futures Institute, Camris International, and Blue Raster. The main objectives of the MEASURE DHS program are to: 1) provide improved information through appropriate data collection, analysis, and evaluation; 2) improve coordination and partnerships in data collection at the international and country levels; 3) increase host-country institutionalization of data collection capacity; 4) improve data collection and analysis tools and methodologies; and 5) improve the dissemination and utilization of data. For information about the Demographic and Health Surveys (DHS) program, write to DHS, ICF International, 11785 Beltsville Drive, Suite 300, Calverton, MD 20705, U.S.A. (Telephone: 301-572-0200; fax: 301-572-0999; e-mail: [email protected]; Internet: http://www.measuredhs.com).
-
Taylor's Power Law and Fixed-Precision Sampling
Taylor’s power law and fixed-precision sampling: application to abundance of fish sampled by gillnets in an African lake. Meng Xu, Jeppe Kolding, and Joel E. Cohen.

Abstract: Taylor’s power law (TPL) describes the variance of population abundance as a power-law function of the mean abundance for a single species or a group of species. Using consistently sampled long-term (1958–2001) multimesh capture data from Lake Kariba in Africa, we showed that TPL robustly described the relationship between the temporal mean and the temporal variance of the captured fish assemblage abundance (regardless of species), separately when abundance was measured by numbers of individuals and by aggregate weight. The strong correlation between the mean of abundance and the variance of abundance was not altered after adding other abiotic or biotic variables into the TPL model. We analytically connected the parameters of TPL when abundance was measured separately by the aggregate weight and by the aggregate number, using a weight–number scaling relationship. We utilized TPL to find the number of samples required for fixed-precision sampling and compared the number of samples when sampling was performed with a single gillnet mesh size and with multiple mesh sizes. These results facilitate optimizing the sampling design to estimate fish assemblage abundance with specified precision, as needed in stock management and conservation. (A small fitting sketch follows.)
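TPL states that variance ≈ a·(mean)^b across groups, so the fit is a straight line on log–log axes. A minimal sketch with simulated catch data (not the Lake Kariba series; all numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(42)
# Simulated catch matrix: 44 years (rows) by 12 species groups (columns).
true_means = rng.uniform(1, 100, size=12)
catches = (rng.poisson(true_means, size=(44, 12))
           * rng.uniform(0.8, 1.2, (44, 12)))   # noisy counts or weights

m = catches.mean(axis=0)              # temporal mean abundance per group
v = catches.var(axis=0, ddof=1)       # temporal variance per group

# TPL: v = a * m**b  <=>  log v = log a + b * log m, fit by least squares.
b, log_a = np.polyfit(np.log(m), np.log(v), 1)
print(round(b, 2), round(np.exp(log_a), 2))   # exponent b and coefficient a
```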