Sampling Techniques
Recommended publications
Sampling Design & Weighting
In the original National Science Foundation grant, support was given for a modified probability sample. Samples for the 1972 through 1974 surveys followed this design. This modified probability design, described below, introduces the quota element at the block level. The NSF renewal grant, awarded for the 1975-1977 surveys, provided funds for a full probability sample design, a design which is acknowledged to be superior. Thus, having the wherewithal to shift to a full probability sample with predesignated respondents, the 1975 and 1976 studies were conducted with a transitional sample design, viz., one-half full probability and one-half block quota. The sample was divided into two parts for several reasons: 1) to provide data for possibly interesting methodological comparisons; and 2) so that, if there were differences over time, it would be possible to attribute them either to shifts in sample design or to changes in response patterns. For example, if the percentage of respondents who indicated that they were "very happy" increased by 10 percent between 1974 and 1976, it would be possible to determine whether the increase was due to the change in sample design or to an actual increase in happiness. There is considerable controversy and ambiguity about the merits of these two samples. Textbook tests of significance assume full rather than modified probability samples, and simple random rather than clustered random samples. In general, the question of what to do with a mixture of samples is no more easily solved than the question of what to do with the "pure" types.
Stratified Sampling Using Cluster Analysis: a Sample Selection Strategy for Improved Generalizations from Experiments
Evaluation Review, 1-31, © The Author(s) 2014. DOI: 10.1177/0193841X13516324. Elizabeth Tipton, Department of Human Development, Teachers College, Columbia University, 525 W 120th St, Box 118, NY 10027, USA. Email: [email protected]
Abstract. Background: An important question in the design of experiments is how to ensure that the findings from the experiment are generalizable to a larger population. This concern with generalizability is particularly important when treatment effects are heterogeneous and when selecting units into the experiment using random sampling is not possible—two conditions commonly met in large-scale educational experiments. Method: This article introduces a model-based balanced-sampling framework for improving generalizations, with a focus on developing methods that are robust to model misspecification. Additionally, the article provides a new method for sample selection within this framework: First, units in an inference population are divided into relatively homogenous strata using cluster analysis, and then the sample is selected using distance rankings. Result: In order to demonstrate and evaluate the method, a reanalysis of a completed experiment is conducted. This example compares samples selected using the new method with the actual sample used in the experiment. Results indicate that even under high nonresponse, balance is better on most covariates and that fewer coverage errors result. Conclusion: The article concludes with a discussion of additional benefits and limitations of the method.
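The core idea, clustering population units on covariates to form homogeneous strata and then recruiting the units in each stratum closest to its centroid, can be sketched in a few lines of R. This is only an illustrative outline with simulated covariates, not Tipton's actual procedure or data; the covariate names, the choice of five clusters, and the four units recruited per stratum are all assumptions.

```r
# Sketch of cluster-based stratification for sample selection (illustrative only).
set.seed(42)

# Hypothetical inference population: 500 schools with two covariates
pop <- data.frame(
  pct_free_lunch = runif(500, 0, 100),
  enrollment     = rnorm(500, 600, 150)
)

# Step 1: divide the population into relatively homogeneous strata via k-means
k  <- 5
km <- kmeans(scale(pop), centers = k, nstart = 25)
pop$stratum <- km$cluster

# Step 2: within each stratum, rank units by distance to the stratum centroid
# and recruit the closest units first (here, 4 per stratum)
scaled <- scale(pop[, c("pct_free_lunch", "enrollment")])
pop$dist_to_center <- sqrt(rowSums((scaled - km$centers[pop$stratum, ])^2))

chosen <- unlist(lapply(split(seq_len(nrow(pop)), pop$stratum), function(idx) {
  idx[order(pop$dist_to_center[idx])][1:4]
}))

selected <- pop[chosen, ]
table(selected$stratum)   # 4 units recruited from each stratum
```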
Stratified Random Sampling from Streaming and Stored Data
Trong Duc Nguyen (Iowa State University, USA), Ming-Hung Shih (Iowa State University, USA), Divesh Srivastava (AT&T Labs–Research, USA), Srikanta Tirthapura (Iowa State University, USA), Bojian Xu (Eastern Washington University, USA).
ABSTRACT: Stratified random sampling (SRS) is a widely used sampling technique for approximate query processing. We consider SRS on continuously arriving data streams, and make the following contributions. We present a lower bound that shows that any streaming algorithm for SRS must have (in the worst case) a variance that is an Ω(r) factor away from the optimal, where r is the number of strata. We present S-VOILA, a streaming algorithm for SRS that is locally variance-optimal. Results from experiments on real and synthetic data show that S-VOILA results in a variance that is typically close to an optimal offline algorithm, which was given the entire input beforehand.
SRS provides the flexibility to emphasize some strata over others through controlling the allocation of sample sizes; for instance, a stratum with a high standard deviation can be given a larger allocation than another stratum with a smaller standard deviation. In the above example, if we desire a stratified sample of size three, it is best to allocate a smaller sample of size one to the first stratum and a larger sample size of two to the second stratum, since the standard deviation of the second stratum is higher. Doing so, the variance of the estimate of the population mean further reduces to approximately 1.23 × 10³. The strength of SRS is that a stratified random sample can be used to answer
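The allocation logic described here, in which a stratum with a higher standard deviation receives a larger share of the sample, is the classical Neyman (variance-optimal) allocation. A minimal R sketch follows; the stratum sizes, standard deviations, and total sample size of three are hypothetical stand-ins rather than the figures from the paper's running example.

```r
# Neyman (variance-optimal) allocation: allocate a fixed budget n across strata in
# proportion to N_h * S_h. All figures below are hypothetical.
N_h <- c(100, 100)   # stratum population sizes
S_h <- c(2, 8)       # stratum standard deviations; the second stratum varies more
n   <- 3             # total stratified sample size

alloc <- n * (N_h * S_h) / sum(N_h * S_h)
round(alloc)         # roughly 1 unit to the first stratum, 2 to the second

# Variance of the stratified estimator of the population mean under this allocation
# (finite-population corrections ignored for simplicity)
N <- sum(N_h)
sum((N_h / N)^2 * S_h^2 / round(alloc))
```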
Sampling Methods
It's impractical to poll an entire population—say, all 145 million registered voters in the United States. That is why pollsters select a sample of individuals that represents the whole population. Understanding how respondents come to be selected to be in a poll is a big step toward determining how well their views and opinions mirror those of the voting population. To sample individuals, polling organizations can choose from a wide variety of options. Pollsters generally divide them into two types: those that are based on probability sampling methods and those based on non-probability sampling techniques. For more than five decades probability sampling was the standard method for polls. But in recent years, as fewer people respond to polls and the costs of polls have gone up, researchers have turned to non-probability based sampling methods. For example, they may collect data online from volunteers who have joined an Internet panel. In a number of instances, these non-probability samples have produced results that were comparable or, in some cases, more accurate in predicting election outcomes than probability-based surveys. Now, more than ever, journalists and the public need to understand the strengths and weaknesses of both sampling techniques to effectively evaluate the quality of a survey, particularly election polls. Probability and Non-probability Samples. In a probability sample, all persons in the target population have a chance of being selected for the survey sample, and we know what that chance is. For example, in a telephone survey based on random digit dialing (RDD) sampling, researchers know the chance or probability that a particular telephone number will be selected.
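Under an equal-probability design, the "known chance" of selection is simply the ratio of sample size to frame size. The short R sketch below illustrates this for a hypothetical telephone frame; the frame and sample sizes are invented, and real RDD designs involve more structure (such as number banks) than this simplification shows.

```r
# Simplified illustration of a known selection probability in an equal-probability
# telephone sample. Frame and sample sizes are hypothetical.
N <- 5e6                      # eligible numbers in the sampling frame
n <- 1500                     # numbers selected for the survey
n / N                         # each number's chance of selection: 3e-04

set.seed(1)
dialed <- sample.int(N, n)    # draw the sample of numbers (indices only)
length(dialed)
```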
MRS Guidance on How to Read Opinion Polls
What are opinion polls? MRS guidance on how to read opinion polls, June 2016. MRS has produced this Guidance Note to help individuals evaluate, understand and interpret opinion polls. This guidance is primarily for non-researchers who commission and/or use opinion polls. Researchers can use this guidance to support their understanding of the reporting rules contained within the MRS Code of Conduct. Opinion Polls – The Essential Points. What is an Opinion Poll? An opinion poll is a survey of public opinion obtained by questioning a representative sample of individuals selected from a clearly defined target audience or population. For example, it may be a survey of c. 1,000 UK adults aged 16 years and over. When conducted appropriately, opinion polls can add value to the national debate on topics of interest, including voting intentions. Typically, individuals or organisations commission a research organisation to undertake an opinion poll. The results of an opinion poll are produced either for private use or for publication. What is sampling? Opinion polls are carried out among a sub-set of a given target audience or population and this sub-set is called a sample. Whilst the number included in a sample may differ, opinion poll samples are typically between c. 1,000 and 2,000 participants. When a sample is selected from a given target audience or population, the possibility of a sampling error is introduced. This is because the demographic profile of the sub-sample selected may not be identical to the profile of the target audience/population.
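A concrete way to see the sampling error described here is the margin of error usually quoted with a poll of roughly 1,000 adults. The R sketch below computes it under simple random sampling assumptions; the sample size and the conservative 50% proportion are illustrative choices, not MRS figures.

```r
# 95% margin of error for a poll proportion under simple random sampling.
n <- 1000                          # typical opinion poll sample size
p <- 0.5                           # most conservative assumed proportion
moe <- 1.96 * sqrt(p * (1 - p) / n)
round(100 * moe, 1)                # about +/- 3.1 percentage points
```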
Categorical Data Analysis
Related topics/headings: Categorical data analysis; or, Nonparametric statistics; or, chi-square tests for the analysis of categorical data. OVERVIEW. For our hypothesis testing so far, we have been using parametric statistical methods. Parametric methods (1) assume some knowledge about the characteristics of the parent population (e.g. normality) (2) require measurement equivalent to at least an interval scale (calculating a mean or a variance makes no sense otherwise). Frequently, however, there are research problems in which one wants to make direct inferences about two or more distributions, either by asking if a population distribution has some particular specifiable form, or by asking if two or more population distributions are identical. These questions occur most often when variables are qualitative in nature, making it impossible to carry out the usual inferences in terms of means or variances. For such problems, we use nonparametric methods. Nonparametric methods (1) do not depend on any assumptions about the parameters of the parent population (2) generally assume data are only measured at the nominal or ordinal level. There are two common types of hypothesis-testing problems that are addressed with nonparametric methods: (1) How well does a sample distribution correspond with a hypothetical population distribution? As you might guess, the best evidence one has about a population distribution is the sample distribution. The greater the discrepancy between the sample and theoretical distributions, the more we question the "goodness" of the theory. EX: Suppose we wanted to see whether the distribution of educational achievement had changed over the last 25 years. We might take as our null hypothesis that the distribution of educational achievement had not changed, and see how well our modern-day sample supported that theory.
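A chi-square goodness-of-fit test is the standard nonparametric tool for the first type of problem described above. The R sketch below tests whether a present-day sample matches a hypothesized educational-achievement distribution; all counts and null proportions are invented for illustration.

```r
# Chi-square goodness-of-fit: does the sample distribution of educational
# achievement match the hypothesized (25-year-old) population distribution?
observed   <- c(LessThanHS = 120, HSGrad = 310, SomeCollege = 280, Bachelor = 290)
null_props <- c(0.20, 0.35, 0.25, 0.20)   # hypothesized distribution under H0

chisq.test(x = observed, p = null_props)
# A small p-value is evidence that the distribution has changed; a large one means
# the sample is consistent with the null (no change) hypothesis.
```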
Lecture 8: Sampling Methods
Lecture 8: Sampling Methods. Donglei Du ([email protected]), Faculty of Business Administration, University of New Brunswick, Fredericton, NB, Canada E3B 9Y2. ADM 2623: Business Statistics.
Table of contents: 1. Sampling methods: why sampling; probability vs non-probability sampling methods; sampling with replacement vs without replacement; random sampling methods. 2. Simple random sampling with and without replacement. 3. Sampling error vs non-sampling error. 4. Sampling distribution of a sample statistic: histogram of the sample mean under SRR. 5. Distribution of the sample mean under SRR: the central limit theorem.
Why sampling? The physical impossibility of checking all items in the population, and it would be too time-consuming; the studying of all the items in a population would not be cost effective; the sample results are usually adequate; the destructive nature of certain tests.
Sampling Methods. Probability Sampling: each data unit in the population has a known likelihood of being included in the sample.
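Items 4 and 5 of the outline (the histogram of the sample mean and the central limit theorem) can be reproduced with a short simulation. The sketch below uses a skewed, made-up population and simple random sampling with replacement; it is not taken from the lecture slides.

```r
# Sampling distribution of the sample mean under simple random sampling with
# replacement, illustrating the central limit theorem. Population is simulated.
set.seed(2623)
population <- rexp(10000, rate = 1)          # a right-skewed population, mean 1

sample_means <- replicate(5000, mean(sample(population, 30, replace = TRUE)))

hist(sample_means, breaks = 40,
     main = "Sample means, n = 30", xlab = "Sample mean")
c(mean(sample_means), sd(sample_means))      # close to 1 and roughly sd(population)/sqrt(30)
```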
IBM SPSS Complex Samples Business Analytics
IBM SPSS Complex Samples: Correctly compute complex samples statistics. When you conduct sample surveys, use a statistics package dedicated to producing correct estimates for complex sample data. IBM® SPSS® Complex Samples provides specialized statistics that enable you to correctly and easily compute statistics and their standard errors from complex sample designs. You can apply it to:
• Survey research – Obtain descriptive and inferential statistics for survey data.
• Market research – Analyze customer satisfaction data.
• Health research – Analyze large public-use datasets on public health topics such as health and nutrition or alcohol use and traffic fatalities.
• Social science – Conduct secondary research on public survey datasets.
• Public opinion research – Characterize attitudes on policy issues.
Highlights: • Increase the precision of your sample or ensure a representative sample with stratified sampling. • Select groups of sampling units with clustered sampling. • Select an initial sample, then create a second-stage sample with multistage sampling.
SPSS Complex Samples provides you with everything you need for working with complex samples. It includes:
• An intuitive Sampling Wizard that guides you step by step through the process of designing a scheme and drawing a sample.
• An easy-to-use Analysis Preparation Wizard to help prepare public-use datasets that have been sampled, such as the National Health Interview Survey data from the Centers for Disease Control and Prevention
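The stratified, clustered, and multistage schemes the Sampling Wizard walks through can also be expressed directly. The base-R sketch below draws a simple two-stage sample (clusters first, then units within selected clusters); it uses an invented frame and is not SPSS syntax or output.

```r
# Base-R sketch of a simple multistage (two-stage) sample: first sample clusters,
# then sample units within each selected cluster. Frame and sizes are hypothetical.
set.seed(7)
frame <- data.frame(unit_id = 1:2000,
                    cluster = rep(1:100, each = 20))   # 100 clusters of 20 units

stage1 <- sample(unique(frame$cluster), 10)            # stage 1: select 10 clusters
stage2 <- do.call(rbind, lapply(stage1, function(cl) {
  in_cluster <- frame[frame$cluster == cl, ]
  in_cluster[sample(nrow(in_cluster), 5), ]            # stage 2: 5 units per cluster
}))

nrow(stage2)          # 50 sampled units
table(stage2$cluster) # 5 units drawn from each selected cluster
```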
Using Sampling Matching Methods to Remove Selectivity in Survey Analysis with Categorical Data
Han Zheng (s1950142). Supervisor: Dr. Ton de Waal (CBS). Second Supervisor: Prof. Willem Jan Heiser (Leiden University). Master's thesis, defended on Month Day, 2019. Specialization: Data Science, Statistical Science for the Life and Behavioural Sciences.
Abstract: A problem for survey datasets is that the data may come from a selective group of the population. This makes it hard to produce unbiased and accurate estimates for the entire population. One way to overcome this problem is to use sample matching. In sample matching, one draws a sample from the population using a well-defined sampling mechanism. Next, units in the survey dataset are matched to units in the drawn sample using some background information. Usually the background information is insufficiently detailed to enable exact matching, where a unit in the survey dataset is matched to the same unit in the drawn sample. Instead one usually needs to rely on synthetic methods of matching, where a unit in the survey dataset is matched to a similar unit in the drawn sample. This study developed several methods of sample matching for categorical data. A selective panel represents the available, complete but biased dataset, which is used to estimate the target variable distribution of the population. The results show that, unexpectedly, exact matching performs best among all matching methods, and that using weighted sampling instead of random sampling did not contribute to increasing the accuracy of matching. Although predictive mean matching lost the competition against exact matching, proper adjustment in transforming categorical variables into numerical values would substantially increase its matching accuracy.
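A minimal version of the exact-matching step, in which each unit of a drawn reference sample is paired with a panel unit having the same combination of categorical background variables, can be sketched as follows. The data, variable names, and sizes are simulated for illustration and do not reproduce the thesis's implementation.

```r
# Exact sample matching on categorical background variables: for every unit in a
# drawn reference sample, find a panel unit with identical covariates. Simulated data.
set.seed(10)
make_units <- function(n) data.frame(
  gender = sample(c("F", "M"), n, replace = TRUE),
  age    = sample(c("18-34", "35-54", "55+"), n, replace = TRUE),
  edu    = sample(c("low", "mid", "high"), n, replace = TRUE)
)
panel      <- make_units(5000)   # the available (possibly selective) panel
ref_sample <- make_units(300)    # sample drawn with a well-defined sampling mechanism

panel_key  <- with(panel,      paste(gender, age, edu, sep = "|"))
target_key <- with(ref_sample, paste(gender, age, edu, sep = "|"))

# Match each reference unit to one panel unit with the same covariate combination
matched <- sapply(target_key, function(k) {
  hits <- which(panel_key == k)
  if (length(hits) == 0) NA_integer_ else hits[sample.int(length(hits), 1)]
})
mean(is.na(matched))             # share of reference units without an exact match
```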
3 Stratified Simple Random Sampling
• Suppose the population is partitioned into disjoint sets of sampling units called strata. If a sample is selected within each stratum, then this sampling procedure is known as stratified sampling.
• If we can assume the strata are sampled independently across strata, then (i) the estimator of the population total t or the population mean ȳU can be found by combining stratum sample sums or means using appropriate weights, and (ii) the variances of estimators associated with the individual strata can be summed to obtain the variance of an estimator associated with the whole population. (Given independence, the variance of a sum equals the sum of the individual variances.)
• (ii) implies that only within-stratum variances contribute to the variance of an estimator. Thus, the basic motivating principle behind using stratification to produce an estimator with small variance is to partition the population so that units within each stratum are as similar as possible. This is known as the stratification principle.
• In ecological studies, it is common to stratify a geographical region into subregions that are similar with respect to a known variable such as elevation, animal habitat type, vegetation types, etc. because it is suspected that the y-values may vary greatly across strata while they will tend to be similar within each stratum. Analogously, when sampling people, it is common to stratify on variables such as gender, age groups, income levels, education levels, marital status, etc.
• Sometimes strata are formed based on sampling convenience. For example, suppose a large study region appears to be homogeneous (that is, there are no spatial patterns) and is stratified based on the geographical proximity of sampling units.
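Points (i) and (ii) can be made concrete with a small worked example in R: stratum sample means are combined with weights N_h/N, and the within-stratum variance contributions are summed. The stratum sizes and simulated data below are hypothetical.

```r
# Stratified estimator of the population mean: weighted combination of stratum
# means, with variance equal to the sum of within-stratum contributions.
N_h <- c(400, 350, 250)     # stratum population sizes
n_h <- c(20, 15, 10)        # within-stratum sample sizes
set.seed(3)
samp <- mapply(function(m, mu) rnorm(m, mean = mu, sd = 2),
               n_h, c(10, 20, 30), SIMPLIFY = FALSE)

N      <- sum(N_h)
ybar_h <- sapply(samp, mean)
s2_h   <- sapply(samp, var)

ybar_st <- sum((N_h / N) * ybar_h)                          # estimate of the population mean
v_st    <- sum((N_h / N)^2 * (1 - n_h / N_h) * s2_h / n_h)  # summed stratum variances (with fpc)
c(estimate = ybar_st, std_error = sqrt(v_st))
```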
7.2 Sampling Plans and Experimental Design
Statistics 1, Fall 2008.
Inferential Statistics • Goal: Make inferences about the population based on data from a sample • Examples – Estimate population parameters, such as μ or σ, based on data from the sample.
Methods of Sampling • I'll cover five common sampling methods; more exist – Simple random sampling – Stratified random sampling – Cluster sampling – Systematic 1-in-k sampling – Convenience sampling.
Simple Random Sampling • Each sample of size n from a population of size N has an equal chance of being selected • Implies that each subject has an equal chance of being included in the sample • Example: Select a random sample of size 5 from this class.
Computer-Generated SRS • Obtain a list of the population (sampling frame) • Use Excel – Generate a random number for each subject using the rand() function – copy | paste special | values to fix the random numbers – sort on the random number; the sample is the first n in this list • Use R (R is a free statistical software package) – sample(1:N,n), N = population size, n = sample size – Returns n randomly selected digits between 1 and N – default is sampling WOR (without replacement).
Stratified Random Sampling • The population is divided into subgroups, or strata • A SRS is selected from each stratum • Ensures representation from each stratum; no guarantee with SRS • Example: Stratify the class by gender, randomly select 3 female and 3 male students • Example: Voter poll – stratify the nation by state, randomly select 100 voters in each state.
Cluster Sampling • The subjects in
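A runnable version of the slide's R recipe is shown below; the class size of 40 and the gender split are made-up examples, and sample() without replace = TRUE draws without replacement, as the slide notes.

```r
# Simple random sample of n subjects from a population of size N, as on the slide
# (sample() draws without replacement by default). Class size is hypothetical.
N <- 40
n <- 5
set.seed(123)
sample(1:N, n)                         # n randomly selected labels between 1 and N

# Stratified variant from the slide: 3 students from each gender stratum
females <- 1:22; males <- 23:40        # hypothetical split of the class roster
c(sample(females, 3), sample(males, 3))
```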
Overview of Propensity Score Analysis
Learning Objectives: • Describe the advantages of propensity score methods for reducing bias in treatment effect estimates from observational studies • Present Rubin's causal model and its assumptions • Enumerate and overview the steps of propensity score analysis • Describe the characteristics of data from complex surveys and their relevance to propensity score analysis • Enumerate resources for learning the R programming language and software • Identify major resources available in the R software for propensity score analysis.
1.1. Introduction. The objective of this chapter is to provide the common theoretical foundation for all propensity score methods and provide a brief description of each method. It will also introduce the R software, point the readers toward resources for learning the R language, and briefly introduce packages available in R relevant to propensity score analysis. Propensity score analysis methods aim to reduce bias in treatment effect estimates obtained from observational studies, which are studies estimating treatment effects with research designs that do not have random assignment of participants to conditions. The term observational studies as used here includes both studies where there is no random assignment but there is manipulation of conditions and studies that lack both random assignment and manipulation of conditions. Research designs to estimate treatment effects that do not have random assignment to conditions are also referred to as quasi-experimental or nonexperimental designs.
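As a preview of the steps the chapter enumerates, the R sketch below estimates propensity scores with a logistic regression and uses inverse-probability-of-treatment weights to estimate a treatment effect. The data are simulated and the weighting estimator is only one of the propensity score methods covered; this is not the book's worked example.

```r
# Minimal propensity score workflow: model treatment assignment, form inverse
# probability weights, compare weighted outcome means. Data are simulated.
set.seed(2017)
n  <- 1000
x1 <- rnorm(n)
x2 <- rbinom(n, 1, 0.4)
treat <- rbinom(n, 1, plogis(-0.5 + 0.8 * x1 + 0.6 * x2))  # nonrandom assignment
y     <- 2 + 1.5 * treat + x1 + 0.5 * x2 + rnorm(n)        # true effect = 1.5

ps <- fitted(glm(treat ~ x1 + x2, family = binomial))      # estimated propensity scores
w  <- ifelse(treat == 1, 1 / ps, 1 / (1 - ps))             # ATE-type weights

weighted.mean(y[treat == 1], w[treat == 1]) -
  weighted.mean(y[treat == 0], w[treat == 0])              # should be near 1.5
```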