Questions Planned for the 2020 Census and American Community Survey: Federal Legislative and Program Uses
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
-
Statistical Inference: How Reliable Is a Survey?
Math 203, Fall 2008: Statistical Inference: How reliable is a survey? Consider a survey with a single question, to which respondents are asked to give an answer of yes or no. Suppose you pick a random sample of n people, and you find that the proportion that answered yes is p̂. Question: How close is p̂ to the actual proportion p of people in the whole population who would have answered yes? In order for there to be a reliable answer to this question, the sample size, n, must be big enough so that the sample distribution is close to a bell-shaped curve (i.e., close to a normal distribution). But even if n is big enough that the distribution is close to a normal distribution, usually you need to make n even bigger in order to make sure your margin of error is reasonably small. Thus the first thing to do is to be sure n is big enough for the sample distribution to be close to normal. The industry standard for being close enough is for n to be big enough so that n > 9(1 − p)/p and n > 9p/(1 − p) both hold. When p is about 50%, n can be as small as 10, but when p gets close to 0 or close to 1, the sample size n needs to get bigger. If p is 1% or 99%, then n must be at least 892, for example. (Note also that n here depends on p but not on the size of the whole population.) See Figures 1 and 2, showing frequency histograms for the number of yes respondents if p = 1% when the sample size n is 10 versus 1000 (these data were obtained by running a computer simulation taking 10,000 samples). -
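The rule above is easy to check numerically. The following sketch (Python, written for this summary rather than taken from the course notes) evaluates the two inequalities and reproduces the two worked numbers from the excerpt, 10 and 892:

```python
# Minimal check of the sample-size rule quoted above:
#   n > 9*(1 - p)/p   and   n > 9*p/(1 - p).
# Exact rational arithmetic avoids floating-point trouble right at the boundary.
from fractions import Fraction

def sample_size_ok(n, p):
    """True if n satisfies both inequalities for a yes-proportion p."""
    p = Fraction(p)
    return n > 9 * (1 - p) / p and n > 9 * p / (1 - p)

def minimum_n(p):
    """Smallest n strictly greater than the larger of the two bounds."""
    p = Fraction(p)
    bound = 9 * max((1 - p) / p, p / (1 - p))
    return int(bound) + 1

print(sample_size_ok(10, Fraction(1, 2)))   # True
print(minimum_n(Fraction(1, 2)))            # 10  -- "n can be as small as 10" when p is 50%
print(minimum_n(Fraction(1, 100)))          # 892 -- "n must be at least 892" when p is 1%
```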
The Economic Censuses of the United States
Social and Economic Statistics Administration, Bureau of the Census. By Charles G. Langham. Issued 1973. U.S. Department of Commerce, Frederick B. Dent, Secretary. Social and Economic Statistics Administration, Edward D. [...], Administrator. Bureau of the Census: Vincent P. Barabba, Acting Director; Associate Director for Economic [...]; Associate Director for Statistical Standards and [...]. Data User Services Office: Robert B. [...], Chief.

ACKNOWLEDGMENTS: This report was prepared in the Data User Services Office under the direction of Charles G. Langham, with review by many persons in the Bureau.

Library of Congress Card No.: 13-600143

SUGGESTED CITATION: U.S. Bureau of the Census. The Economic Censuses of the United States, by Charles G. Langham. Working Paper. Washington, D.C.: U.S. Government Printing Office, 1973. For sale by Publication Distribution Section, Social and Economic Statistics Administration, Washington, D.C. 20233. Price 50 cents.

Contents (page numbers refer to the original report):
Economic Censuses in the 19th Century ... 1
The First "Economic Censuses" ... 1
Economic Censuses Discontinued, Resumed, and Augmented ... 1
Improvements in the 1850 Census ... 2
The "Kennedy Report" and the Civil War ... 3
Economic Censuses and the Industrial Revolution ... 4
Economic Censuses Adjust to the Times: The Censuses of 1880, 1890, and 1900 ... 4
Economic Censuses in the 20th Century ... 8
Enumerations on Specialized Economic Topics, 1902 to 1937 ... 8
Censuses of Manufacturing and Mineral Industries, 1905 to 1920 ... 8
Wartime Data Needs and Biennial Censuses of Manufactures ... 9
Economic Censuses and the Great Depression ... 10
The War and Postwar Developments: Economic Censuses Discontinued, Resumed, and Rescheduled ... 13
The 1954 Budget Crisis ... 15
Postwar Developments in Economic Census Taking: The Computer, and "Administrative Records" ... -
2019 TIGER/Line Shapefiles Technical Documentation
TIGER/Line® Shapefiles: 2019 Technical Documentation™. Issued September 2019.

SUGGESTED CITATION:
FILES: 2019 TIGER/Line Shapefiles (machine-readable data files) / prepared by the U.S. Census Bureau, 2019.
TECHNICAL DOCUMENTATION: 2019 TIGER/Line Shapefiles Technical Documentation / prepared by the U.S. Census Bureau, 2019.

U.S. Department of Commerce, Economics and Statistics Administration: Wilbur Ross, Secretary; Karen Dunn Kelley, Under Secretary for Economic Affairs.
U.S. Census Bureau: Dr. Steven Dillingham, Director; Dr. Ron Jarmin, Deputy Director and Chief Operating Officer; Albert Fontenot, Associate Director for Decennial Census Programs.
Geography Division: Deirdre Dalpiaz Bishop, Chief; Andrea G. Johnson, Assistant Division Chief for Address and Spatial Data Updates; Michael R. Ratcliffe, Assistant Division Chief for Geographic Standards, Criteria, Research, and Quality; Monique Eleby, Assistant Division Chief for Geographic Program Management and External Engagement; Gregory F. Hanks, Jr., Deputy Division Chief; Laura Waggoner, Assistant Division Chief for Geographic Data Collection and Products.

Table of Contents: 1. Introduction ... 1-1

1. Introduction
1.1 What is a Shapefile? A shapefile is a geospatial data format for use in geographic information system (GIS) software. Shapefiles spatially describe vector data such as points, lines, and polygons, representing, for instance, landmarks, roads, and lakes. The Environmental Systems Research Institute (Esri) created the format for use in their software, but the shapefile format works in additional Geographic Information System (GIS) software as well.
1.2 What are TIGER/Line Shapefiles? The TIGER/Line Shapefiles are the fully supported, core geographic product from the U.S. Census Bureau. They are extracts of selected geographic and cartographic information from the U.S. -
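As an aside to the shapefile description above, here is a minimal, hypothetical sketch of opening a TIGER/Line layer with the open-source geopandas package; the file name is an assumption (any extracted TIGER/Line layer would do), and the inspection steps are generic rather than prescribed by this documentation excerpt.

```python
# Hypothetical sketch: read one downloaded TIGER/Line shapefile and inspect it.
# geopandas reads the .shp together with its sidecar files (.dbf, .shx, .prj).
import geopandas as gpd

roads = gpd.read_file("tl_2019_06001_roads.shp")   # assumed path to an all-roads county layer

print(roads.crs)                          # TIGER/Line ships in NAD83 (EPSG:4269)
print(roads.geometry.geom_type.unique())  # e.g. ['LineString'] -- roads are line features
print(roads.columns.tolist())             # attribute fields carried with each shape
print(roads.head())                       # first few features with their attributes
```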
SAMPLING DESIGN & WEIGHTING
APPENDIX A: SAMPLING DESIGN & WEIGHTING. In the original National Science Foundation grant, support was given for a modified probability sample. Samples for the 1972 through 1974 surveys followed this design. This modified probability design, described below, introduces the quota element at the block level. The NSF renewal grant, awarded for the 1975-1977 surveys, provided funds for a full probability sample design, a design which is acknowledged to be superior. Thus, having the wherewithal to shift to a full probability sample with predesignated respondents, the 1975 and 1976 studies were conducted with a transitional sample design, viz., one-half full probability and one-half block quota.

The sample was divided into two parts for several reasons: 1) to provide data for possibly interesting methodological comparisons; and 2) on the chance that there are some differences over time, that it would be possible to assign these differences to either shifts in sample designs, or changes in response patterns. For example, if the percentage of respondents who indicated that they were "very happy" increased by 10 percent between 1974 and 1976, it would be possible to determine whether it was due to changes in sample design, or an actual increase in happiness.

There is considerable controversy and ambiguity about the merits of these two samples. Textbook tests of significance assume full rather than modified probability samples, and simple random rather than clustered random samples. In general, the question of what to do with a mixture of samples is no more easily solved than the question of what to do with the "pure" types. -
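The design-versus-response decomposition described above can be illustrated with a toy calculation. The sketch below uses invented numbers, not GSS data: because the transitional year contains both half-samples, the gap between them within that year estimates the design effect, and the change measured on a constant design is what remains.

```python
# Illustrative decomposition only, not the GSS's actual procedure.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 0/1 indicators for answering "very happy".
happy_1974_quota = rng.binomial(1, 0.34, 1500)   # 1974: block-quota design only
happy_1976_quota = rng.binomial(1, 0.40, 750)    # 1976: block-quota half-sample
happy_1976_fullp = rng.binomial(1, 0.44, 750)    # 1976: full-probability half-sample

raw_change = (happy_1976_quota.mean() + happy_1976_fullp.mean()) / 2 - happy_1974_quota.mean()
design_gap = happy_1976_fullp.mean() - happy_1976_quota.mean()          # within-1976 design effect
same_design_change = happy_1976_quota.mean() - happy_1974_quota.mean()  # change holding design fixed

print(f"raw 1974->1976 change:       {raw_change:.3f}")
print(f"design gap within 1976:      {design_gap:.3f}")
print(f"change holding design fixed: {same_design_change:.3f}")
```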
2020 Census Barriers, Attitudes, and Motivators Study Survey Report
2020 Census Barriers, Attitudes, and Motivators Study Survey Report. A New Design for the 21st Century. January 24, 2019, Version 2.0. Prepared by Kyley McGeeney, Brian Kriz, Shawnna Mullenax, Laura Kail, Gina Walejko, Monica Vines, Nancy Bates, and Yazmín García Trejo.

Table of Contents:
List of Tables ... iv
List of Figures ... iv
Executive Summary ... 1
Introduction ... 3
Background ... 5
CBAMS I ... 5
CBAMS II ... 6
2020 CBAMS Survey Climate ... -
Summary of Human Subjects Protection Issues Related to Large Sample Surveys
Summary of Human Subjects Protection Issues Related to Large Sample Surveys. U.S. Department of Justice, Bureau of Justice Statistics. Joan E. Sieber. June 2001, NCJ 187692.

U.S. Department of Justice, Office of Justice Programs: John Ashcroft, Attorney General. Bureau of Justice Statistics: Lawrence A. Greenfeld, Acting Director. Report of work performed under a BJS purchase order to Joan E. Sieber, Department of Psychology, California State University at Hayward, Hayward, California 94542, (510) 538-5424, e-mail [email protected]. The author acknowledges the assistance of Caroline Wolf Harlow, BJS Statistician and project monitor. Ellen Goldberg edited the document. Contents of this report do not necessarily reflect the views or policies of the Bureau of Justice Statistics or the Department of Justice. This report and others from the Bureau of Justice Statistics are available through the Internet at http://www.ojp.usdoj.gov/bjs

Table of Contents:
1. Introduction ... 2
   Limitations of the Common Rule with respect to survey research ... 2
2. Risks and benefits of participation in sample surveys ... 5
   Standard risk issues, researcher responses, and IRB requirements ... 5
   Long-term consequences ... 6
   Background issues ... 6
3. Procedures to protect privacy and maintain confidentiality ... 9
   Standard issues and problems ... 9
   Confidentiality assurances and their consequences ... 21
   Emerging issues of privacy and confidentiality ... 22
4. Other procedures for minimizing risks and promoting benefits ... 23
   Identifying and minimizing risks ... 23
   Identifying and maximizing possible benefits ... 26
5. Procedures for responding to requests for help or assistance ... 28
   Standard procedures ... 28
   Background considerations ... 28
   A specific recommendation: An experiment within the survey ... 32
6. -
THE CENSUS in U.S. HISTORY Library of Congress of Library
Bill of Rights in Action, Constitutional Rights Foundation, Fall 2019, Volume 35, No. 1: THE CENSUS IN U.S. HISTORY

Photo (Library of Congress): A census taker talks to a group of women, men, and children in 1870.

The Constitution requires that a census be taken every ten years. This means counting all persons, citizens and noncitizens alike, in the United States. In addition to conducting a population count, the census has evolved to collect massive amounts of information on the growth and development of the nation.

Why Do We Have a Census? The original purpose of the census was to determine the number of representatives each state is entitled to in the U.S. House of Representatives. The apportionment (distribution) of seats in the House depends on the population of each state. Every state is guaranteed at least one seat. After the first census in 1790, the House decided a state was allowed one representative for each approxi-

After the 1910 census, the House set the total number of House seats at 435. Since then, when Congress reapportions itself after each census, those states gaining population may pick up more seats in the House at the expense of states declining in population that have to lose seats.

Who is counted in apportioning seats in the House? The Constitution originally included "the whole Number of free persons" plus indentured servants but excluded "Indians not taxed." What about slaves? The North and South argued about this at the Constitutional Convention, finally agreeing to the three-fifths compromise. Slaves would be counted in each census, but only three-fifths of the count would be included in a state's population for the purpose of House apportionment. -
Survey Experiments
IU Workshop in Methods – 2019. Survey Experiments: Testing Causality in Diverse Samples. Trenton D. Mize, Department of Sociology & Advanced Methodologies (AMAP), Purdue University.

Contents:
INTRODUCTION ... 8
  Overview ... 8
  What is a survey experiment? ... 9
  What is an experiment? ... 10
  Independent and dependent variables ... 11
  Experimental Conditions ... 12
WHY CONDUCT A SURVEY EXPERIMENT? ... 13
  Internal, external, and construct validity ... -
Survey Nonresponse Bias and the Coronavirus Pandemic∗
Coronavirus Infects Surveys, Too: Survey Nonresponse Bias and the Coronavirus Pandemic∗. Jonathan Rothbaum, U.S. Census Bureau†; Adam Bee, U.S. Census Bureau‡. May 3, 2021.

Abstract: Nonresponse rates have been increasing in household surveys over time, increasing the potential of nonresponse bias. We make two contributions to the literature on nonresponse bias. First, we expand the set of data sources used. We use information returns filings (such as W-2's and 1099 forms) to identify individuals in respondent and nonrespondent households in the Current Population Survey Annual Social and Economic Supplement (CPS ASEC). We link those individuals to income, demographic, and socioeconomic information available in administrative data and prior surveys and the decennial census. We show that survey nonresponse was unique during the pandemic: nonresponse increased substantially and was more strongly associated with income than in prior years. Response patterns changed by education, Hispanic origin, and citizenship and nativity. Second, we adjust for nonrandom nonresponse using entropy balance weights, a computationally efficient method of adjusting weights to match a high-dimensional vector of moment constraints. In the 2020 CPS ASEC, nonresponse biased income estimates up substantially, whereas in other years, we do not find evidence of nonresponse bias in income or poverty statistics. With the survey weights, real median household income was $68,700 in 2019, up 6.8 percent from 2018. After adjusting for nonresponse bias during the pandemic, we estimate that real median household income in 2019 was 2.8 percent lower than the survey estimate, at $66,790.

∗This report is released to inform interested parties of ongoing research and to encourage discussion. -
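The entropy-balance adjustment mentioned in the abstract can be sketched generically. The code below is an illustrative implementation of Hainmueller-style entropy balancing (solving the dual with scipy), not the authors' production code; the covariates, base weights, and target moments are invented for the example.

```python
# Generic entropy balancing: reweight respondents so weighted covariate means
# match known target moments (e.g., from administrative data). Illustrative only.
import numpy as np
from scipy.optimize import minimize

def entropy_balance(X, base_weights, target_moments):
    """Weights proportional to base_weights * exp(Z @ lam), with lam chosen so
    that the weighted means of X equal target_moments."""
    X = np.asarray(X, float)
    d = np.asarray(base_weights, float)
    m = np.asarray(target_moments, float)
    Z = (X - m) / X.std(axis=0)          # center at targets, scale for stability

    def dual(lam):
        # log-partition of the tilted weights; its gradient is the weighted mean
        # of Z, so the minimizer makes the moment constraints hold.
        return np.log(d @ np.exp(Z @ lam))

    lam = minimize(dual, np.zeros(X.shape[1]), method="BFGS").x
    w = d * np.exp(Z @ lam)
    return w / w.sum()

# Toy example: respondents skew low-income; calibrate mean income (in $000s)
# and a college-degree share to hypothetical population targets.
rng = np.random.default_rng(0)
X = np.column_stack([rng.normal(60, 20, 1000),      # income ($000s)
                     rng.binomial(1, 0.30, 1000)])  # college indicator
w = entropy_balance(X, base_weights=np.ones(1000), target_moments=[66.8, 0.35])
print((w[:, None] * X).sum(axis=0))  # weighted means, approximately [66.8, 0.35]
```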
Evaluating Survey Questions
Evaluating Survey Questions. Chase H. Harrison, Ph.D., Program on Survey Research, Harvard University.

What Respondents Do to Answer a Question:
• Comprehend Question
• Retrieve Information from Memory
• Summarize Information
• Report an Answer

Problems in Answering Survey Questions – Failure to comprehend:
• If respondents don't understand question, they cannot answer it
• If different respondents understand question differently, they end up answering different questions

Problems in Answering Survey Questions – Failure to recall:
• Questions assume respondents have information
• If respondents never learned something, they cannot provide information about it
• Problems with researcher putting more emphasis on subject than respondent

Problems in Answering Survey Questions – Problems Summarizing:
• If respondents are thinking about a lot of things, they can inconsistently summarize
• If the way the respondent remembers something doesn't readily correspond to the question, they may be inconsistent

Problems in Answering Survey Questions – Problems Reporting Answers:
• Confusing or vague answer formats lead to variability
• Interactions with interviewers or technology can lead to problems (sensitive or embarrassing responses)

Evaluating Survey Questions:
• Early stage
  – Focus groups to understand topics or dimensions of measures
• Pre-Test Stage
  – Cognitive interviews to understand question meaning
  – Pre-test under typical field

Focus Groups:
• Qualitative research tool
• Used to develop ideas for questionnaires
• Used to understand scope of issues -
MRS Guidance on How to Read Opinion Polls
What are opinion polls? MRS guidance on how to read opinion polls. June 2016. www.mrs.org.uk

MRS Guidance Note: How to read opinion polls. MRS has produced this Guidance Note to help individuals evaluate, understand and interpret Opinion Polls. This guidance is primarily for non-researchers who commission and/or use opinion polls. Researchers can use this guidance to support their understanding of the reporting rules contained within the MRS Code of Conduct.

Opinion Polls – The Essential Points

What is an Opinion Poll? An opinion poll is a survey of public opinion obtained by questioning a representative sample of individuals selected from a clearly defined target audience or population. For example, it may be a survey of c. 1,000 UK adults aged 16 years and over. When conducted appropriately, opinion polls can add value to the national debate on topics of interest, including voting intentions. Typically, individuals or organisations commission a research organisation to undertake an opinion poll. The results of an opinion poll are produced either for private use or for publication.

What is sampling? Opinion polls are carried out among a sub-set of a given target audience or population, and this sub-set is called a sample. Whilst the number included in a sample may differ, opinion poll samples are typically between c. 1,000 and 2,000 participants. When a sample is selected from a given target audience or population, the possibility of a sampling error is introduced. This is because the demographic profile of the sub-sample selected may not be identical to the profile of the target audience / population. -
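For a sense of the scale of sampling error at the sample sizes mentioned above, here is a small illustrative calculation (not part of the MRS note) of the conventional 95% margin of error under simple random sampling:

```python
# Illustrative only: 95% margin of error for an estimated proportion,
# assuming simple random sampling (real polls often apply design corrections).
import math

def margin_of_error(p_hat: float, n: int, z: float = 1.96) -> float:
    """Half-width of the normal-approximation 95% confidence interval."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

for n in (1000, 2000):
    moe = margin_of_error(0.50, n)   # p_hat = 0.5 is the worst case
    print(f"n = {n}: about +/- {moe:.1%}")
# n = 1000: about +/- 3.1%
# n = 2000: about +/- 2.2%
```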
The Return of Religious Antisemitism? The Evidence from World Values Survey Data
Munich Personal RePEc Archive. The return of religious Antisemitism? The evidence from World Values Survey data. Tausch, Arno (Innsbruck University and Corvinus University). 17 November 2018. Online at https://mpra.ub.uni-muenchen.de/90093/ MPRA Paper No. 90093, posted 18 Nov 2018 03:28 UTC.

The return of religious Antisemitism? The evidence from World Values Survey data. Arno Tausch.

Abstract:
1) Background: This paper addresses the return of religious Antisemitism by a multivariate analysis of global opinion data from 28 countries.
2) Methods: For lack of any available alternative, we used the World Values Survey (WVS) Antisemitism study item: rejection of Jewish neighbors. It is closely correlated with the recent ADL-100 Index of Antisemitism for more than 100 countries. To test the combined effects of religion and background variables like gender, age, education, income and life satisfaction on Antisemitism, we applied the full range of multivariate analysis including promax factor analysis and multiple OLS regression.
3) Results: Although religion as such still seems to be connected with the phenomenon of Antisemitism, intervening variables such as restrictive attitudes on gender and the religion-state relationship play an important role. Western Evangelical and Oriental Christianity, Islam, Hinduism and Buddhism are performing badly on this account, and there is also a clear global North-South divide for these phenomena.
4) Conclusions: Challenging patriarchic gender ideologies and fundamentalist conceptions of the relationship between religion and state, which are important drivers of Antisemitism, will be an important task in the future. Multiculturalism must be aware of prejudice, patriarchy and religious fundamentalism in the global South.