1 Social-Science Research and the General Social Surveys Tom W
Recommended publications
-
This Project Has Received Funding from the European Union's Horizon 2020 Research and Innovation Programme
Deliverable Number: D6.6
Deliverable Title: Report on legal and ethical framework and strategies related to access, use, re-use, dissemination and preservation of administrative data
Work Package: 6: New forms of data: legal, ethical and quality matters
Deliverable type: Report
Dissemination status: Public
Submitted by: NIDI
Authors: George Groenewold, Susana Cabaco, Linn-Merethe Rød, Tom Emery
Date submitted: 23/08/19
This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 654221. www.seriss.eu @SERISS_EU
SERISS (Synergies for Europe's Research Infrastructures in the Social Sciences) aims to exploit synergies, foster collaboration and develop shared standards between Europe's social science infrastructures in order to better equip these infrastructures to play a major role in addressing Europe's grand societal challenges and ensure that European policymaking is built on a solid base of the highest-quality socio-economic evidence. The four-year project (2015-19) is a collaboration between the three leading European Research Infrastructures in the social sciences – the European Social Survey (ESS ERIC), the Survey of Health, Ageing and Retirement in Europe (SHARE ERIC) and the Consortium of European Social Science Data Archives (CESSDA AS) – and organisations representing the Generations and Gender Programme (GGP), European Values Study (EVS) and the WageIndicator Survey. Work focuses on three key areas: addressing key challenges for cross-national data collection, breaking down barriers between social science infrastructures, and embracing the future of the social sciences.
Please cite this deliverable as: Groenewold, G., Cabaco, S., Rød, L.M., & Emery, T. (2019). Report on legal and ethical framework and strategies related to access, use, re-use, dissemination and preservation of administrative data. -
Statistical Inference: How Reliable Is a Survey?
Math 203, Fall 2008: Statistical Inference: How reliable is a survey? Consider a survey with a single question, to which respondents are asked to give an answer of yes or no. Suppose you pick a random sample of n people, and you find that the proportion that answered yes is p̂. Question: How close is p̂ to the actual proportion p of people in the whole population who would have answered yes? In order for there to be a reliable answer to this question, the sample size, n, must be big enough so that the sample distribution is close to a bell-shaped curve (i.e., close to a normal distribution). But even if n is big enough that the distribution is close to a normal distribution, usually you need to make n even bigger in order to make sure your margin of error is reasonably small. Thus the first thing to do is to be sure n is big enough for the sample distribution to be close to normal. The industry standard for being close enough is for n to be big enough so that n > 9(1 − p)/p and n > 9p/(1 − p) both hold. When p is about 50%, n can be as small as 10, but when p gets close to 0 or close to 1, the sample size n needs to get bigger. If p is 1% or 99%, then n must be at least 892, for example. (Note also that n here depends on p but not on the size of the whole population.) See Figures 1 and 2 showing frequency histograms for the number of yes respondents if p = 1% when the sample size n is 10 versus 1000 (this data was obtained by running a computer simulation taking 10000 samples).
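A minimal sketch (Python; not part of the original handout, and the function names are ours) of the sample-size rule and the simulation described above:

import math
import random
from fractions import Fraction

def min_sample_size(p):
    """Smallest n satisfying both normality conditions n > 9(1-p)/p and n > 9p/(1-p)."""
    q = Fraction(str(p))                      # exact arithmetic avoids float rounding at the boundary
    bound = 9 * max((1 - q) / q, q / (1 - q))
    return math.floor(bound) + 1              # smallest integer strictly greater than the bound

def simulate_p_hat(p, n, draws=10_000):
    """Sample proportion p-hat from `draws` simulated samples of size n."""
    return [sum(random.random() < p for _ in range(n)) / n for _ in range(draws)]

print(min_sample_size(0.50))   # 10  -> matches "n can be as small as 10"
print(min_sample_size(0.01))   # 892 -> matches "n must be at least 892"
sims = simulate_p_hat(0.01, 1000)
print(sum(sims) / len(sims))   # averages close to 0.01 when n = 1000

For p = 50% this returns 10 and for p = 1% it returns 892, matching the figures quoted in the handout.
-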
On Becoming a Pragmatic Researcher: The Importance of Combining Quantitative and Qualitative Research Methodologies
DOCUMENT RESUME: ED 482 462, TM 035 389
AUTHOR: Onwuegbuzie, Anthony J.; Leech, Nancy L.
TITLE: On Becoming a Pragmatic Researcher: The Importance of Combining Quantitative and Qualitative Research Methodologies.
PUB DATE: 2003-11-00
NOTE: 25p.; Paper presented at the Annual Meeting of the Mid-South Educational Research Association (Biloxi, MS, November 5-7, 2003).
PUB TYPE: Reports - Descriptive (141); Speeches/Meeting Papers (150)
EDRS PRICE: MF01/PC02 Plus Postage.
DESCRIPTORS: *Pragmatics; *Qualitative Research; *Research Methodology; *Researchers
ABSTRACT: The last 100 years have witnessed a fervent debate in the United States about quantitative and qualitative research paradigms. Unfortunately, this has led to a great divide between quantitative and qualitative researchers, who often view themselves as being in competition with each other. Clearly, this polarization has promoted purists, i.e., researchers who restrict themselves exclusively to either quantitative or qualitative research methods. Mono-method research is the biggest threat to the advancement of the social sciences. As long as researchers stay polarized in research, they cannot expect stakeholders who rely on their research findings to take their work seriously. The purpose of this paper is to explore how the debate between quantitative and qualitative research is divisive, and thus counterproductive for advancing the social and behavioral science field. This paper advocates that all graduate students learn to use and appreciate both quantitative and qualitative research. In so doing, students will develop into what is termed "pragmatic researchers." (Contains 41 references.) (Author/SLD)
Reproductions supplied by EDRS are the best that can be made from the original document. -
Springer Journal Collection in Humanities, Social Sciences & Law
springer.com. Springer Journal Collection in Humanities, Social Sciences & Law. TOP QUALITY: More than 260 Journals. With over 260 journals, the Springer Humanities, Social Sciences and Law program serves research and academic communities around the globe, covering the fields of Philosophy, Law, Sociology, Linguistics, Education, Anthropology & Archaeology, (Applied) Ethics, Criminology & Criminal Justice and Population Studies. Springer journals are recognized as a source for high-quality, specialized content across a broad range of interests. All journals are peer-reviewed and edited by internationally respected scientists, researchers and academics from world-leading institutions and corporations. Most of the journals are indexed by major abstracting services. Articles are searchable by subject, publication title, topic, author or keywords on SpringerLink, the world's most comprehensive online collection of scientific, technological and medical journals, books and reference works.
Main Disciplines: Archaeology; Education and Language; Ethics; Law; Philosophy; Sociology.
Society Partners Include: UNESCO; Population Association of America; Society for Community Research and Action.
Most Downloaded Journals in this Collection (IF / 5-Year IF): Journal of Business Ethics (1.125 / 1.603); Synthese (0.676 / 0.783); Higher Education (0.823 / 1.249); Early Childhood Education Journal; Philosophical Studies; Educational Studies in Mathematics; ETR&D - Educational Technology Research and Development (1.081 / 1.770); Social Indicators Research (1.000 / 1.239); Research in Higher Education (1.221 / 1.585); Agriculture and Human Values (1.054 / 1.466); International Review of Education; Research in Science Education (0.853 / 1.112); Biology & Philosophy (0.829 / 1.299); Journal of Happiness Studies (2.104). All Impact Factors are from 2010 Journal Citation Reports® − Thomson Reuters. -
A Critical Review of Studies Investigating the Quality of Data Obtained with Online Panels Based on Probability and Nonprobability Samples
2. A critical review of studies investigating the quality of data obtained with online panels based on probability and nonprobability samples
Mario Callegaro (Google, UK), Ana Villar (City University, London, UK), David Yeager (University of Texas at Austin, USA), and Jon A. Krosnick (Stanford University, USA)
Acknowledgement: We would like to thank Reg Baker and Anja Göritz, Part editors, for their useful comments on preliminary versions of this chapter.
In: Online Panel Research, First Edition. Edited by Mario Callegaro, Reg Baker, Jelke Bethlehem, Anja S. Göritz, Jon A. Krosnick and Paul J. Lavrakas. © 2014 John Wiley & Sons, Ltd.
2.1 Introduction
Online panels have been used in survey research as data collection tools since the late 1990s (Postoaca, 2006). The potential for large cost and time reductions has made research companies enthusiastically pursue this new mode of data collection. However, the vast majority of these online panels were built by sampling and recruiting respondents through nonprobability methods such as snowball sampling, banner ads, direct enrollment, and other strategies to obtain large enough samples at a lower cost (see Chapter 1). Only a few companies and research teams chose to build online panels based on probability samples of the general population. During the 1990s, two probability-based online panels were documented: the CentER data Panel in the Netherlands and the Knowledge Networks Panel in the United States. -
SAMPLING DESIGN & WEIGHTING
APPENDIX A: SAMPLING DESIGN & WEIGHTING. In the original National Science Foundation grant, support was given for a modified probability sample. Samples for the 1972 through 1974 surveys followed this design. This modified probability design, described below, introduces the quota element at the block level. The NSF renewal grant, awarded for the 1975-1977 surveys, provided funds for a full probability sample design, a design which is acknowledged to be superior. Thus, having the wherewithal to shift to a full probability sample with predesignated respondents, the 1975 and 1976 studies were conducted with a transitional sample design, viz., one-half full probability and one-half block quota. The sample was divided into two parts for several reasons: 1) to provide data for possibly interesting methodological comparisons; and 2) so that, if there were differences over time, it would be possible to assign these differences either to shifts in sample design or to changes in response patterns. For example, if the percentage of respondents who indicated that they were "very happy" increased by 10 percent between 1974 and 1976, it would be possible to determine whether this was due to the change in sample design or to an actual increase in happiness. There is considerable controversy and ambiguity about the merits of these two samples. Textbook tests of significance assume full rather than modified probability samples, and simple random rather than clustered random samples. In general, the question of what to do with a mixture of samples is no more easily solved than the question of what to do with the "pure" types.
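The split-sample logic can be illustrated with a small sketch (Python; not from the GSS codebook, and the counts below are invented): comparing the two 1976 half-samples isolates the effect of the sample design, while comparing the 1974 block-quota sample with the 1976 block-quota half holds the design constant and isolates change over time. As the appendix cautions, the simple z-test assumes simple random sampling, so it overstates precision for clustered or quota samples.

import math

def two_prop_z(x1, n1, x2, n2):
    """Difference in proportions and z statistic under a simple-random-sampling assumption."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return p1 - p2, (p1 - p2) / se

# Hypothetical counts of "very happy" respondents (invented for illustration).
x74_quota, n74 = 450, 1480      # 1974: block-quota design
x76_quota, n76q = 390, 740      # 1976: block-quota half
x76_prob, n76p = 370, 740       # 1976: full-probability half

print(two_prop_z(x76_prob, n76p, x76_quota, n76q))   # design difference within 1976
print(two_prop_z(x76_quota, n76q, x74_quota, n74))   # change over time, design held constant
-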
European Attitudes to Climate Change and Energy: Topline Results from Round 8 of the European Social Survey
European Attitudes to Climate Change and Energy: Topline Results from Round 8 of the European Social Survey. ESS Topline Results Series, Issue 9.
This latest issue in our Topline Results series examines public attitudes towards climate change and energy for the first time in the ESS. The module was selected for inclusion due to its academic excellence as well as the increasing relevance of this issue. For example the Paris Agreement made by 195 United Nations Framework Convention on Climate Change (UNFCCC) countries in 2016 underlines the salience of the topic. With many parts of Europe and the world recording rising temperatures and experiencing more extreme weather, the subject is a key grand challenge. By assessing public opinion on climate change and the related issue of energy we hope that this latest data will influence academic, public and policy debate in this area. We include two different topics in each round of the survey to expand the relevance of our data into new areas and to allow repetition if the case can be made to examine the same area again. Everyone at the ESS is delighted with the work of the Questionnaire Design Team who led on the design of this module, and who have written this excellent publication. Rory Fitzgerald, ESS ERIC Director, City, University of London.
The authors of this issue: • Wouter Poortinga, Professor of Environmental Psychology, Cardiff University • Stephen Fisher, Associate Professor in Political Sociology, Trinity College, University of Oxford • Gisela -
Assessment of Socio-Demographic Sample Composition in ESS Round 6
Assessment of socio-demographic sample composition in ESS Round 6. Achim Koch, GESIS – Leibniz Institute for the Social Sciences, Mannheim/Germany, June 2016.
Contents: 1. Introduction; 2. Assessing socio-demographic sample composition with external benchmark data; 3. The European Union Labour Force Survey; 4. Data and variables; 5. Description of ESS-LFS differences; 6. A summary measure of ESS-LFS differences; 7. Comparison of results for ESS 6 with results for ESS 5; 8. Correlates of ESS-LFS differences; 9. Summary and conclusions; References.
(The CST of the ESS requests that the following citation for this document should be used: Koch, A. (2016). Assessment of socio-demographic sample composition in ESS Round 6. Mannheim: European Social Survey, GESIS.)
1. Introduction. The European Social Survey (ESS) is an academically driven cross-national survey that has been conducted every two years across Europe since 2002. The ESS aims to produce high-quality data on social structure, attitudes, values and behaviour patterns in Europe. Much emphasis is placed on the standardisation of survey methods and procedures across countries and over time. Each country implementing the ESS has to follow detailed requirements that are laid down in the "Specifications for participating countries". These standards cover the whole survey life cycle. They refer to sampling, questionnaire translation, data collection and data preparation and delivery. As regards sampling, for instance, the ESS requires that only strict probability samples should be used; quota sampling and substitution are not allowed. Each country is required to achieve an effective sample size of 1,500 completed interviews, taking into account potential design effects due to the clustering of the sample and/or the variation in inclusion probabilities.
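The effective-sample-size requirement mentioned above can be made concrete with Kish's standard approximations. The sketch below (Python; not taken from Koch's report, and the weights, cluster size and intraclass correlation are invented for illustration) computes the design effect from unequal weights and clustering and checks whether the effective sample size reaches the ESS target of 1,500.

def deff_weights(weights):
    """Kish design effect from unequal weights: n * sum(w^2) / (sum(w))^2."""
    n = len(weights)
    return n * sum(w * w for w in weights) / sum(weights) ** 2

def deff_clustering(avg_cluster_size, rho):
    """Design effect from clustering: 1 + (b - 1) * rho."""
    return 1 + (avg_cluster_size - 1) * rho

weights = [0.8, 1.0, 1.2, 1.5, 0.9] * 400                  # hypothetical design weights, n = 2000
deff = deff_weights(weights) * deff_clustering(5, 0.05)    # combining multiplicatively is itself an approximation
n_eff = len(weights) / deff
print(round(deff, 2), round(n_eff))                        # ESS requires n_eff >= 1500
-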
Summary of Human Subjects Protection Issues Related to Large Sample Surveys
Summary of Human Subjects Protection Issues Related to Large Sample Surveys. U.S. Department of Justice, Bureau of Justice Statistics. Joan E. Sieber. June 2001, NCJ 187692.
U.S. Department of Justice, Office of Justice Programs: John Ashcroft, Attorney General. Bureau of Justice Statistics: Lawrence A. Greenfeld, Acting Director.
Report of work performed under a BJS purchase order to Joan E. Sieber, Department of Psychology, California State University at Hayward, Hayward, California 94542, (510) 538-5424, e-mail [email protected]. The author acknowledges the assistance of Caroline Wolf Harlow, BJS Statistician and project monitor. Ellen Goldberg edited the document. Contents of this report do not necessarily reflect the views or policies of the Bureau of Justice Statistics or the Department of Justice. This report and others from the Bureau of Justice Statistics are available through the Internet — http://www.ojp.usdoj.gov/bjs
Table of Contents: 1. Introduction (Limitations of the Common Rule with respect to survey research); 2. Risks and benefits of participation in sample surveys (Standard risk issues, researcher responses, and IRB requirements; Long-term consequences; Background issues); 3. Procedures to protect privacy and maintain confidentiality (Standard issues and problems; Confidentiality assurances and their consequences; Emerging issues of privacy and confidentiality); 4. Other procedures for minimizing risks and promoting benefits (Identifying and minimizing risks; Identifying and maximizing possible benefits); 5. Procedures for responding to requests for help or assistance (Standard procedures; Background considerations; A specific recommendation: An experiment within the survey); 6. -
Survey Experiments
IU Workshop in Methods – 2019. Survey Experiments: Testing Causality in Diverse Samples. Trenton D. Mize, Department of Sociology & Advanced Methodologies (AMAP), Purdue University.
Contents: Introduction; Overview; What is a survey experiment?; What is an experiment?; Independent and dependent variables; Experimental conditions; Why conduct a survey experiment?; Internal, external, and construct validity
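As a concrete illustration of the "experimental conditions" item listed above, the sketch below (Python; not from the workshop materials, and the condition names are hypothetical) shows the step that distinguishes a survey experiment from an ordinary survey: randomly assigning respondents to conditions so that differences in the dependent variable can be attributed to the manipulated independent variable.

import random

def assign_conditions(respondent_ids, conditions, seed=2019):
    """Balanced random assignment: shuffle respondents, then deal them out across
    conditions so that group sizes differ by at most one."""
    rng = random.Random(seed)
    ids = list(respondent_ids)
    rng.shuffle(ids)
    return {rid: conditions[i % len(conditions)] for i, rid in enumerate(ids)}

# Hypothetical use: a three-condition question-wording experiment.
assignment = assign_conditions(range(1, 13), ["control", "wording_a", "wording_b"])
print(assignment)
-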
Support for Redistribution in an Age of Rising Inequality: New Stylized Facts and Some Tentative Explanations
VIVEKINAN ASHOK, Yale University; ILYANA KUZIEMKO, Princeton University; EBONYA WASHINGTON, Yale University. Support for Redistribution in an Age of Rising Inequality: New Stylized Facts and Some Tentative Explanations. ABSTRACT Despite the large increases in economic inequality since 1970, American survey respondents exhibit no increase in support for redistribution, contrary to the predictions from standard theories of redistributive preferences. We replicate these results but further demonstrate substantial heterogeneity by demographic group. In particular, the two groups that have most moved against income redistribution are the elderly and African Americans. We find little evidence that these subgroup trends are explained by relative economic gains or growing cultural conservatism, two common explanations. We further show that the trend among the elderly is uniquely American, at least relative to other developed countries with comparable survey data. While we are unable to provide definitive evidence on the cause of these two groups' declining redistributive support, we provide additional correlations that may offer fruitful directions for future research on the topic. One story consistent with the data on elderly trends is that older Americans worry that redistribution will come at their expense, in particular through cuts to Medicare. We find that the elderly have grown increasingly opposed to government provision of health insurance and that controlling for this tendency explains about 40 percent of their declining support for redistribution. -
Evaluating Survey Questions
Evaluating Survey Questions. Chase H. Harrison, Ph.D., Program on Survey Research, Harvard University.
What Respondents Do to Answer a Question: • Comprehend Question • Retrieve Information from Memory • Summarize Information • Report an Answer
Problems in Answering Survey Questions – Failure to comprehend: • If respondents don't understand question, they cannot answer it • If different respondents understand question differently, they end up answering different questions • Problems with researcher putting more emphasis on subject than respondent
Problems in Answering Survey Questions – Failure to recall: • Questions assume respondents have information • If respondents never learned something, they cannot provide information about it
Problems in Answering Survey Questions – Problems Summarizing: • If respondents are thinking about a lot of things, they can inconsistently summarize • If the way the respondent remembers something doesn't readily correspond to the question, they may be inconsistent
Problems in Answering Survey Questions – Problems Reporting Answers: • Confusing or vague answer formats lead to variability • Interactions with interviewers or technology can lead to problems (sensitive or embarrassing responses)
Evaluating Survey Questions: • Early stage – Focus groups to understand topics or dimensions of measures • Pre-Test Stage – Cognitive interviews to understand question meaning – Pre-test under typical field
Focus Groups: • Qualitative research tool • Used to develop ideas for questionnaires • Used to understand scope of issues