Polls and Surveys

Reporting on public opinion research requires rigorous inspection of a poll's methodology, provenance and results. The mere existence of a poll is not enough to make it news. Do not feel obligated to report on a poll or survey simply because it meets AP's standards.

Poll results that seek to preview the outcome of an election must never be the lead, headline or single subject of any story. Pre-election horse race polling can and should inform reporting on political campaigns, but no matter how good the poll or how wide a candidate's margin, results of pre-election polls always reflect voter opinion before ballots are cast. Voter opinions can change before Election Day, and they often do.

When evaluating a poll or survey, be it a campaign poll or a survey on a topic unrelated to politics, the key question to answer is: Are its results likely to accurately reflect the opinion of the group being surveyed? Generally, for the answer to be yes, a poll must:

— Disclose the questions asked, the results of the survey and the method by which it was conducted.

— Come from a source without a stake in the outcome of its results.

— Scientifically survey a random sample of a population, in which every member of that population has a known probability of inclusion.

— Report the results in a timely manner.

Polls that pass these tests are suitable for publication. Do not report on surveys in which the pollster or sponsor of research refuses to provide the information needed to answer these questions. Always include a short description of how a poll meets the standards, allowing readers and viewers to evaluate the results for themselves: The AP-NORC poll surveyed 1,020 adults from Dec. 7-11 using a sample drawn from NORC's probability-based AmeriSpeak Panel, which is designed to be representative of the U.S. population.

Some other key points:

— Comparisons between polls are often newsworthy, especially those that show a change in public opinion over time. But take care when comparing results from different polling organizations, as differences in poll methods and question wording — and not a change in public opinion — may be the cause of differing results. Only infer that a difference between two polls is caused by a change in public opinion when those polls use the same survey methodology and question wording.

— Some organizations publish poll averages or aggregates that attempt to combine the results of multiple polls into a single estimate in an effort to capture the overall state of public opinion about a campaign or issue. Averaging poll results does not eliminate error or preclude the need to examine the underlying polls and assess their suitability for publication. In campaign polling, survey averages can provide a general sense of the state of a race. However, only those polls that meet these standards should be included in averages intended for publication, and it is often preferable to include the individual results of multiple recent surveys to show where a race stands.

— Some pollsters release survey results to the first decimal place, which implies a greater degree of precision than is possible from scientific sampling. Poll results should always be rounded to whole numbers. Margins of sampling error can be reported to the first decimal place.

— Take care to use accurate language when describing poll results. For example, only a group comprising more than 50 percent of the population can be said to be the majority. If the largest group includes less than 50 percent of the surveyed population, it is a plurality (a sketch encoding this rule follows this list). See majority, plurality.

— In most cases, poll and survey may be used interchangeably.
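The majority/plurality rule above is mechanical enough to encode. A minimal sketch in Python; the function name and the example numbers are illustrative, not drawn from the AP entry:

```python
def describe_largest_group(shares):
    """Label the largest response group per the majority/plurality rule.

    shares maps each response option to its percentage of respondents.
    """
    option, share = max(shares.items(), key=lambda kv: kv[1])
    if share > 50:
        return f"{option} is the majority ({share:.0f} percent)"
    return f"{option} is a plurality ({share:.0f} percent)"

# No option tops 50 percent here, so the largest group is a plurality.
print(describe_largest_group({"approve": 46, "disapprove": 41, "unsure": 13}))
```

Note that the output is rounded to a whole number, consistent with the rounding guidance above.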
Polls are not perfect

When writing or producing stories that cite survey results, take care not to overstate the accuracy of the poll. Even a perfectly executed poll does not guarantee perfectly accurate results. It is possible to calculate the potential error of a poll of a random sample of a population, and that detail must be included in a story about a poll's results: The margin of sampling error for all respondents is plus or minus 3.7 percentage points. See Margin of error later in this entry.

Sampling error is not the only source of survey error, merely the only one that can be quantified using established and accepted statistical methods. Among other potential sources of error: the wording and order of questions, interviewer skill and refusal to participate by respondents randomly selected for a sample. As a result, total error in a survey may exceed the reported margin of error more often than would be predicted based on simple statistical calculations.

Be careful when reporting on the opinions of a poll's subgroup — women under the age of 30, for example, in a poll of all adults. Find out and consider the sample size and margin of error for that subgroup; the sampling error may be so large as to render any reported difference meaningless. Results from subgroups totaling fewer than 100 people should not be reported.

Very large sample sizes do not preclude the need to rigorously assess a poll's methodology, as they may be an indicator of an unscientific and unreliable survey. Often, polls with several thousand respondents are conducted via mass text message campaigns or website widgets and are not representative of the general population. There is no single established method of estimating error for surveys conducted online among people who volunteer to take part in surveys. While they may not report a margin of error, these surveys are still subject to error, uncertainty and bias.

Margin of error

A poll conducted via a scientific survey of a random sample of a population will have a margin of sampling error. This margin is expressed in terms of percentage points, not percent.

For example, consider a poll with a margin of error of 5 percentage points. Under ideal circumstances, its results should reflect the true opinion of the population being surveyed, within plus or minus 5 percentage points, 95 of every 100 times that poll is conducted. Sampling error is not the only source of error in a poll, but it is one that can be quantified. See the first section of this entry.

The margin of error varies inversely with the poll's sample size: The fewer people interviewed, the larger the margin of error. Surveys with 500 respondents or more are preferable.
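For a simple random sample, the relationship between sample size and margin of sampling error can be approximated with the standard textbook formula. This is a sketch under simple-random-sampling assumptions; real-world polls, including panel surveys, typically report somewhat larger margins once weighting and design effects are taken into account:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95 percent margin of sampling error, in percentage
    points, for a simple random sample of size n. p=0.5 yields the
    most conservative (largest) margin."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# The fewer people interviewed, the larger the margin of error:
for n in (200, 500, 1020):
    print(f"n = {n}: plus or minus {margin_of_error(n):.1f} percentage points")
# n = 200: 6.9, n = 500: 4.4, n = 1020: 3.1
```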
Evaluating the margin of error is crucial when describing the results of a poll. Remember that the survey's margin of error applies to every candidate or poll response. Nominal differences between two percentages in a survey may not always be meaningful. Use these rules to avoid exaggerating the meaning of poll results and to decide when to report that a poll finds one candidate is leading another, or that one group is larger than another (a sketch translating these rules into code appears at the end of this entry).

— If the difference between two response options is more than twice the margin of error, then the poll shows one candidate is leading or one group is larger than another.

— If the difference is at least equal to the margin of error, but no more than twice the margin of error, then one candidate can be said to be apparently leading or slightly ahead, or one group can be said to be slightly larger than another.

— If the difference is less than the margin of error, the poll says a race is close or about even, or that two groups are of similar size.

— Do not use the term statistical dead heat, which is inaccurate if there is any difference between the candidates. If the poll finds the candidates are exactly tied, say they are tied. For very close races that aren't exact ties, the phrase essentially tied is acceptable, or use the phrases above.

There is no single established method of estimating error for surveys conducted online among people who volunteer to take part in surveys. These surveys are still subject to error, uncertainty and bias.

Evaluating polls and surveys

When evaluating whether public opinion research is suitable for publication, consider the answers to the following questions.

— Has the poll sponsor fully disclosed the questions asked, the results of the survey and the method by which it was conducted? Reputable poll sponsors and public opinion researchers will disclose the methodology used to conduct the survey, including the questions asked and the results to each, so that their survey may be subject to independent examination and analysis by others. Do not report on surveys in which the pollster or sponsor of research refuses to provide such information. Some public opinion researchers agree to publicly disclose their methodology as part of the American Association for Public Opinion Research's transparency initiative. Participation does not mean polls from these researchers are automatically suitable for publication, only that they are likely to meet the test for disclosure. A list of transparency initiative members can be found on the association's website at: http://www.aapor.org/Standards-Ethics/Transparency-Initiative/Current-Members.aspx

— Does the poll come from a source without a stake in the outcome of its results? Any poll suitable for publication must disclose who conducted and paid for the research. Find out the polling firm, media outlet or other organization that conducted the poll. Include this information in all poll stories, so readers and viewers can be aware of any potential bias: The survey was conducted for Inside Higher Ed by Gallup.
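The margin-of-error comparison rules above translate directly into code. A minimal sketch; the function name and the 48-44 example are hypothetical:

```python
def describe_gap(pct_a, pct_b, moe):
    """Characterize the difference between two poll percentages, given
    the poll's margin of sampling error (all in percentage points)."""
    diff = abs(pct_a - pct_b)
    if diff > 2 * moe:
        return "leading"             # more than twice the margin
    if diff >= moe:
        return "apparently leading"  # or "slightly ahead"
    if diff == 0:
        return "tied"
    return "close"                   # or "about even" / "essentially tied"

# A 48-44 race with a 3.7-point margin is only an apparent lead.
print(describe_gap(48, 44, 3.7))  # -> apparently leading
```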
Recommended publications
  • Statistical Inference: How Reliable Is a Survey?

    Math 203, Fall 2008: Statistical Inference: How reliable is a survey? Consider a survey with a single question, to which respondents are asked to give an answer of yes or no. Suppose you pick a random sample of n people, and you find that the proportion that answered yes is p̂. Question: How close is p̂ to the actual proportion p of people in the whole population who would have answered yes? In order for there to be a reliable answer to this question, the sample size, n, must be big enough so that the sample distribution is close to a bell shaped curve (i.e., close to a normal distribution). But even if n is big enough that the distribution is close to a normal distribution, usually you need to make n even bigger in order to make sure your margin of error is reasonably small. Thus the first thing to do is to be sure n is big enough for the sample distribution to be close to normal. The industry standard for being close enough is for n to be big enough so that n > 9(1 − p)/p and n > 9p/(1 − p) both hold. When p is about 50%, n can be as small as 10, but when p gets close to 0 or close to 1, the sample size n needs to get bigger. If p is 1% or 99%, then n must be at least 892, for example. (Note also that n here depends on p but not on the size of the whole population.) See Figures 1 and 2 showing frequency histograms for the number of yes respondents if p = 1% when the sample size n is 10 versus 1000 (this data was obtained by running a computer simulation taking 10000 samples).
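    The rule of thumb in this excerpt is easy to check numerically. A short sketch (the function name is invented for illustration):

    ```python
    def sample_size_ok(n, p):
        """Check the excerpt's normality rule of thumb:
        n > 9(1 - p)/p and n > 9p/(1 - p) must both hold."""
        return n > 9 * (1 - p) / p and n > 9 * p / (1 - p)

    print(sample_size_ok(10, 0.50))   # True: n = 10 suffices when p is near 50%
    print(sample_size_ok(891, 0.01))  # False: p = 1% needs at least 892
    print(sample_size_ok(892, 0.01))  # True
    ```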
  • E-Survey Methodology

    Chapter I: E-Survey Methodology. Karen J. Jansen, The Pennsylvania State University, USA; Kevin G. Corley, Arizona State University, USA; Bernard J. Jansen, The Pennsylvania State University, USA.

    ABSTRACT: With computer network access nearly ubiquitous in much of the world, alternative means of data collection are being made available to researchers. Recent studies have explored various computer-based techniques (e.g., electronic mail and Internet surveys). However, exploitation of these techniques requires careful consideration of conceptual and methodological issues associated with their use. We identify and explore these issues by defining and developing a typology of "e-survey" techniques in organizational research. We examine the strengths, weaknesses, and threats to reliability, validity, sampling, and generalizability of these approaches. We conclude with a consideration of emerging issues of security, privacy, and ethics associated with the design and implications of e-survey methodology.

    INTRODUCTION: For the researcher considering the use of electronic surveys, there is a rapidly growing body of literature addressing design issues and providing laundry lists of costs and benefits associated with electronic survey techniques (c.f., Lazar & Preece, 1999; Schmidt, 1997; Stanton, 1998). Perhaps the three most common reasons for choosing an e-survey ... (1999; Oppermann, 1995; Saris, 1991). Although research over the past 15 years has been mixed on the realization of these benefits (Kiesler & Sproull, 1986; Mehta & Sivadas, 1995; Sproull, 1986; Tse, Tse, Yin, Ting, Yi, Yee, & Hong, 1995), for the most part, researchers agree that faster response times and decreased costs are attainable benefits, while response rates differ based on variables beyond administration mode alone.
  • SAMPLING DESIGN & WEIGHTING in the Original

    Appendix A: Sampling Design & Weighting. In the original National Science Foundation grant, support was given for a modified probability sample. Samples for the 1972 through 1974 surveys followed this design. This modified probability design, described below, introduces the quota element at the block level. The NSF renewal grant, awarded for the 1975-1977 surveys, provided funds for a full probability sample design, a design which is acknowledged to be superior. Thus, having the wherewithal to shift to a full probability sample with predesignated respondents, the 1975 and 1976 studies were conducted with a transitional sample design, viz., one-half full probability and one-half block quota. The sample was divided into two parts for several reasons: 1) to provide data for possibly interesting methodological comparisons; and 2) on the chance that there are some differences over time, that it would be possible to assign these differences to either shifts in sample designs, or changes in response patterns. For example, if the percentage of respondents who indicated that they were "very happy" increased by 10 percent between 1974 and 1976, it would be possible to determine whether it was due to changes in sample design, or an actual increase in happiness. There is considerable controversy and ambiguity about the merits of these two samples. Textbook tests of significance assume full rather than modified probability samples, and simple random rather than clustered random samples. In general, the question of what to do with a mixture of samples is no easier solved than the question of what to do with the "pure" types.
  • The Effect of Sampling Error on the Time Series Behavior of Consumption Data*

    Journal of Econometrics 55 (1993) 235-265. North-Holland. The effect of sampling error on the time series behavior of consumption data*. William R. Bell, U.S. Bureau of the Census, Washington, DC 20233, USA; David W. Wilcox, Board of Governors of the Federal Reserve System, Washington, DC 20551, USA.

    Much empirical economic research today involves estimation of tightly specified time series models that derive from theoretical optimization problems. Resulting conclusions about underlying theoretical parameters may be sensitive to imperfections in the data. We illustrate this fact by considering sampling error in data from the Census Bureau's Retail Trade Survey. We find that parameter estimates in seasonal time series models for retail sales are sensitive to whether a sampling error component is included in the model. We conclude that sampling error should be taken seriously in attempts to derive economic implications by modeling time series data from repeated surveys.

    1. Introduction. The rational expectations revolution has transformed the methodology of macroeconometric research. In the new style, the researcher typically begins by specifying a dynamic optimization problem faced by agents in the model economy. Then the researcher derives the solution to the optimization problem, expressed as a stochastic model for some observable economic variable(s). A trademark of research in this tradition is that each of the parameters in the model for the observable variables is endowed with a specific economic interpretation. The last step in the research program is the application of the model to actual economic data to see whether the model ...

    *This paper reports the general results of research undertaken by Census Bureau and Federal Reserve Board staff.
  • 13 Collecting Statistical Data

    13 Collecting Statistical Data: 13.1 The Population; 13.2 Sampling; 13.3 Random Sampling.

    • Polls, studies, surveys and other data collecting tools collect data from a small part of a larger group so that we can learn something about the larger group.
    • This is a common and important goal of statistics: learn about a large group by examining data from some of its members.

    Data: collections of observations (such as measurements, genders, survey responses).

    Statistics: the science of planning studies and experiments, obtaining data, and then organizing, summarizing, presenting, analyzing, interpreting, and drawing conclusions based on the data.

    Population: the complete collection of all individuals (scores, people, measurements, and so on) to be studied; the collection is complete in the sense that it includes all of the individuals to be studied.

    Census: collection of data from every member of a population. Sample: subcollection of members selected from a population.

    A Survey:
    • The practical alternative to a census is to collect data only from some members of the population and use that data to draw conclusions and make inferences about the entire population.
    • Statisticians call this approach a survey (or a poll when the data collection is done by asking questions).
    • The subgroup chosen to provide the data is called the sample, and the act of selecting a sample is called sampling.
    • The first important step in a survey is to distinguish the population for which the survey applies (the target population) and the actual subset of the population from which the sample will be drawn, called the sampling frame.
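    These definitions map onto a few lines of code. A toy illustration; the population and the numbers are invented:

    ```python
    import random

    population = list(range(100_000))         # the complete collection to be studied
    sample = random.sample(population, 1000)  # a simple random sample (a survey)

    # A census would examine every member; a survey estimates from the sample.
    population_mean = sum(population) / len(population)
    sample_mean = sum(sample) / len(sample)
    print(population_mean, sample_mean)  # the sample mean approximates the population mean
    ```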
  • Sampling Methods: It's Impractical to Poll an Entire Population—Say, All 145 Million Registered Voters in the United States

    Sampling Methods. It's impractical to poll an entire population—say, all 145 million registered voters in the United States. That is why pollsters select a sample of individuals that represents the whole population. Understanding how respondents come to be selected to be in a poll is a big step toward determining how well their views and opinions mirror those of the voting population. To sample individuals, polling organizations can choose from a wide variety of options. Pollsters generally divide them into two types: those that are based on probability sampling methods and those based on non-probability sampling techniques. For more than five decades, probability sampling was the standard method for polls. But in recent years, as fewer people respond to polls and the costs of polls have gone up, researchers have turned to non-probability-based sampling methods. For example, they may collect data online from volunteers who have joined an Internet panel. In a number of instances, these non-probability samples have produced results that were comparable to or, in some cases, more accurate in predicting election outcomes than probability-based surveys. Now, more than ever, journalists and the public need to understand the strengths and weaknesses of both sampling techniques to effectively evaluate the quality of a survey, particularly election polls.

    Probability and Non-probability Samples. In a probability sample, all persons in the target population have a chance of being selected for the survey sample, and we know what that chance is. For example, in a telephone survey based on random digit dialing (RDD) sampling, researchers know the chance or probability that a particular telephone number will be selected.
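    The defining property of a probability sample is that every unit's chance of selection is known. A minimal sketch of that idea; the toy telephone frame is invented and is not a description of any pollster's actual RDD procedure:

    ```python
    import random

    def draw_probability_sample(frame, k):
        """Simple random sampling: every unit in the frame has the same
        known inclusion probability, k / len(frame)."""
        return random.sample(frame, k)

    frame = [f"555-{i:04d}" for i in range(10_000)]  # toy telephone frame
    sample = draw_probability_sample(frame, 500)
    print(f"each number had a known {500 / len(frame):.1%} chance of selection")
    ```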
  • Summary of Human Subjects Protection Issues Related to Large Sample Surveys

    Summary of Human Subjects Protection Issues Related to Large Sample Surveys. U.S. Department of Justice, Bureau of Justice Statistics. Joan E. Sieber. June 2001, NCJ 187692.

    U.S. Department of Justice, Office of Justice Programs; John Ashcroft, Attorney General. Bureau of Justice Statistics; Lawrence A. Greenfeld, Acting Director. Report of work performed under a BJS purchase order to Joan E. Sieber, Department of Psychology, California State University at Hayward, Hayward, California 94542, (510) 538-5424, e-mail [email protected]. The author acknowledges the assistance of Caroline Wolf Harlow, BJS Statistician and project monitor. Ellen Goldberg edited the document. Contents of this report do not necessarily reflect the views or policies of the Bureau of Justice Statistics or the Department of Justice. This report and others from the Bureau of Justice Statistics are available through the Internet — http://www.ojp.usdoj.gov/bjs

    Table of Contents:
    1. Introduction (p. 2); Limitations of the Common Rule with respect to survey research (p. 2)
    2. Risks and benefits of participation in sample surveys (p. 5); Standard risk issues, researcher responses, and IRB requirements (p. 5); Long-term consequences (p. 6); Background issues (p. 6)
    3. Procedures to protect privacy and maintain confidentiality (p. 9); Standard issues and problems (p. 9); Confidentiality assurances and their consequences (p. 21); Emerging issues of privacy and confidentiality (p. 22)
    4. Other procedures for minimizing risks and promoting benefits (p. 23); Identifying and minimizing risks (p. 23); Identifying and maximizing possible benefits (p. 26)
    5. Procedures for responding to requests for help or assistance (p. 28); Standard procedures (p. 28); Background considerations (p. 28); A specific recommendation: An experiment within the survey (p. 32)
    6.
  • American Community Survey Accuracy of the Data (2018)

    American Community Survey Accuracy of the Data (2018). INTRODUCTION: This document describes the accuracy of the 2018 American Community Survey (ACS) 1-year estimates. The data contained in these data products are based on the sample interviewed from January 1, 2018 through December 31, 2018. The ACS sample is selected from all counties and county-equivalents in the United States. In 2006, the ACS began collecting data from sampled persons in group quarters (GQs) – for example, military barracks, college dormitories, nursing homes, and correctional facilities. Persons in sample in GQs and persons in sample in housing units (HUs) are included in all 2018 ACS estimates that are based on the total population. All ACS population estimates from years prior to 2006 include only persons in housing units. The ACS, like any other sample survey, is subject to error. The purpose of this document is to provide data users with a basic understanding of the ACS sample design, estimation methodology, and the accuracy of the ACS data. The ACS is sponsored by the U.S. Census Bureau, and is part of the Decennial Census Program. For additional information on the design and methodology of the ACS, including data collection and processing, visit: https://www.census.gov/programs-surveys/acs/methodology.html. To access other accuracy of the data documents, including the 2018 PRCS Accuracy of the Data document and the 2014-2018 ACS Accuracy of the Data document [1], visit: https://www.census.gov/programs-surveys/acs/technical-documentation/code-lists.html.

    [1] The 2014-2018 Accuracy of the Data document will be available after the release of the 5-year products in December 2019.
  • Survey Experiments

    IU Workshop in Methods – 2019. Survey Experiments: Testing Causality in Diverse Samples. Trenton D. Mize, Department of Sociology & Advanced Methodologies (AMAP), Purdue University.

    Contents:
    Introduction: Overview; What is a survey experiment?; What is an experiment?; Independent and dependent variables; Experimental conditions.
    Why conduct a survey experiment?: Internal, external, and construct validity ...
  • Evaluating Survey Questions

    Evaluating Survey Questions. Chase H. Harrison, Ph.D., Program on Survey Research, Harvard University.

    What respondents do to answer a question: comprehend the question; retrieve information from memory; summarize information; report an answer.

    Problems in answering survey questions:
    – Failure to comprehend: If respondents don't understand the question, they cannot answer it. If different respondents understand the question differently, they end up answering different questions.
    – Failure to recall: Questions assume respondents have information; if respondents never learned something, they cannot provide information about it. Problems also arise when the researcher puts more emphasis on the subject than on the respondent.
    – Problems summarizing: If respondents are thinking about a lot of things, they can inconsistently summarize. If the way the respondent remembers something doesn't readily correspond to the question, they may be inconsistent.
    – Problems reporting answers: Confusing or vague answer formats lead to variability. Interactions with interviewers or technology can lead to problems (sensitive or embarrassing responses).

    Evaluating survey questions: Focus groups (early stage): a qualitative research tool used to understand topics or dimensions of measures and to develop ideas for questionnaires. Pre-test stage: cognitive interviews to understand question meaning and the scope of issues; pre-test under typical field ...
  • MRS Guidance on How to Read Opinion Polls

    What are opinion polls? MRS guidance on how to read opinion polls, June 2016. www.mrs.org.uk

    MRS Guidance Note: How to read opinion polls. MRS has produced this Guidance Note to help individuals evaluate, understand and interpret opinion polls. This guidance is primarily for non-researchers who commission and/or use opinion polls. Researchers can use this guidance to support their understanding of the reporting rules contained within the MRS Code of Conduct.

    Opinion Polls – The Essential Points. What is an opinion poll? An opinion poll is a survey of public opinion obtained by questioning a representative sample of individuals selected from a clearly defined target audience or population. For example, it may be a survey of c. 1,000 UK adults aged 16 years and over. When conducted appropriately, opinion polls can add value to the national debate on topics of interest, including voting intentions. Typically, individuals or organisations commission a research organisation to undertake an opinion poll. The results of an opinion poll are either for private use or for publication.

    What is sampling? Opinion polls are carried out among a sub-set of a given target audience or population and this sub-set is called a sample. Whilst the number included in a sample may differ, opinion poll samples are typically between c. 1,000 and 2,000 participants. When a sample is selected from a given target audience or population, the possibility of a sampling error is introduced. This is because the demographic profile of the sub-sample selected may not be identical to the profile of the target audience / population.
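    Sampling error as described here can be demonstrated with a short simulation; the 52 percent figure, the population size and the sample size are invented for illustration:

    ```python
    import random

    # True support in a toy population of 1,000 people is 52 percent.
    population = [1] * 520 + [0] * 480

    # Each fresh sample of 100 gives a slightly different estimate;
    # that sample-to-sample spread is sampling error.
    estimates = [sum(random.sample(population, 100)) for _ in range(5)]
    print(estimates)  # counts out of 100, i.e. percents; they vary around 52
    ```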
  • 2021 RHFS Survey Methodology

    2021 RHFS Survey Methodology. Survey Design: For purposes of this document, the following definitions are provided:

    • Building—a separate physical structure identified by the respondent containing one or more units.
    • Property—one or more buildings owned by a single entity (person, group, leasing company, and so on). For example, an apartment complex may have several buildings but they are owned as one property.

    Target population: All rental housing properties in the United States, circa 2020.

    Sampling frame: The RHFS sample frame is a single frame based on a subset of the 2019 American Housing Survey (AHS) sample units. The RHFS frame included all 2019 AHS sample units that were identified as: 1. Rented or occupied without payment of rent. 2. Units that are owner occupied and listed as "for sale or rent". 3. Vacant units for rent, for rent or sale, or rented but not yet occupied. By design, the RHFS sample frame excluded public housing and transient housing types (i.e., boat, RV, van, other). Public housing units are identified in the AHS through a match with the Department of Housing and Urban Development (HUD) administrative records. The RHFS frame is derived from the AHS sample, which is itself composed of housing units derived from the Census Bureau Master Address File. The AHS sample frame excludes group quarters housing. Group quarters are places where people live or stay in a group living arrangement. Examples include dormitories, residential treatment centers, skilled nursing facilities, correctional facilities, military barracks, group homes, and maritime or military vessels. As such, all of these types of group quarters housing facilities are, by design, excluded from the RHFS.
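    The frame-construction logic described above amounts to filtering AHS units on a handful of status codes. A toy sketch; the field names and status strings are invented, not the Census Bureau's actual variables:

    ```python
    INCLUDED_STATUSES = {
        "rented",
        "occupied without payment of rent",
        "owner occupied, for sale or rent",
        "vacant, for rent or for sale",
        "rented, not yet occupied",
    }

    def in_rhfs_frame(unit):
        """Keep a 2019 AHS unit if its status qualifies and it is neither
        public housing nor a transient housing type (boat, RV, van, other)."""
        return (unit["status"] in INCLUDED_STATUSES
                and not unit["public_housing"]
                and not unit["transient"])

    ahs_units = [
        {"status": "rented", "public_housing": False, "transient": False},
        {"status": "rented", "public_housing": True, "transient": False},
        {"status": "owner occupied, not for sale", "public_housing": False, "transient": False},
    ]
    frame = [u for u in ahs_units if in_rhfs_frame(u)]
    print(len(frame))  # 1: public housing and non-qualifying units are excluded
    ```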