Polls and surveys

Reporting on public opinion research requires rigorous inspection of a poll's methodology, provenance and results. The mere existence of a poll is not enough to make it news. Do not feel obligated to report on a poll or survey simply because it meets AP's standards.

Poll results that seek to preview the outcome of an election must never be the lead, headline or single subject of any story. Pre-election horse race polling can and should inform reporting on political campaigns, but no matter how good the poll or how wide a candidate's margin, results of pre-election polls always reflect voter opinion before ballots are cast. Voter opinions can change before Election Day, and they often do.

When evaluating a poll or survey, be it a campaign poll or a survey on a topic unrelated to politics, the key question to answer is: Are its results likely to accurately reflect the opinion of the group being surveyed? Generally, for the answer to be yes, a poll must:

— Disclose the questions asked, the results of the survey and the method by which it was conducted.
— Come from a source without a stake in the outcome of its results.
— Scientifically survey a random sample of a population, in which every member of that population has a known probability of inclusion.
— Report the results in a timely manner.

Polls that pass these tests are suitable for publication. Do not report on surveys in which the pollster or sponsor of research refuses to provide the information needed to answer these questions. Always include a short description of how a poll meets the standards, allowing readers and viewers to evaluate the results for themselves: The AP-NORC poll surveyed 1,020 adults from Dec. 7-11 using a sample drawn from NORC's probability-based AmeriSpeak Panel, which is designed to be representative of the U.S. population.

Some other key points:

— Comparisons between polls are often newsworthy, especially those that show a change in public opinion over time. But take care when comparing results from different polling organizations, as differences in poll methods and question wording — and not a change in public opinion — may be the cause of differing results. Only infer that a difference between two polls is caused by a change in public opinion when those polls use the same survey methodology and question wording.
— Some organizations publish poll averages or aggregates that attempt to combine the results of multiple polls into a single estimate in an effort to capture the overall state of public opinion about a campaign or issue. Averaging poll results does not eliminate error or preclude the need to examine the underlying polls and assess their suitability for publication. In campaign polling, survey averages can provide a general sense of the state of a race. However, only those polls that meet these standards should be included in averages intended for publication, and it is often preferable to include the individual results of multiple recent surveys to show where a race stands.
— Some pollsters release survey results to the first decimal place, which implies a greater degree of precision than is possible from scientific sampling. Poll results should always be rounded to whole numbers. Margins of sampling error can be reported to the first decimal place. (A brief formatting sketch follows this list.)
— Take care to use accurate language when describing poll results. For example, only groups comprising more than 50 percent of the population can be said to be a majority. If the largest group includes less than 50 percent of the surveyed population, it is a plurality. See majority, plurality.
— In most cases, poll and survey may be used interchangeably.
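As a minimal illustration of the rounding guidance in the list above (the helper name and the figures are hypothetical, not AP code):

# Hypothetical helper illustrating the rounding guidance above; not AP code.
def format_poll_figures(result_pct: float, margin_pct: float) -> str:
    """Round a poll result to a whole number; keep the margin of sampling
    error to one decimal place."""
    return (f"{round(result_pct)}% (margin of sampling error: "
            f"plus or minus {margin_pct:.1f} percentage points)")

# Example with made-up figures:
print(format_poll_figures(47.6, 3.66))
# -> 48% (margin of sampling error: plus or minus 3.7 percentage points)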
Polls are not perfect

When writing or producing stories that cite survey results, take care not to overstate the accuracy of the poll. Even a perfectly executed poll does not guarantee perfectly accurate results. It is possible to calculate the potential error of a poll of a random sample of a population, and that detail must be included in a story about a poll's results: The margin of sampling error for all respondents is plus or minus 3.7 percentage points. See Margin of error later in this entry.

Sampling error is not the only source of survey error, merely the only one that can be quantified using established and accepted statistical methods. Among other potential sources of error: the wording and order of questions, interviewer skill and refusal to participate by respondents randomly selected for a sample. As a result, total error in a survey may exceed the reported margin of error more often than would be predicted based on simple statistical calculations.

Be careful when reporting on the opinions of a poll's subgroup — women under the age of 30, for example, in a poll of all adults. Find out and consider the sample size and margin of error for that subgroup; the sampling error may be so large as to render any reported difference meaningless. Results from subgroups totaling fewer than 100 people should not be reported.

Very large sample sizes do not preclude the need to rigorously assess a poll's methodology, as they may be an indicator of an unscientific and unreliable survey. Often, polls with several thousand respondents are conducted via mass text message campaigns or website widgets and are not representative of the general population. There is no single established method of estimating error for surveys conducted online among people who volunteer to take part in surveys. While they may not report a margin of error, these surveys are still subject to error, uncertainty and bias.

Margin of error

A poll conducted via a scientific survey of a random sample of a population will have a margin of sampling error. This margin is expressed in terms of percentage points, not percent. For example, consider a poll with a margin of error of 5 percentage points. Under ideal circumstances, its results should reflect the true opinion of the population being surveyed, within plus or minus 5 percentage points, 95 of every 100 times that poll is conducted. Sampling error is not the only source of error in a poll, but it is one that can be quantified. See the first section of this entry.

The margin of error varies inversely with the poll's sample size: The fewer people interviewed, the larger the margin of error. Surveys with 500 respondents or more are preferable.

Evaluating the margin of error is crucial when describing the results of a poll. Remember that the survey's margin of error applies to every candidate or poll response. Nominal differences between two percentages in a survey may not always be meaningful. Use these rules to avoid exaggerating the meaning of poll results and to decide when to report that a poll finds one candidate is leading another, or that one group is larger than another (a short sketch at the end of this section illustrates them):

— If the difference between two response options is more than twice the margin of error, then the poll shows one candidate is leading or one group is larger than another.
— If the difference is at least equal to the margin of error, but no more than twice the margin of error, then one candidate can be said to be apparently leading or slightly ahead, or one group can be said to be slightly larger than another.
— If the difference is less than the margin of error, the poll says a race is close or about even, or that two groups are of similar size.
— Do not use the term statistical dead heat, which is inaccurate if there is any difference between the candidates. If the poll finds the candidates are exactly tied, say they are tied. For very close races that aren't exact ties, the phrase essentially tied is acceptable, or use the phrases above.

There is no single established method of estimating error for surveys conducted online among people who volunteer to take part in surveys. These surveys are still subject to error, uncertainty and bias.
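The relationship between sample size and sampling error, and the reporting rules above, can be illustrated with a minimal sketch. This is not AP code: the function names are hypothetical, the 95 percent formula is a textbook simple-random-sample approximation rather than anything specified in this entry, and real polls often report somewhat larger margins because of weighting and design effects.

import math

def approx_margin_of_error(sample_size: int) -> float:
    """Rough 95% margin of sampling error, in percentage points, for a simple
    random sample, using the conservative p = 0.5 assumption."""
    return 100 * 1.96 * math.sqrt(0.25 / sample_size)

def describe_difference(pct_a: float, pct_b: float, margin: float) -> str:
    """Apply the reporting rules listed above to the gap between two results."""
    gap = abs(pct_a - pct_b)
    if gap > 2 * margin:
        return "one candidate is leading"
    if gap >= margin:
        return "one candidate is apparently leading or slightly ahead"
    return "the race is close or about even"

print(round(approx_margin_of_error(1020), 1))  # about 3.1 points before any design effect
print(round(approx_margin_of_error(500), 1))   # about 4.4 points: fewer interviews, larger margin
print(describe_difference(48, 41, 3.1))        # one candidate is leading
print(describe_difference(48, 44, 3.1))        # one candidate is apparently leading or slightly ahead
print(describe_difference(48, 46, 3.1))        # the race is close or about even

The classification applies the margin of error to the gap between two results exactly as the rules above state; it does not attempt to account for the other sources of error described earlier in this entry.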
Evaluating polls and surveys

When evaluating whether public opinion research is suitable for publication, consider the answers to the following questions.

— Has the poll sponsor fully disclosed the questions asked, the results of the survey and the method by which it was conducted? Reputable poll sponsors and public opinion researchers will disclose the methodology used to conduct the survey, including the questions asked and the results of each, so that their survey may be subject to independent examination and analysis by others. Do not report on surveys in which the pollster or sponsor of research refuses to provide such information. Some public opinion researchers agree to publicly disclose their methodology as part of the American Association for Public Opinion Research's transparency initiative. Participation does not mean polls from these researchers are automatically suitable for publication, only that they are likely to meet the test for disclosure. A list of transparency initiative members can be found on the association's website at: http://www.aapor.org/Standards-Ethics/Transparency-Initiative/Current-Members.aspx
— Does the poll come from a source without a stake in the outcome of its results? Any poll suitable for publication must disclose who conducted and paid for the research. Find out the polling firm, media outlet or other organization that conducted the poll. Include this information in all poll stories, so readers and viewers can be aware of any potential bias: The survey was conducted for Inside Higher Ed by Gallup.