Issues and Problems in Polling

In 1936 the Literary Digest, a well-known magazine at the time, conducted a poll that predicted Alf Landon, the Republican, as the landslide winner of that year's presidential election. There was a landslide winner, but it was Franklin Delano Roosevelt, who captured 523 of 531 electoral votes and 62.5 percent of the popular vote. The Literary Digest went out of business.

What went wrong? Modern polling was relatively new at the time. The Literary Digest got its information by sending postcards to telephone and automobile owners as well as magazine subscribers and asking for their voting choice. But this was 1936. The country was in a severe economic depression. Millions of Americans were jobless and struggling to put food on the table. Many didn't have telephones and couldn't afford magazine subscriptions, much less a car. About 23 percent of those who received postcards answered. A majority were probably relatively prosperous and more likely to be Republicans. A significant majority picked Landon.

The Literary Digest's biggest mistake was that it did not poll a random sample of the population, a key element in today's polling. A sample is a representative portion of a larger group and, if selected truly randomly and in sufficient numbers, will reflect with some accuracy the views of that larger group. It's comparable to a blood test for anemia or diabetes, in which the doctor requires only a random sample in a small tube because it will reflect a pattern similar to the rest of the blood in a person's body.

Another Literary Digest error was to rely on responses gathered well in advance of the election. There are always people who are undecided until the last minute and others who change their minds in the final days of a campaign.

Dr. George Gallup became well known as a pollster at the time of the Roosevelt election because he correctly predicted the winner.
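The sampling point above can be illustrated with a toy simulation. All the numbers below are invented for illustration: an electorate in which support differs by income, a true random sample of everyone, and a sample drawn only from the prosperous subgroup, roughly the Digest's phone-and-car-owner frame.

```python
import random

random.seed(0)

# Invented electorate of 100,000 voters: 55% support candidate A overall,
# but support differs sharply by income.
lower_income = [1] * 45_000 + [0] * 25_000   # 70,000 voters, ~64% for A
prosperous   = [1] * 10_000 + [0] * 20_000   # 30,000 voters, ~33% for A
population = lower_income + prosperous

true_share = sum(population) / len(population)             # 0.55

# A true random sample of the whole electorate:
est_random = sum(random.sample(population, 1_500)) / 1_500

# A sample drawn only from the prosperous subgroup
# (like polling only telephone and automobile owners in 1936):
est_biased = sum(random.sample(prosperous, 1_500)) / 1_500

print(true_share, est_random, est_biased)
```

The random sample tends to land within a few points of the true 55 percent, while the biased sample hovers near 33 percent no matter how large it is made: a bigger sample from the wrong group only makes the wrong answer more precise.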
But he, too, blundered in the 1948 campaign, when he predicted a victory by Thomas Dewey over Harry Truman by 5 to 15 points. Truman won by more than four percentage points. Once again the problem was inaccurate sampling, undecided voters, and late voter shifts.

Since those days the Gallup organization and other polling groups have had more than a half century to refine their techniques and produce more accurate results. Political polling is now usually done through computer-generated, randomly selected telephone calls to some 750 to 1,500 potential voters. And today, unlike in 1936, almost everyone has a telephone. Polling organizations also adjust for such factors as geographic region, sex, race, marital status, and age. This adjustment, while necessary, is also a possible source of error. Pollsters continue their efforts right up to an election to get results from the undecided and to catch those who have changed their minds.

But no matter how scientific they claim to be, polls that predict election results often still get them wrong, though usually not as wrong as the Literary Digest was in 1936. Consider the results of polls just before election day on November 7, 2000. Eleven national poll results were released in the two days before the election. Only two (CBS and Reuters/MSNBC/Zogby) got the actual vote right, showing Al Gore leading George W. Bush by 45 percent to 44 percent and by 48 percent to 46 percent. One (Harris) showed the candidates tied. But eight showed Gore losing by as many as five percentage points, which would have meant a Bush landslide. The eight polls were wrong by as much as eight points, though some were within their stated "margins of error."

Most pollsters will state a 95 percent confidence that their results will be within a 3 or 4 percent margin of error.
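That 3-or-4-point figure follows from the standard formula for a 95 percent confidence interval on a proportion. The sketch below assumes a simple random sample and the worst-case split (real pollsters' weighting adjustments complicate this):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of a 95% confidence interval for a sample proportion.

    p = 0.5 is the worst case; z = 1.96 corresponds to 95% confidence.
    """
    return z * math.sqrt(p * (1 - p) / n)

# The poll sizes mentioned above, in percentage points:
for n in (750, 1_500):
    print(n, round(100 * margin_of_error(n), 1))  # 750 -> 3.6, 1500 -> 2.5
```

A 750-person poll carries a margin of about 3.6 points and a 1,500-person poll about 2.5, which is why pollsters quote 3 or 4 percent. Note the diminishing returns: sampling four times as many people only cuts the margin in half.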
A 95 percent confidence level means that if an attempt were made to poll every adult in the nation with the same questions, in the same way, and at the same time as the actual poll was taken, the results would be within plus or minus three or four percentage points of the poll's results 95 percent of the time. (Which means, of course, that the pollster is admitting that 5 percent of the time results may be off by more than three or four percentage points.)

Now consider a poll released by the Washington Post and ABC News on November 14, one week after Election Day, when the final results were still uncertain. It showed 45 percent of the public wanted Bush to become president, 44 percent preferred Gore, 6 percent favored neither, 4 percent had no opinion, and 1 percent wanted "other." But when the actual vote total from Election Day was completed, it showed:

Gore ..... 50,996,116 (48 percent) - 266 electoral votes
Bush ..... 50,456,169 (48 percent) - 271 electoral votes
Other .... (4 percent)

Gore won the popular vote by 539,947 votes. In short, a major poll was wrong about the results of an election that had already taken place.

Another vital factor that may undermine the accuracy of political polls is uncertainty about who will actually vote on Election Day. Almost anything can turn a likely voter into a non-voter: a rainy day, an appointment, limited interest, unfinished work. So readers of polls should take into serious consideration the word "likely" when pollsters refer to their respondents as "likely voters."

During the 2000 presidential campaign the "expert" predictions were for a low turnout. This usually means that the most motivated voters, the best educated, and the richest will provide most of the votes. In this case, argues Anna Greenberg in The Nation, it also meant that the turnout would favor Republicans "since high socioeconomic status is associated with political participation and conservative political preferences.
To accommodate these predictions, the polls screened tightly for those most likely to vote...and in some cases 'weighted up' the GOP share of the sample. All those adjustments meant that most of the national polls going into Election Day showed a 2- to 5-point Bush lead." (Anna Greenberg, "Why the Polls Were Wrong," The Nation, 12/14/2000)

During the 2000 campaign, pollsters were also very much aware that they had overstated the size of Bill Clinton's lead over Bob Dole in the 1996 presidential election. One result was that they tended to ignore the big get-out-the-vote efforts by labor unions and civil rights organizations, both traditional supporters of the Democrats and thus of Gore.

Another reason why so many pollsters were wrong in 2000, writes Greenberg, is that "media outlets are competing for market share in an ever-expanding universe of polling data." In short, there is a great deal of competition among polling organizations, and each tends to develop a polling model (including assumptions about the size and nature of the electorate) that allows it to claim the most scientific method for predicting the result of an election. It is difficult for pollsters to change their model in the middle of a campaign based on new information about the electorate. "There is no insidious conspiracy to rig the polls in favor of one candidate or another. The national pollsters who partner with media outlets are respected survey researchers." But in 2000, for instance, more people turned out to vote than pollsters expected, and this skewed their results.

In 2004, as in 2000, polling is playing a major role in the presidential campaign. The media report and discuss the latest poll results endlessly, and not just those on who's ahead. What do voters think about Kerry's personality? Bush's plan for dealing with Iraq or taxes? Which candidate has the better healthcare proposals? Which is more honest?
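The "weighting up" Greenberg describes can be sketched as a simple reweighting: each respondent gets a weight equal to the turnout model's assumed share of their group divided by the group's share of the sample, so a model that assumes more Republicans will vote mechanically raises Bush's topline. All figures below are invented for illustration:

```python
# Invented sample of 8 respondents: (party, supports_bush)
sample = [
    ("R", 1), ("R", 1), ("R", 1),                        # 3 Republicans
    ("D", 0), ("D", 0), ("D", 0), ("D", 0), ("D", 1),    # 5 Democrats
]

raw_bush = sum(s for _, s in sample) / len(sample)       # 0.5 unweighted

# Assumed turnout model: a 50/50 R/D electorate. The sample's
# Republicans (3/8) get "weighted up" and Democrats (5/8) down.
target = {"R": 0.5, "D": 0.5}
counts = {"R": 3, "D": 5}
weight = {g: target[g] / (counts[g] / len(sample)) for g in counts}

weighted_bush = (
    sum(weight[g] * s for g, s in sample)
    / sum(weight[g] for g, _ in sample)
)
print(raw_bush, weighted_bush)   # 0.5 -> 0.6
```

The data never changed, but the assumed electorate moved the reported Bush share from 50 to 60 percent, which is why a turnout model is itself a possible source of error.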
Reporters and pundits analyze the slightest changes in what polls report about voter perceptions. These polls are as subject to error as those that claim to tell us who's ahead.

Polls also become tools for political manipulation. Political consultants for Bush and Kerry rely on the polls to propose shifts in what the candidates should say and where, hoping to influence not just voters but also the next set of polls. For poll results themselves can influence how and even whether people will vote. They can also play a big role in how successful fundraisers are, since positive poll results can help raise money for a candidate.

In the meantime, poll results fluctuate daily. Bush's numbers go up after the Republican convention, then down after his first debate with Kerry. Polls say the candidates are deadlocked. They say Kerry is picking up steam. They say Bush is inching ahead.

For the media, competition and conflict sell "news." They report the presidential campaign as if it were a horse race: Who's ahead in the polls? Why is so-and-so slipping behind? Can he catch up? What must he do and why? Who will make it to the finish line first? In the process, the issues and problems that the campaign is supposedly about can be lost.

Despite their known limitations and history of inaccuracy, polls help drive the election process. At their best, they can highlight the issues that most concern people and illuminate how they are thinking and feeling. They can also be fodder for media trivialization.

Michael Schwartz, an expert on polling, says there are three things worth remembering about polls:

1) Any individual poll can be off by 15 percent.

2) Any collection of honestly conducted polls looked at together will show a very wide range of results, and you won't be able to tell which of them is right.

3) Even the collective results of a large number of polls probably will not give you an accurate read on a close election.
"From these three points comes the most important conclusion of all: Don't let the polls determine what you think and do." (Michael Schwartz, Professor of Sociology, State University of New York at Stony Brook. Schwartz has worked for 30 years measuring and analyzing public opinion.)

1. Why was the Literary Digest poll of 1936 so wrong?

2. What do pollsters regard as key factors in "scientific" polling?

3. Why, despite their best efforts, may the pollsters' results be inaccurate?

4. Why were the Gore-Bush poll results inaccurate?

5. How can polls be "tools for political manipulation"? By whom and why? "Fodder for media trivialization"? By whom and why?

6. What does Michael Schwartz regard as his most important conclusion? Why?