“President” Landon and the 1936 Literary Digest Poll
Were Automobile and Telephone Owners to Blame?

Dominic Lusinchi

Social Science History 36:1 (Spring 2012) DOI 10.1215/01455532-1461650
© 2012 by Social Science History Association

The disastrous prediction of an Alf Landon victory in the 1936 presidential election by the Literary Digest poll is a landmark event in the history of American survey research in general and polling in particular. It marks both the demise of the straw poll, of which the Digest was the most conspicuous and well-regarded example, and the rise to prominence of the self-proclaimed "scientific" poll. Why did the Digest poll fail so miserably? One view has come to prevail over the years: because the Digest selected its sample primarily from telephone books and car registration lists, and since these contained, at the time, mostly well-to-do folks who would vote Republican, it is no wonder the magazine mistakenly predicted a Republican win. This "conventional explanation" has found its way into countless publications (scholarly and in the press) and college courses. It has been used to illustrate the disastrous effects of a poorly designed poll. But is it correct? Empirical evidence, in the form of a 1937 Gallup poll, shows that this "conventional explanation" is wrong, because voters with telephones and cars backed Franklin D. Roosevelt and because it was those who failed to participate in the poll (overwhelmingly supporters of Roosevelt) who were mainly responsible for the faulty prediction.

Polling and survey research more generally are data collection technologies widely used today in the social sciences. Many apprentices of these disciplines will take a college course in research methods and be told what constitutes a good survey and how to recognize a bad one.

Most likely, as an illustration of the latter category, the instructor will recount the story of the 1936 Literary Digest presidential poll: it predicted the wrong winner (Alf Landon, the Republican challenger), and, to top it all off, its forecast error was huge (nearly 20 points). They will also be told, most probably, that the reason for this fiasco was the magazine's reliance on telephone directories and car registration lists to select its sample,1 thereby biasing it, in the midst of the Depression, in favor of the better-off portion of the electorate, which was bound to vote for the Republican nominee.

Budding social scientists are not the only ones who will have heard this story. Just like baseball, polling is now entrenched in the American culture; the general public is inundated with polls, especially during an election year. From time to time, this tale will appear in the press (e.g., Higgins 2010) and in other publications accessible to the lay public (e.g., Zunz 1998: 64).

But is this account correct? And how did it come about? The answer to the first question will be the primary focus of this article. As for the second question, the explanation for what went wrong with the Digest poll was largely crafted by the new players in this arena: the so-called scientific poll takers (Archibald M. Crossley, George H. Gallup, and Elmo Roper). Indeed, the 1936 presidential election witnessed not only the demise of traditional straw polls, chiefly as a result of the Digest poll's massive blunder, but the emergence of "scientific" polling.
The advent and development of these new polls were made possible by their use of novel sampling methods and by their relative success in predicting the outcome of the election, but more specifically by outperforming the Digest poll.

Gallup, founder in 1935 of the American Institute of Public Opinion (AIPO), and his fellow pollsters (Crossley and Roper) had a major stake in this story. After their "triumph" over the Digest (Rusciano 2007: 315), they were particularly keen to explain the superiority of their approach and show why the old "methods" used by straw polls, of which the Digest was the most prestigious example, were fundamentally unsound. Crossley (1937: 27) put it most bluntly: "The Literary Digest method is outmoded." In contrast, the new "scientific" polling was capable, they believed, of rendering a true image of America. According to Roper (1940: 326), "Our purpose is to set up an America in microcosm." The superior value of this methodology, as Gallup (1972a: 146) saw it, rested on what he called "scientific sampling procedures"2 that could yield a representative sample of the population of interest:3 specifically, it could produce a miniature version of the electorate.4 To promote the new venture, Gallup had to demonstrate that straw polls in general, and the Digest in particular, were based on samples that were biased, that is, unrepresentative of the American electorate.5 All through his career Gallup reiterated his version of the failure of the 1936 Digest poll. His authority in the field of polling and survey research was so great that his interpretation has been repeated countless times to college students and others and has pervaded textbooks and manuals on survey research methods and other scholarly works as well as newspaper and magazine stories. Even in the face of
empirical evidence contradicting it (Squire 1988), this view persists.

Building on previous investigations (Bryson 1976; Cahalan 1989; Cahalan and Meier 1939; Squire 1988), the focus of this article is to show that the available empirical evidence regarding the causes of this event does not support what has been called the "conventional explanation" (Erikson and Tedin 1981: 953). It will become clear that the Digest's original list was no impediment to predicting the correct winner. The reason for the magazine's mistaken prognosis was that those who participated in the poll (the respondents) turned out heavily in favor of the Republican challenger, whereas most of the incumbent president's supporters failed to return their straw ballots; this is referred to as nonresponse bias.6

This article proceeds as follows. First, I describe how the conventional explanation emerged in the aftermath of the poll's blunder and how pervasive this explanation is in the scholarly literature and in other publications that refer to this event. Then, I examine previous attempts to use the only available empirical evidence about the Digest poll's failure, and I reanalyze this evidence with techniques advocated by Adam J. Berinsky (2006) for survey results originating from quota-controlled polls of the 1930s. I also look at the response rate of various sociodemographic groups. Next, I discuss the shortcomings of the Gallup poll of 1937 and see if they negate the conclusions of this reanalysis. Finally, I offer some general remarks about the Digest poll and propose lessons that can be drawn from it by social science researchers.
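The mechanics of nonresponse bias can be sketched with a small back-of-the-envelope calculation. The mailing size (10 million ballots), the roughly 2.4 million returns, and the Digest's published Landon-over-Roosevelt split are reported later in this article; the composition of the mailing list (55 percent Roosevelt supporters) and the per-camp response rates (19 percent versus 30 percent) are purely hypothetical values chosen for illustration, not estimates from the historical record.

```python
# Illustrative sketch: differential nonresponse alone can flip a poll.
# Known from the article: ~10 million ballots mailed, ~2.4 million returned.
# Hypothetical (for illustration only): list composition and response rates.

mailed = 10_000_000
share_fdr_on_list = 0.55      # hypothetical: the list itself leans Roosevelt
resp_rate_fdr = 0.19          # hypothetical: Roosevelt backers rarely reply
resp_rate_landon = 0.30       # hypothetical: Landon backers reply more often

fdr_returns = mailed * share_fdr_on_list * resp_rate_fdr
landon_returns = mailed * (1 - share_fdr_on_list) * resp_rate_landon
total_returns = fdr_returns + landon_returns

# Even though Roosevelt supporters are the majority of the list,
# the respondents (the returned ballots) show a Landon majority.
print(f"returned ballots: {total_returns:,.0f}")
print(f"overall response rate: {total_returns / mailed:.0%}")
print(f"Landon share among respondents: {landon_returns / total_returns:.0%}")
```

Under these assumed rates the arithmetic reproduces the poll's broad outline: about 2.4 million returns, a response rate near 24 percent, and a respondent pool favoring Landon despite a pro-Roosevelt list, which is precisely the pattern the nonresponse-bias account describes.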
The Genesis of the Conventional Explanation and Its Diffusion

A few days before the 1936 presidential election, in which the incumbent Roosevelt was being challenged by Landon, the Literary Digest, a weekly magazine with a surprisingly good track record in predicting the outcomes of previous electoral contests,7 published the results of its presidential poll: it predicted that Landon would defeat Roosevelt. The former would receive 54 percent of the total popular vote, while the incumbent president would get only 41 percent. In fact, Roosevelt obtained 61 percent of the vote (US Bureau of the Census 1970: 354). Although this severe blunder destroyed the credibility of the magazine, which went out of business two years later, it became a major milestone in the history of polling and survey research in the United States.

What happened? The magazine mailed out in the neighborhood of 10 million "ballots" (postcards) based on a number of sources: the rosters of clubs and associations, voter registration rolls, occupational records, classified mail-order and city directories (Literary Digest 1936a: 3–4). But the bulk of its list was based on telephone directories and registers of automobile owners.8 Of the 10 million cards sent out, nearly 2.4 million were returned, for a response rate of roughly 24 percent. How is it possible that with a sample of that size the magazine blundered so badly?

Much has been written about this event over the years, and many are those who have referred to it. For the purpose of this study, these commentators can be grouped into three categories. First, there are the protagonists: those who lived through the event and had a direct stake in explaining what had gone wrong.
These include the “scientific” pollsters (Crossley, Gallup, and Roper) who promoted the explanation that was to define the event for years to come.