Polit Behav
DOI 10.1007/s11109-007-9035-8

ORIGINAL PAPER

Measuring Exposure to Political Advertising in Surveys

Daniel Stevens

© Springer Science+Business Media, LLC 2007

D. Stevens (✉)
Department of Politics, University of Exeter, Cornwall Campus, Penryn, Cornwall, TR10 9EZ, England
e-mail: [email protected]

Abstract  Research on the influence of negative political advertising in America is characterized by fundamentally conflicting findings. In recent years, however, survey research using estimates of exposure based on a combination of self-reported television viewing habits and Campaign Media Analysis Group data (a database of all advertisements broadcast on national and cable television in the top 75 media markets) has argued that exposure to negative political advertising boosts interest in the campaign and turnout. This paper examines the measurement properties of self-reports of television viewing. I argue that the errors from common survey formats may be both nonrandom and larger than previously acknowledged. The nonrandom error is due to the tendency of politically knowledgeable individuals to be more sensitive to question format. Thus the inferences drawn about the relationship between political knowledge, exposure to negative ads, and political behavior are also sensitive to the measures used to estimate exposure. I demonstrate, however, that one commonly used measure of exposure, the log of estimated exposure, is not only more theoretically defensible but also alleviates some of the more serious problems due to measurement error.

Keywords  Political advertising · Measurement error · Self-reported television viewing · Survey research

The influence of political advertising campaigns in American elections, especially the negativity of ads, remains a subject of great interest and controversy. Geer (2006, 10) notes that in the last two months of the 2000 election, the Associated Press alone filed 95 stories on negative campaigning, while Brooks' (2006) more comprehensive search uncovered 410 articles on the relationship between campaign tone and turnout from 1994 to 2005. This flurry of attention has not produced consistent findings, however. Lau, Sigelman, Heldman, and Babbitt's (1999) meta-analysis, which examined findings from studies of the effects of negative ads between 1984 and 1999, uncovered a panoply of conflicting results and arguments. While scholars agree that negativity is increasingly prevalent in campaigns, the flow of research findings since Lau et al. remains contradictory (Clinton & Lapinski, 2004; Goldstein & Freedman, 2002a, 2002b; Kahn & Kenney, 2004; Stevens, 2005).

Some argue that the conflict is due to different research designs. As Martin puts it (2004, 545–546), "Evidence supporting the idea that negative campaigning discourages voter turnout comes primarily through experimental research, whereas evidence supporting the idea that negative campaigning encourages voter turnout comes from survey research." This assumes that the disagreement stems from differences in the set-up of experiments and survey research; it ignores problems within particular research designs. This paper argues that there are sizable problems with survey research estimates of the influence of political advertising that prompt doubt about the rosy conclusions characterized by Martin.
It examines survey estimates of ad exposure, in particular state-of-the-art measures using Campaign Media Analysis Group (CMAG) data, which capture the frequency of all political ads in the top 75 media markets in the United States.[1] Researchers using these measures claim that they avoid many of the disadvantages of other survey estimates of ad exposure while retaining several advantages over experiments. I examine the nature of errors in self-reports of television viewing habits, the foundation of the state-of-the-art measures, and discuss their impact on the inferences that are drawn about the relationship between exposure to advertising and political behavior via a two-part, multi-method research design. The first part is a within-subjects experiment that allows comparison of actual exposure to television with survey estimates of exposure. The experiment permits a more direct assessment of the properties of survey measures of exposure than alternatives, such as exploring their construct validity or "predictive power" (Ridout, Shah, Goldstein, & Franz, 2004). In the second part of the study, I examine American National Election Study (ANES) data in which respondents were asked about their viewing habits in three different question formats to see whether the variation in individuals' estimates parallels the findings from the diaries. Finding that it does, I then demonstrate that there are systematic individual-level characteristics, such as political knowledge, that drive discrepancies in estimates of television watching based on different question formats, and that such sensitivity to question wording has an important impact on estimates of the influence of political advertising.

To be sure, survey researchers are aware of the inevitability of measurement error in estimating exposure to advertising. However, this paper shows that the error in survey estimates may be both larger than assumed and, more importantly, correlated with individual-level characteristics associated with key dependent variables such as the propensity to vote and interest in the campaign. These systematic errors in measurement undoubtedly bias estimates of the influence of ads. This affects observed relationships and the inferences thereby drawn. Fortunately, I am also able to demonstrate that one of the more theoretically defensible, though not always used, measures of ad exposure can alleviate many of these problems.

[1] More recent CMAG data cover the top 100 media markets.

Survey Measures of Ad Exposure

There are three main approaches to estimating exposure to advertising in surveys. One does not attempt to measure individual exposure, instead examining the association between aggregate-level variation in negative advertising and survey measures (Finkel & Geer, 1998; Lau & Pomper, 2001). A second approach relies on some form of self-reported exposure to advertising (Wattenberg & Brians, 1999; West, 1994). The problem with these measures, as Ansolabehere, Iyengar, and Simon (1999) demonstrate, is endogeneity with key dependent variables: individuals who recall being exposed to advertising are also more likely to vote, for example. The current state-of-the-art measure, employing CMAG data, avoids the endogeneity problem because respondents are simply asked about their television viewing habits. It also lacks the weaknesses of aggregate estimates of exposure or of measures derived from ads made rather than ads aired (Freedman & Goldstein, 1999; Ridout et al., 2004).
CMAG data tell us where and when ads were aired in the largest 75 television markets (covering roughly three-quarters of the United States population). These data are then combined with reports of television viewing habits, generally taken from the American National Election Studies (ANES). If we know when and how often an individual watches television, from the ANES data, and the ads that were aired at that time, from the CMAG data, we can estimate the ads an individual is likely to have seen by multiplying one by the other. For example, an individual in Market A who watches television for three of the four hours between 4 pm and 8 pm (i.e., 75 percent of those hours), during which 100 political ads were aired, would be estimated to have been exposed to 75 ads. An individual in Market B with the same viewing habits, but where only 20 political ads were aired at that time, would be estimated to have been exposed to 15 ads.[2] Using these individuals' reported viewing habits from other times of day and weekends then allows us to build a database of individual exposure to campaign advertising.

[2] This calculation is based on self-reported viewing habits at particular times of day, or "dayparts." As discussed below, estimates of exposure using CMAG data have also based television viewing habits on how often individuals claim to watch particular shows. All these methods, however, are based on the same principle of multiplying self-reported viewing by ads aired (see Ridout et al., 2004).

Resulting estimates are sensitive both to whether individuals were likely to be watching television when ads were aired and to variation by television market. For example, regardless of how many ads are aired, if an individual does not watch television she will not have been exposed to any ads, and the estimate will be that she was exposed to no advertising.[3] Similarly, an avid television watcher in a market without competitive races may see less political advertising than an infrequent television watcher in a market with competitive races.

[3] Of course, she may still be indirectly affected by advertising: through media coverage, or if the candidates' ads become an issue of discussion in the campaign. However, that is beyond what can be explored here and beyond the range of most work using CMAG data.

These estimates thus address the main deficiency of survey measures of ad exposure, the lack of direct knowledge of actual exposure, while easily trumping experiments when it comes to external validity. Freedman and Goldstein (and their collaborators), in particular, have written several carefully argued articles using CMAG data (Freedman, Franz, & Goldstein, 2004; Freedman & Goldstein, 1999; Freedman, Goldstein,
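The multiplication rule just described, and the log measure mentioned in the abstract, can be illustrated with a minimal sketch in Python. The daypart labels, ad counts, and the convention of adding 1 before taking the log are assumptions for illustration only; this is not a reconstruction of the actual CMAG/ANES coding.

```python
import math

def estimated_exposure(viewing_share_by_daypart, ads_aired_by_daypart):
    """Sum over dayparts of (share of the daypart watched) x (ads aired in that daypart)."""
    return sum(
        viewing_share_by_daypart.get(daypart, 0.0) * ads
        for daypart, ads in ads_aired_by_daypart.items()
    )

# Market A: the respondent watches 3 of the 4 hours between 4 pm and 8 pm (share 0.75),
# during which 100 political ads were aired -> estimate of 75 ads.
market_a = estimated_exposure({"4pm-8pm": 0.75}, {"4pm-8pm": 100})

# Market B: same viewing habits, but only 20 ads aired -> estimate of 15 ads.
market_b = estimated_exposure({"4pm-8pm": 0.75}, {"4pm-8pm": 20})

# A "log of estimated exposure" measure; adding 1 before logging (one common
# convention, assumed here) keeps zero exposure defined.
log_a = math.log(market_a + 1)

print(market_a, market_b, round(log_a, 2))  # 75.0 15.0 4.33
```

In a full analysis the sum would run over every daypart (and weekend block) a respondent reports, with ad counts drawn from the respondent's own media market.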