
Article

Comparing and contrasting corrected errors at four newspapers

Newspaper Research Journal, 1-14. © 2018 NOND of AEJMC. Reprints and permissions: sagepub.com/journalsPermissions.nav. DOI: 10.1177/0739532918775685. journals.sagepub.com/home/nrj

By Kirstie Hettinga, Alyssa Appelman, Christopher Otmar, Alesandria Posada and Anne Thompson

Abstract

A content analysis of corrections (N = 507) from four influential newspapers—the New York Times, the Washington Post, The Wall Street Journal and the Los Angeles Times—shows that the errors they correct are similar in terms of location, type, impact and objectivity. Results are interpreted through democratic theory and are used to suggest ways for copy editors to most effectively proofread and fact-check.

Keywords corrections, quantitative, content analysis, chi-square, print and online newspapers, the , democratic theory, newspaper and online news division, norms and routines, copy editing

Hettinga is an assistant professor, California Lutheran University. Appelman is an assistant professor and Thompson is an adjunct instructor, both in the Department of Communication, Northern Kentucky University. Otmar is an MA student in communication, San Diego State University. Posada is a California middle school teacher. Hettinga is the corresponding author: [email protected].

In 1998, John Russial asked whether eliminating copy desks would invite trouble—or, more specifically, whether it would affect quality in newsrooms.1 Since then, the news industry has continued to shrink. The American Society of News Editors reported that the workforce of the copy desk was cut nearly in half between 2002 and 2012.2 Specifically, the number of copy editors in newsrooms dropped from 10,676 in 2002 to 5,675 in 2012.3 More newsrooms are also seeking to consolidate or outsource copy editing.4 Recently, Gannett combined one editing center in Corpus Christi with another in Arizona.5 The New York Times moved some wire positions, including copy editors, to Florida in 2009.6 This decrease in copy editors—“the last line of defense in terms of maintaining accuracy,”7 according to copy desk chief Hank Glamann, who worked in newsrooms including The Plain Dealer in Cleveland and the Houston Chronicle—raises questions about errors in modern news media. Journalists and analysts have been assessing whether the lack of editors has resulted in an increase in errors.
Washington Post ombudsman Andrew Alexander reported that “Growing numbers of readers are contacting the ombudsman to complain about typos and small errors” and noted that the number of copy editors in the three years preceding decreased “from about 75 to 43 through buyouts or voluntary departures.”8 In 2007, then public editor of the Orlando Sentinel Manning Pynn said, “August, September and October have accounted, thus far, for significantly more corrections of internally generated errors than the newspaper averaged in that three-month period during the prior five years.”9 He argued, “With fewer people to do [editing] now, less of that important work gets done, and the result is more published errors.”10 The Columbia Journalism Review discussed both columns and observed that the decrease in copy editors coincided with increased content: “Copy editors used to focus on a print edition. Now they have to deal with breaking news for the Web site, , and other online content. Fewer copy editors are doing more work than ever before.”11 This potential for error also might be exacerbated by the 24-hour news cycle, as well as the increased use of as a news platform.

With a news media climate positioned to invite error, it is critical to assess errors and their subsequent corrections. Such analyses could aid remaining editors in focusing their attention on common kinds of errors that frequently result in corrections. This study examines four influential publications to determine what types of errors they are correcting. Assessing multiple publications increases this study’s potential generalizability and allows the authors to address several previously unanswered questions: Have different corrections policies led to different types of corrections? Does article content or audience affect the impact of corrected errors? This study tests a new codebook created by Appelman and Hettinga12 and provides a snapshot of corrections in textual news media by exploring similarities and differences in those corrected errors.

Literature Review

In 2014, the American Press Institute reported that roughly 75 percent “of Americans get news at least daily”13 and that people trusted information they got from news outlets more than information they got secondhand.14 In addition, the most trusted news sources were legacy media (e.g., newspapers, radio and local TV news).15 However, overall trust in news media remains historically low, according to a 2015 Gallup poll.16

Corrections and the Democratic Theory of the Press

Citizens rely on news media to provide them the information they need to be free and self-governing. This falls under Dahl’s criterion of “enlightened understanding,” which is part of his theory of democratic process.17 Dahl said that people should have an “opportunity to acquire an understanding of . . . matters”18 and that actions that suppress information are, thus, counterintuitive to a democracy. Scheuer wrote that knowledge is most significant in democracies because “at least in theory and law, it is more widely diffused among the citizenry than elsewhere. Journalism is the most immediate and accessible source of such knowledge.”19 Subsequently, Scheuer contended that journalism quality and the quality of democracy are linked. As such, it is essential that news media amend the record when they make a mistake. While Maier found that “sources overwhelmingly said they did not seek corrections because the errors were considered inconsequential,”20 Nemeth and Sanders found that an increase in corrections “may have improved [the New York Times’] reputation for fairness and accountability.”21 Scheuer wrote, “Inaccuracy . . . is among the easiest of journalist sins to detect and correct.”22 Because corrections are a critical mechanism for news media’s pursuit of accuracy, they are a relevant and significant area of study.

Studying Influential Publications

News media research often focuses on large-circulation publications because they have larger amounts of content and are more likely to have searchable archives, both of which also help track trends over time. Elite newspapers also tend to have large circulations, which Lacy and Fico found could be related to higher quality.23 As far back as 1955, Breed wrote that small-town editors knew “that the New York Times employs many experienced specialists” and felt a sense of satisfaction in emulating them,24 although little research has explored whether small newspapers adopt policies and procedures from larger ones, especially considering staffing and cost issues. In all, previous research has used large newspapers because they have “a national constituency as well as a ‘corrections track record.’”25 This corrections track record is reflected, in part, by the standardization of corrections policies. The intermedia agenda-setting power of large and/or national media on each other and smaller media is especially well known. Golan in 2006 documented that the New York Times and the three major broadcast networks tended to cover the same nations and stories.26 Reese and Danielian said newspapers were quick to follow the agenda set by elite counterparts and that “the print media, and specifically the New York Times, set the agenda for the television networks.”27

Corrections Policies

The Associated Press Stylebook, a widely used guidebook for journalism style and accuracy, includes several corrections-related policies in its “Statement on News Values and Principles.” These include style notes about how to write corrections, as well as notes about their importance: “Staffers must notify supervisory editors as soon as possible of errors or potential errors, whether in their work or that of a colleague. . . . When we’re wrong, we must say so as soon as possible.”28 Craig Silverman of “Regret the Error” noted that formalized corrections practices emerged in the 1970s, and he cited the New York Times as an early example of a publication with a standalone policy.29 The New York Times periodically updates its policy.30 Its style guide, which can easily be found through an Internet search, reads in part: “. . . recognizes an ethical responsibility to correct all its factual errors, large and small (even misspellings of names), promptly and in a prominent reserved space in the paper.”31 A simple search also yields a corrections policy for the Washington Post. By comparison, it is fairly difficult to find information regarding corrections and submissions on the Los Angeles Times’ and The Wall Street Journal’s websites. The “L.A. Times Ethics Guidelines” is available, yet it includes this note, without linking to the referenced policy: “The Times’ corrections policy spells out in greater detail our procedures for handling complaints, corrections and retraction demands.”32 The reader representative for the Los Angeles Times wrote in an email, “The Times’ corrections guidelines aren’t available publicly.”33 Similarly, The Wall Street Journal’s policy was not found online, and attempts to contact the office were unsuccessful. The accessibility of such policies at the New York Times and the Washington Post may explain their selection for previous corrections research.

Research Questions

This study examines four influential newspapers—the New York Times, the Washington Post, The Wall Street Journal and the Los Angeles Times—to determine what types of errors they are correcting. It provides a snapshot of corrections in text-based news media and explores similarities and differences in those corrected errors by addressing the following:

RQ1: Do correction characteristics differ based on newspaper?

RQ2: Across newspapers, what is the relationship between corrected-error type, location, impact and objectivity?

Method

Sample

The researchers created a six-constructed-week sample of corrections, based on the method proposed and validated by Luke, Caburnay and Cohen.34 All dates from 2010 to 2014 were entered into a spreadsheet, and a random number generator was used to choose six Mondays, six Tuesdays and so on. All corrections published on these days were pulled from ProQuest National Newspaper Core for the four sample daily newspapers, which were among the top 10 circulation papers named by Agility PR Solutions as of 2017.35 This resulted in a sample of 638 corrections: 243 from the New York Times, 132 from the Washington Post, 151 from The Wall Street Journal and 112 from the Los Angeles Times.
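The constructed-week procedure can be sketched in a few lines of Python. This is a minimal illustration of the sampling logic described above, not the authors' actual spreadsheet procedure; the random seed is an arbitrary choice added for reproducibility.

```python
import random
from datetime import date, timedelta

# Build every date from 2010-01-01 through 2014-12-31, bucketed by weekday.
start, end = date(2010, 1, 1), date(2014, 12, 31)
by_weekday = {dow: [] for dow in range(7)}  # 0 = Monday ... 6 = Sunday
d = start
while d <= end:
    by_weekday[d.weekday()].append(d)
    d += timedelta(days=1)

# Randomly choose six Mondays, six Tuesdays and so on
# (a six-constructed-week sample).
rng = random.Random(2018)  # arbitrary seed, for reproducibility
sample_dates = sorted(d for dow in range(7) for d in rng.sample(by_weekday[dow], 6))

print(len(sample_dates))  # prints 42 (six constructed weeks of seven days)
```

Corrections published on each sampled date would then be pulled from the archive, as described above.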

Coding and Intercoder Reliability

For reliability testing, a subsample was pulled based on Lacy and Riffe.36 They suggest a sample of at least 125 when a 95 percent level of probability is appropriate, coders are looking at “meanings of content,” rather than “straightforward counting,” and the total sample is between 500 and 1,000 content units.37 Because the researchers wanted to ensure that all publication styles were accounted for, the subsample included the same number of corrections from each newspaper. The first 32 corrections from each newspaper (for a total of 128) were compiled and coded by five coders. Krippendorff’s alpha values were calculated using Hayes’s macro.38 Qualitative responses from coders were also considered, and the codebook was revised for clarity and specificity. The revised codebook with parenthetical clarifications was sent to coders, and a second training meeting was held. It was determined that the group was achieving sufficient agreement. The up-to-date version of the codebook is discussed below, along with the original Krippendorff’s alpha values. The remaining 510 corrections were divided among the three coders who were not involved in codebook creation. Three duplicated corrections were found and removed. The coding scheme changed from the first to the second iteration because of the discussed clarifications, so the corrections used for intercoder reliability could not be included in the final sample. In total, then, the final coded sample was N = 507.
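The reported reliability figures were computed with Hayes's macro; as a rough cross-check, nominal-level Krippendorff's alpha can also be computed directly from the standard coincidence-matrix formula. The sketch below is a generic implementation, with illustrative (not actual) coder judgments.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Nominal Krippendorff's alpha.

    `units` is a list of lists, one per coded unit, holding the values the
    coders assigned; units judged by fewer than two coders are ignored.
    """
    coincidences = Counter()
    for values in units:
        m = len(values)
        if m < 2:
            continue
        # Each ordered pair of judgments within a unit adds 1/(m - 1)
        # to the coincidence matrix.
        for a, b in permutations(values, 2):
            coincidences[(a, b)] += 1.0 / (m - 1)
    n_c = Counter()
    for (a, _), w in coincidences.items():
        n_c[a] += w
    n = sum(n_c.values())
    observed = sum(w for (a, b), w in coincidences.items() if a != b)
    expected = sum(n_c[a] * n_c[b] for a, b in permutations(n_c, 2))
    if expected == 0:
        return 1.0  # every judgment identical: no disagreement possible
    return 1.0 - (n - 1) * observed / expected

# Illustrative data: two coders rating four corrections (categories 1 and 2).
print(krippendorff_alpha_nominal([[1, 1], [2, 2], [1, 2], [1, 1]]))  # ≈ 0.533
```

Unlike simple percent agreement, alpha corrects for chance agreement and handles missing judgments, which is why it suits a design in which different coders handle different corrections.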

Codebook

The unit of analysis for this study was the correction. Some corrections fixed multiple mistakes, so coders were asked to note the number of fixed mistakes in each correction but to analyze only the first. The corrections were coded based on a codebook created and tested by Appelman and Hettinga.39 Their codebook updated classification schemes from Charnley’s 1936 work40 and Tillinghast’s 1982 work41 to account for modern media content. In addition to exploring similarities and differences in corrections, then, this study also tested the applicability of the new codebook. Because this analysis was part of that larger project intended to verify the codebook’s effectiveness, additional coding was done that is not reported here. For this analysis, researchers examined four coding categories: location, type, impact and objectivity. (Intercoder reliability data are listed for only the first round of coding because there was no coder overlap in final corrections assigned.)

Location

This code represents where on the page the error occurred. The initial intercoder reliability levels were fairly low (Krippendorff’s α = .55). After discussion and clarification, this measure was amended to include the following descriptions:

1 = Article (any substantial body of text; includes obituaries, sports, reviews, etc.)
2 = Headline (title of the article)
3 = Photo (actual photo is incorrect—wrong photo is published, etc.)
4 = Cutline (photo caption—a sentence or two that explains the image)
5 = Byline/credit (who wrote the article or took the photo)
6 = Infographic/graphic/chart/table/listing/calendar/sidebar/map (nonphoto visuals)
7 = Other

For example, the following correction from the New York Times was coded as 5 because it represented an error located in a byline or credit:

A picture with an obituary on Wednesday about the jazz musician and bandleader Buddy Collette carried an incorrect credit in some editions. The photograph of Mr. Collette performing at in 1997 was taken by Alan Nahigian not by Vince Bucci/Getty Images.42

Type

This code describes the type of mistake that was corrected. The initial intercoder reliability levels were acceptable (Krippendorff’s α = .62). After discussion and clarification, this measure was introduced with the following instructions and included the following options:

Go through the list, and if the error is not 1-6, then consider options 7-10. Use 11 when not 1-10.

1 = Misquotes (a quote is incorrect, attributed to the wrong person, etc.)
2 = Personal reference (relates to one person: name is incorrect or misspelled, personal history, familial relationships, demographics, nationality, ages, descriptions, job history)
3 = Numbers (money, sizes, attendance)
4 = Geography (locations, physical addresses, countries, directions, distance)
5 = Dates/times
6 = Contact information (telephone numbers, email addresses, websites)
7 = Other imprecise organization reference (names of companies// groups)
8 = Other imprecise legal/government reference (e.g., incorrect court ruling, wrong government agency identified)
9 = Other imprecise science/technology reference (e.g., battery incorrect)
10 = Other imprecise sports/entertainment reference (e.g., wrong name of concert, book title misspelled, wrong action or movement described in sports)
11 = Other

For example, the following correction from The Wall Street Journal was coded as 4 because it represented a geographic error: “Salem is the capital of the state of Oregon. A U.S. News article Friday on of city-hall jobs to nearby counties incorrectly identified Eugene as the state’s capital.”43

Impact

This code describes the potential impact of the original error on society. The initial intercoder reliability levels were low (Krippendorff’s α = .24), but a discussion as to the meaning of the measure alleviated the confusion. It was clarified that this measure referred to the societal impact of the corrected error for the general readership, not the impact for the specific people involved. The misspelling of a person’s name, for example, might be high impact for that person, but it would likely be low impact for most other people. After discussion, it was determined that the measure could remain as is, despite its potential for subjectivity. It is discussed further in the “Limitations” section. The measure included the following options:

1 = Low impact, may affect perception
2 = Some impact, affects noncrucial thinking/decision making
3 = High impact, affects action

For example, the following correction from the Los Angeles Times was coded as 1 because it represented a low-impact error:

In the May 29 Section A, the caption for a photo that accompanied a column about a new smartphone app for Malibu beach access identified the people pictured on ATVs as security guards. In fact, the photo showed a security guard on the far left and two Los Angeles County Sheriff’s deputies.44

Objectivity

This code describes the nature of the corrected error. The initial intercoder reliability levels were acceptable (Krippendorff’s α = .59), so this measure remained constant for final coding. It included the following options:

1 = Objective (factual, information that could be verified, found to be right or wrong)
2 = Subjective (error of meaning—misinterpretation, vagueness, nonspecificity)

For example, the following correction from the Washington Post was coded as 2 because it represented a subjective error:

A May 29 A-section article about major Democratic donors signing on to help Hillary Rodham Clinton in 2016 incorrectly described the involvement of Emily’s List founder Ellen Malcolm. Malcolm wrote a testimonial praising Clinton for Ready for Hillary, which the super PAC is using to solicit funds on its Web site, but Malcolm is not actively raising money for the group.45

Results

RQ1

Location

Across publications, the most common corrected-error location was the article (N = 426, 84.0 percent). This also was true within each newspaper. However, analysis shows a significant difference between newspaper and location, χ2 (18, N = 507) = 34.90, p = .010, Cramer’s V = .15, p = .01, especially when the article-based corrections were removed: χ2 (15, N = 81) = 32.40, p = .006, Cramer’s V = .365, p = .006. Corrected errors not in articles were most likely to be in cutlines in the New York Times and The Wall Street Journal, in headlines and other in the Los Angeles Times and in nonphoto visuals in the Washington Post. Table 1 shows this distribution.

Table 1
Location of Corrected Errors across Publications

        Article  Headline  Photo  Cutline  Byline/Credit  Nonphoto Visualsa  Other  Total
NYT         178         0      2       15              5                  9      2    211
LAT          65         4      1        1              1                  3      4     79
WSJ          97         2      4       10              0                  3      2    118
WP           86         0      3        4              1                  5      0     99
Total       426         6     10       30              7                 20      8    507

Note. The errors were coded from the New York Times (NYT), the Los Angeles Times (LAT), The Wall Street Journal (WSJ) and the Washington Post (WP). aNonphoto visuals include infographics, graphics, charts, tables, listings, calendars, sidebars and maps.
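As a check on the Location analysis, the χ2, degrees of freedom and Cramér's V reported above can be recomputed from the Table 1 counts. The sketch below is a generic Pearson chi-square computation in Python, not the authors' analysis script.

```python
# Table 1 counts: rows = NYT, LAT, WSJ, WP; columns = article, headline,
# photo, cutline, byline/credit, nonphoto visuals, other.
table = [
    [178, 0, 2, 15, 5, 9, 2],
    [65, 4, 1, 1, 1, 3, 4],
    [97, 2, 4, 10, 0, 3, 2],
    [86, 0, 3, 4, 1, 5, 0],
]

row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
n = sum(row_totals)  # 507 corrections

# Pearson chi-square: sum over cells of (observed - expected)^2 / expected,
# where expected = row total * column total / n.
chi2 = sum(
    (table[i][j] - row_totals[i] * col_totals[j] / n) ** 2
    / (row_totals[i] * col_totals[j] / n)
    for i in range(len(table))
    for j in range(len(table[0]))
)
df = (len(table) - 1) * (len(table[0]) - 1)  # (4 - 1) * (7 - 1) = 18

# Cramér's V normalizes chi-square by sample size and table shape.
cramers_v = (chi2 / (n * min(len(table) - 1, len(table[0]) - 1))) ** 0.5

print(round(chi2, 2), df, round(cramers_v, 2))  # χ2 ≈ 34.90, df = 18, V ≈ .15
```

The same computation, with the counts swapped out, applies to the Type, Impact and Objectivity comparisons reported below.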

Type

Across newspapers, the most common corrected-error type was personal reference (N = 176, 34.7 percent). This also was true within each newspaper. However, analysis shows a difference between newspaper and type that approaches significance, χ2 (30, N = 507) = 42.91, p = .06, Cramer’s V = .17, p = .06, and that is significant when the personal-reference corrections were removed: χ2 (27, N = 331) = 42.33, p = .03, Cramer’s V = .21, p = .03. Corrected errors not about personal references or other were most likely to be about misquotes in the Los Angeles Times and about numbers in the other three newspapers. Table 2 shows this distribution.

Impact

Across newspapers, most corrections were low impact (N = 442, 87.2 percent). This also was true within each newspaper. Analysis does not show a significant difference between newspaper and impact, χ2 (6, N = 507) = 4.708, p = .58, Cramer’s V = .07, p = .58. Table 3 shows this distribution, to illustrate the fact that most are low impact.

Objectivity

Across newspapers, results showed that most corrections were objective (N = 463, 91.3 percent). This also was true within each newspaper. Analysis does not show a significant difference between newspaper and objectivity, χ2 (3, N = 507) = 2.63, p = .45, Cramer’s V = .07, p = .45. Table 4 shows this distribution, to illustrate the fact that most are objective.

Table 2
Types of Corrected Errors across Publications

        Misquotes  Personal References  Numbers  Geography  Dates/Times  Contact Information  Othera  Total
NYT             8                   70       29         17           13                    2      72    211
LAT             9                   33        3          6            4                    0      24     79
WSJ             3                   41       16          6            6                    0      46    118
WP             10                   32       18          8            5                    1      25     99
Total          30                  176       66         37           28                    3     167    507

Note. NYT = the New York Times, LAT = the Los Angeles Times, WSJ = The Wall Street Journal and WP = the Washington Post. aOther includes five separate categories: imprecise organization reference, imprecise legal/government reference, imprecise science/technology reference, imprecise sports/entertainment reference and other.

RQ2

Analysis shows a significant relationship between error type and location, χ2 (60, N = 507) = 101.47, p = .001, Cramer’s V = .183, p = .001. Further analysis shows that all types are most common in articles (n = 426), but of the ones not in articles (n = 81), most were related to personal references (n = 38), other (n = 12) and numbers (n = 11).

Analysis also shows a significant relationship between error type and impact, χ2 (20, N = 507) = 86.79, p < .001, Cramer’s V = .29, p < .001. Further analysis shows that all types are most likely to be low impact (n = 442), but of the ones with at least some impact (n = 65), most were related to numbers (n = 15), geography (n = 9), dates/times (n = 9) and other imprecise legal/government references (n = 9).

Analysis also shows a significant relationship between location and impact, χ2 (12, N = 507) = 35.0, p < .001, Cramer’s V = .186, p < .001. Further analysis shows that all locations are most likely to be low impact (n = 442), but of the ones with at least some impact (n = 65), most were found in articles (n = 50) and nonphoto visuals (n = 10).

Analysis also shows a significant relationship between error type and objectivity, χ2 (10, N = 507) = 48.50, p < .001, Cramer’s V = .309, p < .001. Further analysis shows that all types are most likely to be objective (n = 463), but that subjective corrections (n = 44) were most likely to be misquotes (n = 9), other (n = 8) and personal references (n = 7).

Analysis showed no significant relationship between impact and objectivity, χ2 (2, N = 507) = 4.111, p = .128, Cramer’s V = .09, p = .128, or between objectivity and location, χ2 (6, N = 507) = 2.652, p = .851, Cramer’s V = .07, p = .851.

Summary of results

Corrections across all newspapers were mostly in articles, related to personal references, low impact and objective. Corrected errors not in articles were mostly in cutlines in the New York Times and The Wall Street Journal, in headlines and other in the Los Angeles Times and in nonphoto visuals in the Washington Post. Corrected errors not about personal references or other were mostly about misquotes in the Los Angeles Times and about numbers in the rest. In addition, there were significant relationships between error type, location, impact and objectivity. All types of corrected errors were most common in articles, but those not in articles were mostly related to personal references. Similarly, all types were most likely to be low impact, but those with at least some impact were mostly related to numbers and found in articles and nonphoto visuals. Finally, all types were most likely to be objective, but subjective ones were mostly misquotes.

Table 3
Impact of Corrected Errors across Publications

        Low Impact  Some Impact  High Impact  Total
NYT            176           32            3    211
LAT             71            7            1     79
WSJ            106           11            1    118
WP              89            9            1     99
Total          442           59            6    507

Note. Impact describes the potential impact of the original error on society. Low impact means may affect perception, Some impact means affects noncrucial thinking/decision making, and High impact means affects action. NYT = the New York Times, LAT = the Los Angeles Times, WSJ = The Wall Street Journal and WP = the Washington Post.

Discussion

Practical and Theoretical Implications

This study expands on previous research by increasing the number of newspapers examined. Looking at multiple newspapers allows for comparisons and somewhat increases this study’s potential generalizability, although the vast majority of U.S. newspapers are smaller and local. Overall, this study finds strong similarities across newspapers. The corrected errors were mostly in articles, personal references, low impact and objective. This suggests that, despite differences in content and policies, influential publications are taking similar steps, and potentially facing similar challenges, in their effort to amend the public record. This study also served as a test of a new codebook created and tested by Appelman and Hettinga.46 The results have different possible interpretations.

The newspapers’ differences could be of interest for journalists. For example, after personal references, the most common type of corrected error at the Los Angeles Times was misquotes; this could mean that editors at the Los Angeles Times may want to focus on quotes. Likewise, cutlines were the second most common location of corrected errors for the New York Times and The Wall Street Journal, so greater attention to those elements when proofreading may be beneficial to those outlets.

Other findings that may be useful to professionals include the relationships between variables. This study found that while articles were the most common location, corrections about other locations were most likely to be personal references. Similarly, corrections were likely to be objective, but the subjective ones were most likely to be misquotes. Perhaps emphasis should be placed on quotes and personal references above other types.

Theoretically, news media are thought to serve a critical function in democracy, which necessitates a focus on accuracy. The findings suggest that media’s functions are not being sufficiently served, though the present study cannot completely address this. The fact that most corrected errors in the sample were low impact could mean that these particular publications are making minor mistakes more often than more significant errors that could require retractions or editors’ notes, which this research does not address. In fact, the impact results mirror a previous study focused solely on the New York Times.47 Perhaps the decrease in copy editing staff48 has caused publications to make more careless mistakes, resulting in more low-impact corrections. Conversely, it may not be that publications are making more low-impact mistakes than high-impact mistakes; it might mean they are correcting more low-impact mistakes. Perhaps staffs’ transparency concerns resulted in fixing low-impact errors at the expense of more significant ones. Either way, the emphasis on low-impact errors could mean more trivial corrections are overshadowing those with higher impact.

Here, the role of standards editors and ombudsmen may come into play. The role of ombudsmen is somewhat controversial,49 and critics may argue that minor corrections are an attempt to seem responsive to public concerns without addressing larger issues. Standards editors and their influence on corrections may warrant future research. This study also suggests similarities among the high-impact corrections, which could help journalists.

Table 4
Objectivity of Corrected Errors across Publications

        Objective  Subjective  Total
NYT           192          19    211
LAT            69          10     79
WSJ           109           9    118
WP             93           6     99
Total         463          44    507

Note. Objectivity refers to the nature of the error: Objective (factual, information that could be verified, found to be right or wrong) or Subjective (error of meaning—misinterpretation, vagueness, nonspecificity). NYT = the New York Times, LAT = the Los Angeles Times, WSJ = The Wall Street Journal and WP = the Washington Post.
The mistakes most likely to affect readers’ ability to make decisions are those relating to numbers, dates/times, geography and other imprecise legal/government references. Interestingly, many of these errors are identified by Silverman’s accuracy checklist.50 The corrections addressing inaccurate legal and government references are of particular note under the concept of enlightened understanding, as Scheuer writes, “the general democratic responsibilities of news media include informing people on factual matters relevant to their civic duties.”51 To serve the democratic function of the press, reporters could use a checklist similar to Silverman’s, and editors could pay special attention to the topics identified as those most likely to affect decision making.

Limitations and Future Research

This research examined only four newspapers, and while these are industry leaders, smaller newspapers (which comprise more than 90 percent of all U.S. newspapers) probably have their own norms and policies, thereby greatly limiting generalizability. Community newspapers might receive additional reader feedback or might have smaller staffs, both of which could affect corrections. Conversely, smaller newspapers may have less formalized processes for reporting and correcting errors, which could lead to fewer corrections. A previous study found that published corrections were limited on college newspapers’ websites,52 while another found that smaller publications had substantial errors.53 Future research comparing community and national newspapers could address this concern.

Methodologically, some categories were significantly more common than others. This poses potential analytical problems, especially in comparing the less frequently used codes. A second concern was the low intercoder reliability on the impact measure. Discussion among coders resolved the issue; however, future research using this measure should discuss it carefully during coder training. The measure does leave some potential for subjectivity, based on the coder. For example, a less well-off person might think overstating a product’s value by $5 is high impact, but a wealthier person might consider it to be low impact. Future research that refines this measure could help address this issue. Incorporating another methodology, such as interviewing journalists, could add richness to this study, which was focused more on identifying and quantifying the phenomenon, rather than explaining and interpreting it. In addition, this study’s practical implications for journalists, while useful, do not mean that certain news outlets are less diligent than others.
Some publications could actually be making more mistakes but neglecting to correct them; that is, the number of corrections does not necessarily indicate the number of mistakes. This is of particular concern online, where some publications might correct mistakes without noting the revisions. This study also analyzed corrections independent of content, which could cause further misinterpretation. This study looked only at the number and types of corrections, not the number and types of content. It could be, for example, that The Wall Street Journal ran more corrections about cutlines simply because it publishes more photos. Perhaps the New York Times ran more corrections simply because it publishes the most content. This method also meant the amount of content was not compared with the number of errors. A future study analyzing content prevalence in addition to corrections prevalence could address these concerns.

Conclusion

Despite these cautions, results show that influential newspapers are publishing similar corrections, with slight differences in terms of type and location. The quantity and breadth of corrections suggest that, despite differences in content and policies, and despite cuts in copy-editing staff, newspapers are still putting some effort into accuracy and, in doing so, working to serve the democratic function of the news media.

Editors’ Note

This article was accepted for publication under the editorship of Sandra H. Utt and Elinor Kelley Grusin.

Notes

1. John Russial, “Goodbye Copy Desks, Hello Trouble?” Newspaper Research Journal 19, no. 2 (Spring 1998): 2-17.
2. Andrew Beaujon, “Copy Editors ‘Have Been Sacrificed More than Any Other Newsroom Category,’” Poynter, February 6, 2013.
3. Beaujon, “Copy Editors ‘Have Been Sacrificed More than Any Other Newsroom Category.’”
4. Jeff Sonderman, “Copy Editing, Page Design Jobs to Be Outsourced at Toronto Star,” Poynter, March 5, 2013.
5. Staff Reports, “Gannett, USA TODAY NETWORK Will Consolidate Corpus Christi Editing, Design Center.”
6. Anthony Clark, “Some N.Y.T. News Service Jobs Moving to Gainesville,” Ocala.com, November 13, 2009.
7. Sharyn Wizda, “Copy Desk Blues,” American Journalism Review 19, no. 7 (1997): 36.
8. Andrew Alexander, “Declining Editing Staff Leads to Rise in Errors,” The Washington Post, July 5, 2009, sec. Opinions.
9. Manning Pynn, “Errors Expose Need for Editing,” The Orlando Sentinel, October 28, 2007.
10. Pynn, “Errors Expose Need for Editing.”
11. Craig Silverman, “The Editing Equation: Fewer Copy Editors + Fewer Reporters + More Work = Trouble,” Columbia Journalism Review, July 10, 2009.
12. Alyssa Appelman and Kirstie E. Hettinga, “Error Message: Creation and Validation of a Revised Codebook for Analysis of Newspaper Corrections” (paper presented at the AEJMC conference, San Francisco, CA, August 2015).
13. American Press Institute, “The Personal News Cycle: How Americans Choose to Get News,” March 17, 2014, 1.
14. American Press Institute, “The Personal News Cycle,” 4.
15. American Press Institute, “The Personal News Cycle,” 11.
16. Rebecca Riffkin, “Americans’ Trust in Media Remains at Historical Low,” Gallup.com.
17. Robert Dahl, Democracy and Its Critics (Yale University Press, 1989).
18. Dahl, Democracy and Its Critics, 112.
19. Jeffrey Scheuer, The Big Picture: Why Democracies Need Journalistic Excellence (Routledge, 2008), xi.
20. Scott R. Maier, “Setting the Record Straight,” Journalism Practice 1, no. 1 (January 2007): 40.
21. Neil Nemeth and Craig Sanders, “Number of Corrections Increase at Two National Newspapers,” Newspaper Research Journal 30, no. 3 (Summer 2009): 90-104.
22. Scheuer, The Big Picture: Why Democracies Need Journalistic Excellence, 66.
23. Stephen Lacy and Frederick Fico, “The Link between Newspaper Content Quality and Circulation,” Newspaper Research Journal 12, no. 2 (1991): 46-57.
24. Warren Breed, “Newspaper ‘Opinion Leaders’ and Processes of Standardization,” Journalism & Mass Communication Quarterly 32, no. 3 (1955): 282.
25. Steve M. Barkin and Mark R. Levy, “All the News that’s Fit to Correct: Corrections in the Times and the Post,” Journalism Quarterly 60, no. 2 (Summer 1983): 220.
26. Guy Golan, “Inter-media Agenda Setting and Global News Coverage: Assessing the Influence of the New York Times on Three Network Television Evening News Programs,” Journalism Studies 7, no. 2 (2006): 329.
27. Stephen D. Reese and Lucig H. Danielian, “Intermedia Influence and the Drug Issue: Converging on Cocaine,” in Agenda Setting: Readings on Media, Public Policy and Policymaking, ed. David Protess and Maxwell McCombs (Hillsdale, NJ: Lawrence Erlbaum Associates, 1991), 247.
28. David Minthorn, Sally Jacobsen, and Paula Froke, eds., The Associated Press Stylebook and Briefing on Media Law 2015 (New York: The Associated Press, 2015), 311.
29. Craig Silverman, Regret the Error: How Media Mistakes Pollute the Press and Imperil Free Speech (Sterling Publishing Company, Inc., 2007).
30. Nemeth and Sanders, “Number of Corrections Increase at Two National Newspapers,” 100.
31. Margaret Sullivan, “Q&A on the Times’s Correction Policy,” 2006.
32. Los Angeles Times, “L.A. Times Ethics Guidelines,” Latimes.com, 2014.
33. Deirdre Edgar, email message to author, September 30, 2016.
34. Douglas A. Luke, Charlene A. Caburnay, and Elisia L. Cohen, “How Much Is Enough? New Recommendations for Using Constructed Week Sampling in Newspaper Content Analysis of Health Stories,” Communication Methods and Measures 5, no. 1 (2011): 76-91.
35. “Top 15 U.S. Newspapers by Circulation,” 2017.
36. Stephen Lacy and Daniel Riffe, “Sampling Error and Selecting Intercoder Reliability Samples for Nominal Content Categories,” Journalism & Mass Communication Quarterly 73, no. 4 (1996): 963-973.
37. Lacy and Riffe, “Sampling Error and Selecting Intercoder Reliability Samples for Nominal Content Categories,” 969.
38. Andrew F. Hayes and Klaus Krippendorff, “Answering the Call for a Standard Reliability Measure for Coding Data,” Communication Methods and Measures 1, no. 1 (2007): 77-89.
39. Appelman and Hettinga, “Error Message.”
40. Mitchell V. Charnley, “Preliminary Notes on a Study of Newspaper Accuracy,” Journalism Quarterly 13, no. 4 (1936): 394-401.
41. William A. Tillinghast, “Newspaper Errors: Reporters Dispute Most Source Claims,” Newspaper Research Journal 3, no. 4 (July 1982): 15-23.
42. “Corrections,” The New York Times, October 1, 2010.
43. “Corrections & Amplifications,” The Wall Street Journal, July 21, 2012.
44. “For the Record,” The Los Angeles Times, May 31, 2013.
45. “National,” The Washington Post, May 31, 2013.
46. Appelman and Hettinga, “Error Message.”
47. Kirstie E. Hettinga and Alyssa Appelman, “Corrections of Newspaper Errors Have Little Impact,” Newspaper Research Journal 35, no. 1 (Winter 2014): 51-63.
48. Beaujon, “Copy Editors ‘Have Been Sacrificed More than Any Other Newsroom Category.’”
49. Christopher Meyers, “Creating an Effective Newspaper Ombudsman Position,” Journal of Mass Media Ethics 15, no. 4 (December 1, 2000): 248-256, doi:10.1207/S15327728JMME1504_4.
50. Steve Buttry, “My Version of Craig Silverman’s Accuracy Checklist,” The Buttry Diary, January 4, 2011.
51. Scheuer, The Big Picture: Why Democracies Need Journalistic Excellence, 27.
52. Kirstie E. Hettinga, Rosie Clark, and Alyssa Appelman, “Exploring the Use of Corrections on College Newspapers’ Websites,” College Media Review 53 (2016): 4-17.
53. Donica Mensing and Merlyn Oliver, “Editors at Small Newspapers Say Error Problems Serious,” Newspaper Research Journal 26, no. 4 (Fall 2005): 6-21.