
Highly Partisan and Blatantly Wrong: Analyzing Publishers' Critiques of Google's Reviewed Claims

Emma Lurie
School of Information
University of California, Berkeley
[email protected]

Eni Mustafaraj
Department of Computer Science
Wellesley College
[email protected]

Abstract

Google's reviewed claims feature was an early attempt to incorporate additional credibility signals from fact-checking onto the search results page. The feature, which appeared when users searched for the name of a subset of news publishers, was criticized by dozens of publishers for its errors and alleged anti-conservative bias. By conducting an audit of news publisher search results and focusing on the critiques of publishers, we find that there is a lack of consensus among fact-checking stakeholders that may be important to address in future iterations of public-facing fact-checking tools. In particular, we find that a lack of transparency, coupled with a lack of consensus on what makes a fact-check relevant to a news article, led to the breakdown of reviewed claims.

1 Introduction

The online fact-checking ecosystem is a complex sociotechnical system that includes technology platforms, news publishers, fact-checking organizations, and online information seekers. As platforms try to limit harmful misinformation on their platforms and fact-checkers work to increase the reach of their fact-check articles, there have been a number of collaborations, some of which are ongoing,[1] and others that can be considered failed experiments.[2] We focus on one "failed" experiment, that of Google's reviewed claims. Focusing on the breakdown (Mulligan and Griffin, 2018; Akrich, 1992; Bucher, 2017) or failure of a specific feature like reviewed claims enables us to explore both conceptual misunderstandings between stakeholders as well as inconsistencies arising from the technical implementation of the feature.

In November 2017, as part of an expanded effort to provide users with context about news publishers, Google released reviewed claims (see Figures 1 and 2). The reviewed claims feature displayed third-party fact-checks about content produced by a subset of news publishers. This feature was publicized as a meaningful aid that helps information seekers identify misinformation.[3] It was one of several ways that Google tried to highlight fact-checks in search results, in accordance with best practices in media literacy (Wineburg and McGrew, 2017).

However, several news publishers complained about this feature in January 2018. They alleged that Google was displaying partisan bias and claimed that several of the fact-checks listed in their reviewed claims components were not relevant to the associated content.

Figure 1: The SERP of a news publisher with the reviewed claims feature circled, as it appeared in January 2018. The Reviewed Claims component contained fact-checks of articles produced by that news publisher.

[1] https://www.blog.google/topics/journalism-news/building-trust-online-partnering-international-fact-checking-network/
[2] https://medium.com/facebook-design/designing-against-misinformation-e5846b3aa1e2
[3] https://www.blog.google/products/search/learn-more-about-publishers-google/

Soon after, Google removed reviewed claims from the search engine results page (SERP), citing "bugs" in the feature's implementation.[4]

Reviewed claims (RC) is a worthwhile case study for several reasons. First, RC is an example of a major platform using fact-checks as a publisher-level credibility signal (i.e. displaying fact-checks for publishers, not individual articles) for information seekers. Researchers (Wineburg et al., 2016; Zhang et al., 2018) have called for greater context and information cues for information seekers to evaluate the credibility of surfaced content. Second, purportedly without informing fact-checking organizations, news publishers, or information seekers, Google leveraged automated fact-checking techniques. Automated fact-checking, which uses algorithms to increase the scale and reach of fact-checks, raises additional questions about the values, such as a commitment to transparency,[5] that undergird fact-checking. Third, while publishers often raised poorly supported claims of anti-conservative bias, publishers also raised substantive concerns about particular stories that were surfaced in the reviewed claims panel. These are of particular interest in this paper, as they offer the opportunity for various stakeholders to build a deeper understanding of the challenges that will confront future automated fact-checking features.

The following research question guides our work: what examples and issues did publishers raise to contest the reviewed claims feature? We explore this question through 1) a data collection effort[6] that measured the prevalence and scope of the RC feature and 2) a more focused and qualitative thematic analysis of 45 online news articles about the reviewed claims feature. Our major takeaway from this case study is that the implementation of algorithmic solutions in support of fact-checking is a complex process that requires and deserves the kind of transparency that is so valued by the fact-checking community. Our findings contribute by expanding the conversation around the efficacy of fact-checking interventions in the informational ecosystem.

Figure 2: The Reviewed Claim feature attributes this claim to The Daily Caller. However, the PolitiFact fact-check does not mention the Daily Caller. This is an example of an algorithmically assigned fact-check.

2 Auditing reviewed claims

To situate publishers' claims about the reviewed claims feature, we report our findings from a January 2018 audit of news publisher SERPs. In summary, we identify 59 publisher names that displayed a RC tab when their names were searched. These RC tabs were collected incidentally in a larger audit of the state of partisan and unreliable news publisher SERPs. While our data collection was not designed to be an exhaustive list of publishers with a RC tab, it does provide the most complete picture of the reviewed claims feature to date. In Table 1, we report the 59 publishers with the reviewed claims tab, accompanied by their bias/accuracy label from BuzzFeed News or Media Bias/Fact Check. While we find that more publishers on the "right" had RC tabs, there were more "left" sites than what was alleged by conservative publishers.

Our manual examination of the 171 unique fact-checks attributed to the 59 publishers leads us to conclude that Google used an algorithm to match existing fact-checks (that did not name the publisher) to articles by that publisher. This practice is called claim matching and is an automated fact-checking technique that matches articles or statements (i.e. claims) to existing fact-checks. The use of this method is significant because Google did not disclose the use of automated fact-checking in its documentation[7][8] about the feature.

[4] https://www.poynter.org/fact-checking/2018/blame-bugs-not-partisanship-for-google-wrongly-appending-a-fact-check-to-the-daily-caller/
[5] https://www.ifcncodeofprinciples.poynter.org/know-more/the-commitments-of-the-code-of-principles
[6] https://github.com/emmalurie/reviewed-claims
[7] https://support.google.com/websearch/answer/7568277?hl=en
[8] https://www.blog.google/products/search/learn-more-about-publishers-google/
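Because Google never documented the matching step, it helps to be concrete about what claim matching involves. Below is a minimal sketch of the general technique, assuming a simple bag-of-words similarity model; it is not a reconstruction of Google's system, which Wang et al. (2018) describe in considerably more sophisticated form. The sample claims, the TF-IDF representation, and the 0.35 threshold are illustrative assumptions only.

```python
# Minimal claim matching sketch: score existing fact-checked claims
# against a publisher's article text and keep the ones above a cutoff.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def match_claims(article_text, claims, threshold=0.35):
    """Return (claim, similarity) pairs whose score exceeds the threshold."""
    vectorizer = TfidfVectorizer(stop_words="english")
    # Fit on the claims plus the article so all texts share one vocabulary.
    matrix = vectorizer.fit_transform(claims + [article_text])
    scores = cosine_similarity(matrix[:-1], matrix[-1]).ravel()
    return [(c, round(s, 2)) for c, s in zip(claims, scores) if s >= threshold]

claims = [
    "Robert Mueller's hires are all Hillary Clinton supporters.",
    "Bill Clinton was expelled from Oxford over an alleged rape.",
]
article = "Special counsel Robert Mueller hired a Clinton donor to aid the investigation."
print(match_claims(article, claims))
```

A matcher of this kind will pair a fact-check with any article that merely shares vocabulary with the rated claim, which foreshadows the relevance disputes examined in Section 3.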

List of websites to which Google assigned a reviewed claims tab (ordered by Alexa Rank), by Bias or Accuracy label:

Right: Breitbart, The Daily Caller, [name missing], WND, [name missing], [name missing], Free Republic, The Conservative Tree House, OAN Network, Big League Politics, The Political Insider, Frontpage Mag, American Greatness, American Renaissance, Bearing Arms, Red State Watcher, Truthfeed, 100 Percent Fed Up, Freedom Outpost, Commentary Magazine, The Millennium Report, VDARE, Sparta Report, En Volve, Conservative Fighters, Silence is Consent, America's Freedom Fighters, Freedom Daily, American News, American Conservative Herald, The New York Evening

Left: Upworthy, Palmer Report, Democratic Underground, Counterpunch, Rightwingwatch, Bipartisan Report, True Activist, OpEd News, American Herald Tribune, Occupy Democrats, Egberto Willies, If You Only News, American News X, Resistance Report

Conspiracy / Pseudoscience: Zero Hedge, Before It's News, Natural News, Above Top Secret, Collective Evolution, Your News Wire, Investment Watch Blog, Awareness Act, Activist Post, Renegade Tribune, The Common Sense Show, Fellowship of the Minds, Intellihub, 21st Century Wire

Table 1: The list of all 59 websites with a reviewed claims (RC) tab in our January 2018 dataset, grouped by political bias or factual accuracy. The labels come from BuzzFeed News or Media Bias/Fact Check.

2.1 Methods: Auditing RC

In January 2018, as part of an audit of news publisher SERPs, we discovered that the SERPs of 59 publishers contained a reviewed claims component. These 1,150 publisher SERPs were sourced from prominent aggregated lists in the research and media literacy community.[9][10][11][12][13] The combined list of websites was designed to be a comprehensive survey of active partisan or unreliable U.S. publishers in late 2017. In total, the aggregated dataset amounts to 1,150 unique website domain names.

We use Selenium, a web browser automation tool, to issue queries to Google search. The queries sent to the browser were the domain names without the extension (e.g. "dailycaller" from "dailycaller.com"). Due to 1) the queries being domain names (e.g. "dailycaller" instead of "daily caller"), 2) some websites having stopped operation, and 3) some publishers having names that are hard to disambiguate (antiwar.com), only in 50% of results is the corresponding news publisher website ranked 1st on the SERP. Thus, we might be undercounting the news publishers with a knowledge panel. For established, well-known publishers, Google correctly inferred the news publisher website.

Each RC tab contained links to the fact-check and the original article (see Figure 2). The first author manually reviewed the 171 unique fact-checks that appeared on the 59 RC tabs to extract: 1) the ClaimReview markup and 2) the mentions of all news publishers and URLs identified as claimants in the fact-check article.

2.2 Results: Auditing RC

Of the 569 domain name searches that surfaced news publishers as their first link, 59 SERPs had a RC tab, with 221 fact-checks in total (171 unique fact-checks). See Table 1 for the list of the 59 publishers. We found that 10 fact-checking organizations wrote the fact-checks that appeared in the reviewed claims tabs, but a single fact-checking site accounted for 66% (113 out of 171) of the fact-checks. There were 20 different ratings of the claims made in the fact-checking articles (e.g. false, pants on fire, unproven, mixture, etc.) that surfaced in each claim description in RC.

We then focused on the 128 (58%) fact-checks that did not mention or link to the source that had published the claim being fact-checked (that is, neither the text nor the metadata of the fact-check article mentioned the source).

[9] https://github.com/BuzzFeedNews/2017-08-partisan-sites-and-facebook-pages
[10] https://www.politifact.com/punditfact/article/2017/apr/20/politifacts-guide-fake-news-websites-and-what-they/
[11] http://d279m997dpfwgl.cloudfront.net/wp/2016/11/Resource-False-Misleading-Clickbait-y-and-Satirical-%E2%80%9CNews%E2%80%9D-Sources-1.pdf
[12] https://github.com/BuzzFeedNews/2017-12-fake-news-top-50/tree/master/data
[13] (Starbird, 2017)
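To make the collection step in Section 2.1 concrete, the following sketch shows how such a query loop can be scripted with Selenium. It is an illustration rather than our exact audit code (the data is available in the repository linked in footnote [6]); in particular, the result-page CSS selector is an assumption, since Google's SERP markup is undocumented and changes frequently.

```python
# Sketch of the SERP audit loop: query Google for each bare domain name
# and record whether the publisher's own site is the first result.
from selenium import webdriver
from selenium.webdriver.common.by import By

domains = ["dailycaller.com", "breitbart.com", "upworthy.com"]  # subset of the 1,150

driver = webdriver.Chrome()  # assumes a local ChromeDriver installation
ranked_first = {}
for domain in domains:
    query = domain.rsplit(".", 1)[0]  # strip the extension, e.g. "dailycaller"
    driver.get(f"https://www.google.com/search?q={query}")
    links = driver.find_elements(By.CSS_SELECTOR, "div#search a[href]")  # illustrative selector
    first_href = links[0].get_attribute("href") if links else ""
    ranked_first[domain] = domain in first_href  # is the publisher ranked 1st?
driver.quit()
print(ranked_first)
```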

The finding that a majority of the surfaced fact-checks never named the source serves as key evidence that Google used a claim matching algorithm to identify articles that would be relevant to fact-checks. Google researchers published work detailing how such a system would work (Wang et al., 2018). Additionally, Google attributed the failure of RC to problems with its "algorithm." Together, these three facts provide us with confidence that Google used claim matching in the reviewed claims feature. For the rest of the paper, we assume that Google integrated automated fact-checking into reviewed claims.

3 Publisher Critiques of RC

3.1 Methods: Thematic Analysis

The purpose of reporting the audit study is to provide the necessary background to evaluate the publishers' critiques of reviewed claims. In March 2019, we collected 45 online news articles written about the RC feature. To collect these articles, we searched Google with the search term "reviewed claims" + publisher name for each of the 59 publishers. We limited the date of the articles from November 2017 to March 2018. We then repeated the same search with "reviewed claims" alone to identify articles from additional publishers. 18 of the articles originated from a domain that had a reviewed claims tab. Almost none of these 18 came from "left" sources. The remaining 27 came from other publishers that did not have a RC tab. All of the articles were written in January 2018.

We conducted a thematic analysis (Braun and Clarke, 2006) of the data. We are particularly interested in latent themes in the articles that reveal more than the surface-level arguments the publishers were making. While we were aware of existing research and theories, we coded using an inductive approach that did not start with any preconceived notions of what we would find in the data.

3.2 Disputing fact-checks

Before reporting key findings from our thematic analysis, we present two excerpts of the critiques of RC that highlight the types of issues surfaced by our analysis.

3.2.1 Example 1: Who said that Robert Mueller hired all Clinton supporters?

The first is an excerpt from the article "Google's New Fact-Check Feature Almost Exclusively Targets Conservative Sites," written by the Daily Caller (DC):

"...The third-party "fact-checking" organization says the "claim" in a DC article that special Counsel Robert Mueller is hiring people that "are all Hillary Clinton supporters" is misleading, if not false. The problem is that TheDC's article makes no such claim. Their cited language doesn't even appear in the article. Worse yet, there was no language trying to make it seem that the investigation into the Trump administration and Russia is entirely comprised of Clinton donors. The story simply contained the news: Mueller hired a donor to aid the investigation into President Trump."[14]

In a follow-up article by the DC, there is a quote from the third-party fact-checking organization, the Washington Post, in response to the Daily Caller's claims:

"We went back and double-checked the story and the information submitted to Google, and The Daily Caller was not mentioned at all, even in links," Kristine Coratti, vice president of communications at WaPo, told TheDCNF. "We clearly labeled the source, so I cannot speak to how The Daily Caller ended up being erroneously listed as the source of the fact-checked quote in this case."[15]

Google removed this particular fact-check from DC's reviewed claims tab after their complaint was published. Nevertheless, several important issues are surfaced in this example. First is that the Washington Post cannot explain the appearance of their fact-check on reviewed claims. This implies a breakdown in communication between Google and its fact-checking partners about the details of the feature.

[14] https://dailycaller.com/2018/01/09/googles-new-fact-check-feature-almost-exclusively-targets-conservative-sites/
[15] https://dailycaller.com/2018/01/11/wapo-we-didnt-attack-the-daily-caller-and-dont-know-why-google-is/

Second is the issue of defining what is a relevant fact-check. For political misinformation, the ideal fact-check, in terms of information seeker preference and corrective effect, is a combination of a truth scale rating and a longer explanation with additional context (Amazeen et al., 2015).

While both the Washington Post and DC agree that the fact-check claim that was evaluated on the truth scale was not in the DC article, the DC article lists multiple members of Mueller's team that donated to Democratic campaigns and then quotes commentary casting aspersions on the independence of the Mueller investigation because of these hires. The Washington Post fact-check provides context on the question "Do Russia probe attorneys' donations to Democrats threaten their independence?" This leaves us with three motivating questions for evaluating whether a fact-check is relevant to a news article.

First, does the original statement have to appear verbatim in the article to be a relevant claim match? Claim matching systems (Hassan et al., 2017; Wang et al., 2018) have operated as if the answer to this question is no, but the Washington Post response does not reflect that understanding. Graves finds that many fact-checking organizations do not list all claimants because it is unrealistic and costly for fact-checking organizations to continually update all claimants, especially when the media ecosystem recycles so many persistent claims (Graves, 2017).

Second, does the claim that is in the metadata and rated in the fact-check have to appear in the news article? The DC as well as other publishers seem to operate under that belief. This seems reasonable, but many fact-checks address a specific claim and then give additional context that is applicable to a broader set of claims.

This leads to the third question: is it sufficient (with regard to a relevant claim match) for a fact-check to address the context of a misleading article, even if that context is not fully reflected in the claim listed in the metadata? The Washington Post fact-check provides a detailed explanation about why campaign contributions are not evidence of bias in Department of Justice investigations, which is the assumption at the core of the DC article. Put another way, changing the claim the Washington Post embedded in the metadata, but not altering the content of the article, may change whether the fact-check is relevant to the DC article.

3.2.2 Example 2: We didn't say it, and we only said something similar one time!

The Federalist, a U.S. right-wing news publisher, wrote an additional critique of the reviewed claims feature. Below is one of the examples they raise to critique the fact-checks on reviewed claims:

"Out of many thousands of pieces published by The Federalist over the past four years, a single one mentions the name Eileen Wellstone. That article, detailing the sordid history of Bill Clinton, mentions her name exactly once: "Another woman, Eileen Wellstone, claimed Clinton raped her while he was at Oxford University in the late 1960s."

For some reason, in this "reviewed claim" against The Federalist, Google sends the reader to a Snopes fact-check that argues that Clinton wasn't expelled from Oxford over this alleged rape — a point I concede sounds completely accurate and is also an assertion that no one has ever made in this publication.

So the question is, does Google tag every article that relays accusations of sexual misconduct or rape as "unproven," or just the ones against Bill Clinton? Or is the mention of Wellstone specifically worthy of a claim? The Wellstone case has not only been cited in all types of publications...[including] by The Washington Post's own fact-checker."[16]

This example also raises multiple points of potential contention for fact-checking ecosystem stakeholders. One is about ambiguous fact-check labels, where the claim in the fact-check article is rated as "unproven" rather than "false." This issue extends beyond this example, as The Daily Caller and the Daily Wire both contested fact-checks that rated their claims as "mixed."

However, the Google documentation for RC explains that the feature "...simply indicates that a publisher is writing about topics that attract the attention of third-party fact-checking. A publisher could consistently be fact-checked by third-parties as accurate and have the RC tab appear in their Knowledge Panel."[17]

[16] https://thefederalist.com/2018/01/10/googles-new-factchecker-is-partisan-garbage/
[17] https://support.google.com/websearch/answer/7568277
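The "claim listed in the metadata" in the questions above refers to schema.org's ClaimReview markup, which fact-checkers embed in their articles and which Google's fact-check features consume. The sketch below reads the relevant fields from a representative ClaimReview object; all values are hypothetical, and real pages vary in which optional fields, notably the claimant, they populate.

```python
import json

# A representative ClaimReview object in the shape defined by schema.org.
# Fact-check pages embed an object like this as JSON-LD inside a
# <script type="application/ld+json"> tag; the values here are invented.
claim_review = json.loads("""
{
  "@context": "https://schema.org",
  "@type": "ClaimReview",
  "claimReviewed": "Robert Mueller hired only Hillary Clinton supporters",
  "reviewRating": {"@type": "Rating", "alternateName": "False"},
  "itemReviewed": {
    "@type": "Claim",
    "author": {"@type": "Organization", "name": "Example Publisher"}
  }
}
""")

# The rated claim lives in this metadata even when the fact-checked article
# never quotes it verbatim, which is the mismatch publishers contested.
print(claim_review["claimReviewed"])
print(claim_review["reviewRating"]["alternateName"])

# The claimant field is optional; as reported in Section 2.2, 58% of the
# audited fact-checks named no claimant in either their text or metadata.
author = claim_review.get("itemReviewed", {}).get("author", {})
print(author.get("name", "no claimant listed"))
```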

Another issue surfaced by the Federalist is that, because they have only written a single article about Eileen Wellstone, it is not worthy of appearing on the RC tab. This requirement of having to repeatedly write about a topic is not raised at all in the Google documentation for reviewed claims or by fact-checkers. A distinct but related argument that the Federalist raises is that their article details many allegations against Bill Clinton, and Eileen Wellstone was only mentioned in passing in a single sentence. In the previous example from the Daily Caller, the critique was that the fact-check was overly broad relative to the article's specific claim, but here the issue is reversed: the news article was reporting on a pattern and the fact-check is about a single disputed claim.

3.3 Results: Major Themes

3.3.1 Anti-conservative bias: A deep story

Throughout the news articles we analyzed, we find that the belief that Google discriminates against conservatives was a major point in almost all of the articles. Many of the articles are framed around Google's partisanship rather than the specifics of the feature itself. For most articles, other "episodes" were mentioned in passing. In particular, many articles covering the reviewed claims feature also referenced the firing of James Damore, the Google software engineer who wrote a memo criticizing Google's diversity policies. To be clear, James Damore did not work on any element of fact-checking at Google; however, he had filed a lawsuit around the same time as the Daily Caller first publicized its complaints about the RC feature. In the most detailed articles, the evidence of anti-conservative bias was that "liberal" peer publishers did not have the reviewed claims tab, resulting in an imbalanced, biased feature. For example, the Daily Caller writes:

"When searching for a media outlet that leans right, like The Daily Caller (TheDC), Google gives users details on the sidebar, including what topics the site typically writes about, as well as a section titled 'reviewed claims'... while equally partisan sites like Vox, ThinkProgress, Slate, The Huffington Post, Daily Kos, Salon, Vice and Mother Jones are spared."[18]

As shown in Table 1, a number of liberal sites like Palmer Report, Democratic Underground, and Upworthy had reviewed claims. However, more right-leaning sites had the feature. At the same time, many major conservative media sources, such as the Washington Times, did not have a RC tab.

This notion of anti-conservative bias is what sociologist Arlie Hochschild describes as a deep story. A deep story is defined as a metaphor-based narrative that is a "feels-as-if story" (Hochschild, 2016). Both Hochschild (Hochschild, 2016) and Polletta (Polletta and Callahan, 2019) find that these deep stories unfold in an episodic way, so that spreaders of the deep story need only refer to specific episodes to remind listeners of the entire deep story. Nicole Hemmer (Hemmer, 2016) explains how conservative media created the story of anti-conservative bias in mainstream media in the early 1960s, and Alice Marwick goes further to argue that anti-conservative bias in technology companies is a growing and significant part of the overall anti-conservative bias deep story (Marwick, 2018).

3.3.2 Conflicting narratives of responsibility

In the publishers' critiques of reviewed claims, there were conflicting narratives about which actors were responsible for the reviewed claims breakdown. Answers included Google, "an algorithm," and individual fact-checkers.

A number of publishers, particularly those invested in the deep story of anti-conservative bias, discussed "Google's fact-checking." Of course, Google, like Facebook and YouTube, has tried to signal to stakeholders that they are not the arbiters of truth, but rather neutral platforms (Gillespie, 2010), so they employ non-partisan third-party fact-checkers to assess the validity of claims. On a help page for reviewed claims, Google suggests that if a publisher disagrees with one of the reviewed claims, they should know that:

"Reviewed claims are made by publishers that fact check other publishers using the Fact Check markup and have been algorithmically determined to be authoritative. If a publisher believes a reviewed claim is incorrect, Google recommends they contact the fact-checker that wrote the review. Publishers can also use the feedback link in the Knowledge Panel to report claims they believe are inaccurate."[19]

[18] https://dailycaller.com/2018/01/09/googles-new-fact-check-feature-almost-exclusively-targets-conservative-sites/
[19] https://support.google.com/websearch/answer/7568277?p=news_publishers_kp&visit_id=637318238132485391-326966634&rd=1

Despite Google's documentation, many publishers were either unaware of the distinction between Google featuring third-party fact-checks and fact-checking themselves, or did not find the distinction meaningful, as evidenced by their framing the story around anti-conservative bias and referring to "Google's fact-checking."

Google explained to the Daily Caller and Poynter that the incorrect stories listed in the Daily Caller's RC tab were due to a bug in a Google algorithm. By casting responsibility onto the algorithm, there is an implication that the mistake was accidental and not related to partisanship. It also raises questions, given that Google had previously directed publishers disputing fact-checks to fact-checking organizations rather than to itself.

While we did not find any evidence that Google was trying to target conservative voices, saying that there was a bug in the algorithm is not a satisfactory explanation of the breakdown. With this public-facing explanation that does not match their documentation, there is almost no transparency for stakeholders into what went wrong. If the problem is, in fact, a bug in their algorithm, then it may be reasonable to say that publishers' problem with reviewed claims was in fact with "Google's automated fact-checking" rather than with whom Google fact-checks.

Gizmodo, a technology website that did not have a RC tab but commented on the controversy, places the cause of the breakdown on Google's lack of transparency:

"...transparency is a problem for Google's reviewed claims widget. Google never likes talking about the way its algorithms make decisions, and a spokesperson for the company declined to comment when asked how the decision is made to display the panel on some outlets' knowledge panels and not others...Additionally, the knowledge panel has a feedback button at the bottom that allows anyone to flag false information...But this feature raises the question of who reviews the feedback submissions. Google didn't want to answer this question...the fact that Google isn't saying how this works will just fuel more suspicion."

The messaging from Google may have led some publishers to place the blame on allegedly "partisan" fact-checkers. Few publishers dismissed fact-checking organizations as a whole, but rather criticized specific fact-checking organizations. The fact-checking organizations were critiqued for employing "rabid anti-Trumpers," being "ideologically motivated," or having a "liberal bent." Snopes, PolitiFact, and Climate Feedback were the most criticized fact-checking organizations. While these attacks on fact-checking organizations did not start or end with RC, since some of fact-checkers' core values are transparency and non-partisanship, this obfuscation of the technical implementation of the feature as well as of the source of the breakdown could have backfired on fact-checking organizations (Marwick, 2018, p. 508).

4 Discussion & Conclusion

Fact-checking organizations prize transparency and expertise. Adding claim matching algorithms into fact-checking operations strains both of these values. This does not preclude automated fact-checking from existing, but rather necessitates transparency in process and communication with stakeholders. While conversations between fact-checkers and platforms are happening behind closed doors, fact-checking exists to benefit information seekers. Ensuring that there is public transparency and accountability seems essential to the success of fact-checking initiatives.

4.1 Limitations

The use case, affordances, and limitations of the reviewed claims feature and of our analysis must be acknowledged. Our manual review of every collected reviewed claim revealed multiple examples where we felt confident saying that a reasonable person would not think the fact-check was relevant to the RC-matched source article. We do not want to portray a correctly implemented RC feature as a silver-bullet solution to misinformation on search engines. Moreover, we did not conduct any user studies, so we cannot comment on how beneficial information seekers found (or would find) RC. We want to point out, though, that to read the details of the RC feature (as shown in Figures 1 and 2), one had to click on the tab header "Reviewed Claims" in the Knowledge Panel.

Many Google users may not have been aware that the feature existed. Additionally, we acknowledge the hostility of a number of U.S. and international publishers toward fact-checking. Our close reading of the arguments of the various publishers surfaced arguments that were not supported by evidence.

4.2 Discussion Topics and Open Research Questions

We have attempted to highlight substantive examples that raise credible issues as well as representative complaints about the RC feature and the larger fact-checking ecosystem. Even though the reviewed claims feature is no longer on the Google results page, these questions are still important for the fact-checking ecosystem to discuss. The public-facing nature of RC allows for interrogation by external stakeholders. This is a unique opportunity, as other claim matching tools, such as those developed by Full Fact and the Duke Reporters' Lab, are internal tools still in development (Graves, 2018).

Below are issues raised by publishers that are either understudied in the research community or mismatches between Google's public statements about the feature and publishers' critiques.

Necessity for more transparency: Google was not transparent about 1) the criteria for displaying the RC tab on a publisher's Knowledge Panel and 2) how fact-checks were selected to be included in the RC tab. Additionally, fact-checking partners, like the Washington Post, were seemingly unaware of the feature when asked to respond to the Daily Caller. This lack of transparency inhibits accountability beyond the removal of the RC tab. While the removal of the RC feature perhaps benefited affected publishers, it is unclear if the removal of RC benefited information seekers.

Criteria for inclusion: At best, reviewed claims was a feature to signal to information seekers that a news publisher required a closer look. In an anonymous Google statement to Poynter after the feature was removed, it was revealed that the presence of reviewed claims was calculated based on the ratio between total articles published and fact-checked articles.[20] This means that publishers that publish many articles were less likely to have the RC tab (see the sketch at the end of this section). It also means that the truthiness of a fact-checked claim was not incorporated into the assessment. So, a claim evaluated as true by a fact-checker was treated the same way as a claim rated as "pants on fire." Google accounts for this in their documentation of RC, where they explain that RC does not mean that there is an accuracy problem with the publisher, but unless one has read the documentation closely, this is not the intuitive interpretation of the feature.

Given that fact-checkers use various "truth" scales, it may be difficult to implement a "cutoff" value for inclusion on an RC tab (e.g. "mixture" vs. "unproven", etc.). On the other hand, only 10 fact-checkers were included in the RC tab, which seems like a manageable scale at which to collaborate with individual stakeholders about how to interpret their non-standardized fact-check labels.

The nature of the claim: If an article mentions a fact-checked claim in passing, does it warrant a claim match? The fact-checks on the Reviewed Claims panel appear at the article level (see Figure 2); however, the specific claim is also highlighted within the card, so the unit of review is ambiguous to the information seeker. Publishers like the Federalist argued that a claim mentioned in passing, rather than made as the core argument of the piece, was insufficient to warrant a fact-check.

Moreover, as discussed in Example 2 (Section 3.2.2), there is also the question of whether a fact-check of a deep story or larger narrative can be applied to a specific instance of that story. Deep stories, such as the one that "the Clintons are corrupt", which contains episodes such as Clinton Foundation dealings, Clinton's email server, Clinton influence on the Mueller investigation, Clinton was a rapist, and others, permeate a substantial part of the fact-checks that appear on the reviewed claims tabs across many publishers. In the first example, the Washington Post fact-check that the Daily Caller criticizes starts by evaluating a statement by President Trump and then broadens to discuss the larger narrative of corruption in the Mueller investigation. Google removed that story after the Daily Caller complained, implying that the DC's analysis of the relevance of the claim was correct. The purpose (Nyhan and Reifler, 2015) and epistemology (Amazeen, 2015; Uscinski and Butler, 2013) of fact-checking have been the subject of substantial debate, and transparency and stakeholder buy-in seem important to the success of future features.

[20] https://www.poynter.org/fact-checking/2018/blame-bugs-not-partisanship-for-google-wrongly-appending-a-fact-check-to-the-daily-caller/
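As a closing illustration of the inclusion criterion discussed above, the sketch below implements the rule as Google described it to Poynter: a publisher gets the RC tab when the ratio of fact-checked articles to total published articles crosses some cutoff. The 0.02 cutoff value is entirely hypothetical, since Google never disclosed a threshold, and the rule ignores the ratings themselves, which is exactly the "true is treated like pants on fire" problem noted above.

```python
# Hypothetical reconstruction of the RC inclusion rule described to Poynter
# (footnote [20]): the tab's presence depends only on the ratio of
# fact-checked to published articles. The 0.02 cutoff is an invented placeholder.
def shows_rc_tab(num_fact_checked: int, num_published: int, cutoff: float = 0.02) -> bool:
    return num_published > 0 and num_fact_checked / num_published >= cutoff

# The same five fact-checks trip the rule for a small publisher but not for
# a prolific one, which is why high-volume publishers were less likely to get the tab.
print(shows_rc_tab(num_fact_checked=5, num_published=200))    # True  (ratio 0.025)
print(shows_rc_tab(num_fact_checked=5, num_published=20000))  # False (ratio 0.00025)
```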

References

Madeleine Akrich. 1992. The de-scription of technical objects.

Michelle A. Amazeen. 2015. Revisiting the epistemology of fact-checking. Critical Review, 27(1):1–22.

Michelle A. Amazeen, Emily Thorson, Ashley Muddiman, and Lucas Graves. 2015. A comparison of correction formats: The effectiveness and effects of rating scale versus contextual corrections on misinformation. American Press Institute. Downloaded April 27, 2015.

Virginia Braun and Victoria Clarke. 2006. Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2):77–101.

Taina Bucher. 2017. The algorithmic imaginary: exploring the ordinary affects of Facebook algorithms. Information, Communication & Society, 20(1):30–44.

Tarleton Gillespie. 2010. The politics of 'platforms'. New Media & Society, 12(3):347–364.

Lucas Graves. 2017. Anatomy of a fact check: Objective practice and the contested epistemology of fact checking. Communication, Culture & Critique, 10(3):518–537.

Lucas Graves. 2018. Factsheet: Understanding the promise and limits of automated fact-checking. Reuters Institute for the Study of Journalism.

Naeemul Hassan, Fatma Arslan, Chengkai Li, and Mark Tremayne. 2017. Toward automated fact-checking: Detecting check-worthy factual claims by ClaimBuster. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1803–1812.

Nicole Hemmer. 2016. Messengers of the Right: Conservative Media and the Transformation of American Politics. University of Pennsylvania Press.

Arlie Russell Hochschild. 2016. The ecstatic edge of politics: Sociology and Donald Trump. Contemporary Sociology, 45(6):683–689.

Alice Marwick. 2018. Why do people share fake news? A sociotechnical model of media effects. Georgetown Law Technology Review, 2:474–512.

Deirdre K. Mulligan and Daniel S. Griffin. 2018. Rescripting search to respect the right to truth. Georgetown Law Technology Review.

Brendan Nyhan and Jason Reifler. 2015. Estimating fact-checking's effects. Arlington, VA: American Press Institute.

Francesca Polletta and Jessica Callahan. 2019. Deep stories, nostalgia narratives, and fake news: Storytelling in the Trump era. In Politics of Meaning/Meaning of Politics, pages 55–73. Springer.

Kate Starbird. 2017. Examining the alternative media ecosystem through the production of alternative narratives of mass shooting events on Twitter. In ICWSM, pages 230–239.

Joseph E. Uscinski and Ryden W. Butler. 2013. The epistemology of fact checking. Critical Review, 25(2):162–180.

Xuezhi Wang, Cong Yu, Simon Baumgartner, and Flip Korn. 2018. Relevant document discovery for fact-checking articles. In The Web Conference 2018, pages 525–533.

Sam Wineburg and Sarah McGrew. 2017. Lateral reading: Reading less and learning more when evaluating digital information.

Sam Wineburg, Sarah McGrew, Joel Breakstone, and Teresa Ortega. 2016. Evaluating information: The cornerstone of civic online reasoning. Stanford Digital Repository. Retrieved January 8, 2018.

Amy X. Zhang, Aditya Ranganathan, Sarah Emlen Metz, Scott Appling, Connie Moon Sehat, Norman Gilmore, Nick B. Adams, Emmanuel Vincent, Jennifer Lee, Martin Robbins, et al. 2018. A structured response to misinformation: Defining and annotating credibility indicators in news articles. In Companion Proceedings of The Web Conference 2018, pages 603–612.
