DeClarE: Debunking Fake News and False Claims Using Evidence-Aware Deep Learning


Kashyap Popat(1), Subhabrata Mukherjee(2), Andrew Yates(1), Gerhard Weikum(1)
(1) Max Planck Institute for Informatics, Saarbrücken, Germany
(2) Amazon Inc., Seattle, USA
{kpopat,ayates,[email protected]}, [email protected]

Abstract

Misinformation such as fake news is one of the big challenges of our society. Research on automated fact-checking has proposed methods based on supervised learning, but these approaches do not consider external evidence apart from labeled training instances. Recent approaches counter this deficit by considering external sources related to a claim. However, these methods require substantial feature modeling and rich lexicons. This paper overcomes these limitations of prior work with an end-to-end model for evidence-aware credibility assessment of arbitrary textual claims, without any human intervention. It presents a neural network model that judiciously aggregates signals from external evidence articles, the language of these articles, and the trustworthiness of their sources. It also derives informative features for generating user-comprehensible explanations that make the neural network predictions transparent to the end-user. Experiments with four datasets and ablation studies show the strength of our method.

1 Introduction

Motivation: Modern media (e.g., news feeds, microblogs, etc.) exhibit an increasing fraction of misleading and manipulative content, from questionable claims and "alternative facts" to completely faked news. The media landscape is becoming a twilight zone and battleground. This societal challenge has led to the rise of fact-checking and debunking websites, such as Snopes.com and PolitiFact.com, where people research claims, manually assess their credibility, and present their verdict along with evidence (e.g., background articles, quotations, etc.). However, this manual verification is time-consuming. To keep up with the scale and speed at which misinformation spreads, we need tools to automate this debunking process.

State of the Art and Limitations: Prior work on "truth discovery" (see Li et al. (2016) for a survey) largely focused on structured facts, typically in the form of subject-predicate-object triples, or on social media platforms like Twitter, Sina Weibo, etc. Recently, methods have been proposed to assess the credibility of claims in natural language form (Popat et al., 2017; Rashkin et al., 2017; Wang, 2017), such as news headlines, quotes from speeches, blog posts, etc.

The methods geared for general text input address the problem in different ways. On the one hand, methods like Rashkin et al. (2017) and Wang (2017) train neural networks on labeled claims from sites like PolitiFact.com, providing credibility assessments without any explicit feature modeling. However, they use only the text of the questionable claims and no external evidence or interactions, which provides limited context for credibility analysis. These approaches also do not offer any explanation of their verdicts. On the other hand, Popat et al. (2017) considers external evidence in the form of other articles (retrieved from the Web) that confirm or refute a claim, and jointly assesses the language style (using subjectivity lexicons), the trustworthiness of the sources, and the credibility of the claim. This is achieved via a pipeline of supervised classifiers. On the upside, this method generates user-interpretable explanations by pointing to informative snippets of evidence articles. On the downside, it requires substantial feature modeling and rich lexicons to detect bias and subjectivity in the language style.

Approach and Contribution: To overcome the limitations of the prior works, we present DeClarE(2), an end-to-end neural network model for assessing and explaining the credibility of arbitrary claims in natural-language text form. Our approach combines the best of both families of prior methods. Similar to Popat et al. (2017), DeClarE incorporates external evidence or counter-evidence from the Web as well as signals from the language style and the trustworthiness of the underlying sources. However, our method does not require any feature engineering, lexicons, or other manual intervention. Rashkin et al. (2017) and Wang (2017) also develop end-to-end models, but DeClarE goes far beyond them in terms of considering external evidence and joint interactions between several factors, and also in its ability to generate user-interpretable explanations in addition to highly accurate assessments. For example, given the natural-language input claim "the gun epidemic is the leading cause of death of young African-American men, more than the next nine causes put together" by Hillary Clinton, DeClarE draws on evidence from the Web to arrive at its verdict credible, and returns annotated snippets like the one in Table 6 as explanation. These snippets, which contain evidence in the form of statistics and assertions, are automatically extracted from web articles from sources of varying credibility.

Given an input claim, DeClarE searches for web articles related to the claim. It considers the context of the claim via word embeddings and the (language of) web articles captured via a bidirectional LSTM (biLSTM), while using an attention mechanism to focus on parts of the articles according to their relevance to the claim. DeClarE then aggregates all the information about claim source, web article contexts, attention weights, and trustworthiness of the underlying sources to assess the claim. It also derives informative features for interpretability, like source embeddings that capture trustworthiness and salient words captured via attention. Key contributions of this paper are:

• Model: An end-to-end neural network model which automatically assesses the credibility of natural-language claims, without any hand-crafted features or lexicons.
• Interpretability: An attention mechanism in our model that generates user-comprehensible explanations, making credibility verdicts transparent and interpretable.
• Experiments: Extensive experiments on four datasets and ablation studies, demonstrating the effectiveness of our method over state-of-the-art baselines.

Footnotes:
(1) As fully objective and unarguable truth is often elusive or ill-defined, we use the term credibility rather than "truth".
(2) Debunking Claims with Interpretable Evidence

Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 22–32, Brussels, Belgium, October 31 - November 4, 2018. © 2018 Association for Computational Linguistics.

2 End-to-end Framework for Credibility Analysis

Consider a set of N claims ⟨C_n⟩ from the respective origins/sources ⟨CS_n⟩, where n ∈ [1, N]. Each claim C_n is reported by a set of M articles ⟨A_{m,n}⟩ along with their respective sources ⟨AS_{m,n}⟩, where m ∈ [1, M]. Each corresponding tuple of a claim and its origin, reporting articles, and article sources – ⟨C_n, CS_n, A_{m,n}, AS_{m,n}⟩ – forms a training instance in our setting, along with the credibility label of the claim used as ground truth during network training. Figure 1 gives a pictorial overview of our model. In the following sections, we provide a detailed description of our approach.

2.1 Input Representations

The input claim C_n of length l is represented as [c_1, c_2, ..., c_l], where c_l ∈ ℝ^d is the d-dimensional word embedding of the l-th word in the input claim. The source/origin of the claim CS_n is represented by a d_s-dimensional embedding vector cs_n ∈ ℝ^{d_s}.

A reporting article A_{m,n} consisting of k tokens is represented by [a_{m,n,1}, a_{m,n,2}, ..., a_{m,n,k}], where a_{m,n,k} ∈ ℝ^d is the d-dimensional word embedding vector for the k-th word in the reporting article A_{m,n}. The claim and article word embeddings have shared parameters. The source of the reporting article AS_{m,n} is represented as a d_s-dimensional vector as_{m,n} ∈ ℝ^{d_s}. For the sake of brevity, we drop the notation subscripts n and m in the following sections by considering only a single training instance: the input claim C_n from source CS_n, the corresponding article A_{m,n}, and its source AS_{m,n}, given by ⟨C, CS, A, AS⟩.

2.2 Article Representation

To create a representation of an article, which may capture task-specific features such as whether it contains objective language, we use a bidirectional Long Short-Term Memory (LSTM) network as proposed by Graves et al. (2005). A basic LSTM cell consists of various gates to control the flow of information through timesteps in a sequence, making LSTMs suitable for capturing long and short range dependencies in text that may be difficult to capture with standard recurrent neural networks (RNNs). Given an input sequence of token word embeddings ⟨a_k⟩, an LSTM cell performs various non-linear transformations to generate a hidden vector state h_k for each token at each timestep k.

Figure 1: Framework for credibility assessment. The upper part of the pipeline combines the article and claim embeddings to get the claim-specific attention weights. The lower part of the pipeline captures the article representation through a biLSTM. The attention-focused article representation, along with the source embeddings, is passed through dense layers to predict the credibility score of the claim.

We use bidirectional LSTMs in place of standard LSTMs. Bidirectional LSTMs capture both the previous timesteps (past features) and the future timesteps (future features) via forward and backward states, respectively. Correspondingly, there are two hidden states that capture past and future information; these are concatenated to form the final output: h_k = [→h_k ; ←h_k].

The overall representation of the claim is obtained by averaging the word embeddings of all words therein:

    c̄ = (1/l) Σ_l c_l

We combine this overall representation of the claim with each article term:
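The bidirectional concatenation h_k = [→h_k ; ←h_k] can be sketched in plain Python. The scalar `step` recurrence below is an illustrative placeholder for the gated LSTM update, and all names are hypothetical; the point is only how the forward and backward passes are aligned per token and concatenated:

```python
def rnn_states(embeddings, step):
    # Run a simple recurrence left-to-right, producing one hidden state per token.
    h = 0.0
    states = []
    for x in embeddings:
        h = step(h, x)
        states.append(h)
    return states


def bidirectional_states(embeddings, step):
    # h_k = (forward_h_k, backward_h_k): the backward pass runs over the
    # reversed sequence, and its states are re-reversed so that index k of
    # both passes refers to the same token before concatenation.
    fwd = rnn_states(embeddings, step)
    bwd = list(reversed(rnn_states(list(reversed(embeddings)), step)))
    return list(zip(fwd, bwd))
```

With a toy update `step = lambda h, x: 0.5 * h + x`, each output pair holds the past-aware and future-aware state for the same token, mirroring the concatenated biLSTM output described above.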
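The claim-averaging step, and the way an averaged claim can be used to weight article tokens, can be sketched in pure Python. This is a simplified stand-in, assuming inner-product scoring followed by a softmax in place of the paper's learned attention layer; all function names are illustrative:

```python
import math


def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]


def dot(u, v):
    return sum(a * b for a, b in zip(u, v))


def claim_average(claim_embeddings):
    # c_bar = (1/l) * sum_l c_l: mean of the claim's word embeddings.
    l = len(claim_embeddings)
    d = len(claim_embeddings[0])
    return [sum(c[i] for c in claim_embeddings) / l for i in range(d)]


def attention_weights(claim_embeddings, article_embeddings):
    # Score each article token against the averaged claim, then normalize,
    # so tokens relevant to the claim receive higher weight.
    c_bar = claim_average(claim_embeddings)
    scores = [dot(c_bar, a_k) for a_k in article_embeddings]
    return softmax(scores)


def attended_representation(weights, hidden_states):
    # Attention-weighted combination of per-token hidden states,
    # yielding a single claim-focused article vector.
    d = len(hidden_states[0])
    return [sum(w * h[i] for w, h in zip(weights, hidden_states))
            for i in range(d)]
```

The weights sum to one, so the attended representation stays in the same space as the per-token hidden states; in the full model this vector would then be concatenated with the source embeddings and passed through dense layers, as the figure caption describes.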