Vulnerable to Misinformation? Verifi!

Alireza Karduni, Isaac Cho, Ryan Wesslen, Sashank Santhanam
UNC-Charlotte, Department of Computer Science, Charlotte, North Carolina
{akarduni,icho1,rwesslen,ssantha1}@uncc.edu

Svitlana Volkova, Dustin L Arendt
Pacific Northwest National Laboratory, Richland, Washington
{svitlana.volkova,dustin.arendt}@pnnl.gov

Samira Shaikh, Wenwen Dou
UNC-Charlotte, Department of Computer Science, Charlotte, North Carolina
{sshaikh2,wdou1}@uncc.edu

ABSTRACT

We present Verifi2, a visual analytic system to support the investigation of misinformation on social media. Various models and studies have emerged from multiple disciplines to detect or understand the effects of misinformation. However, there is still a lack of intuitive and accessible tools that help social media users distinguish misinformation from verified news. Verifi2 uses state-of-the-art computational methods to highlight linguistic, network, and image features that can distinguish suspicious news accounts. By exploring news on a source and document level in Verifi2, users can interact with the complex dimensions that characterize misinformation and contrast how real and suspicious news outlets differ on these dimensions. To evaluate Verifi2, we conduct interviews with experts in digital media, communications, education, and psychology who study misinformation. Our interviews highlight the complexity of the problem of combating misinformation and show promising potential for Verifi2 as an educational tool on misinformation.

CCS CONCEPTS

• Human-centered computing → User interface design; Social media; Visual analytics.

KEYWORDS

Misinformation, Social Media, Visual Analytics, Fake News

ACM Reference Format:
Alireza Karduni, Isaac Cho, Ryan Wesslen, Sashank Santhanam, Svitlana Volkova, Dustin L Arendt, Samira Shaikh, and Wenwen Dou. 2019. Vulnerable to Misinformation? Verifi!. In 24th International Conference on Intelligent User Interfaces (IUI '19), March 17–20, 2019, Marina del Ray, CA, USA. ACM, New York, NY, USA, Article 4, 12 pages. https://doi.org/10.1145/3301275.3302320

1 INTRODUCTION

The rise of misinformation online has a far-reaching impact on the lives of individuals and on our society as a whole. Researchers and practitioners from multiple disciplines, including political science, psychology, and computer science, are grappling with the effects of misinformation and devising means to combat it [14, 44, 52, 53, 63]. Although hardly a new phenomenon, both anecdotal evidence and systematic studies that emerged recently have demonstrated real consequences of misinformation on people's attitudes and actions [2, 46]. There are multiple factors contributing to the widespread prevalence of misinformation online. On the content generation end, misleading or fabricated content can be created by actors with malicious intent. In addition, the spread of such misinformation can be aggravated by social bots. On the receiving end, cognitive biases that humans exhibit, including confirmation bias, together with the echo chamber effect on social media platforms, make individuals more susceptible to misinformation.

Social media platforms have become a place for many to consume news. A Pew Research survey on "Social Media Use in 2018" [4] estimates that a majority of Americans use Facebook (68%) and YouTube (73%). In fact, 88% of younger adults (between the ages of 18 and 29) indicate that they visit some form of social media daily. However, despite being social media savvy, younger adults are alarmingly vulnerable when it comes to determining the quality of information online. According to the study by the Stanford History Education Group, the ability of younger generations, ranging from middle school to college students who grew up with the internet and social media, to judge the credibility of information online is bleak [63].
The abundance of online information is a double-edged sword: "[it] can either make us smarter and better informed or more ignorant and narrow-minded". The authors of the report further emphasized that the outcome "depends on our awareness of the problem and our educational response to it".

Addressing the problem of misinformation requires raising awareness of a complex and diverse set of factors that are not always visible in a piece of news itself, but in its contextual information. To this aim, we propose Verifi2, a visual analytic system that enables users to explore news in an informed way by presenting a variety of factors that contribute to its veracity, including the source's usage of language, social interactions, and topical focus. The design of Verifi2 is informed by recent studies on misinformation [32, 63], computational results on features contributing to the identification of misinformation on Twitter [61], and empirical results from user experiments [5].

Our work makes the following contributions:

• Verifi2 is one of the first visual interfaces that present features shown to separate real vs. suspicious news [61], thus raising awareness of multiple features that can inform the evaluation of news account veracity.
• To help users better study the social interactions of different news sources, Verifi2 introduces a new social network visualization that simultaneously presents accounts' direct interactions and their topical similarities (see the sketch after this list).
• Verifi2 leverages state-of-the-art computational methods to support the investigation of misinformation by enabling the comparison of how real and suspicious news accounts differ on language use, social network connections, entity mentions, and image postings.
• We provide usage scenarios illustrating how Verifi2 supports reasoning about misinformation on Twitter. We also provide feedback from experts in education/library science, psychology, and communication studies as they discuss how they envision using such a system within their work on battling misinformation and the issues they foresee.
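To make the dual-relation idea in the second contribution concrete, the following is a minimal sketch, assuming hypothetical toy data and off-the-shelf libraries (networkx for the graph, scikit-learn for TF-IDF and cosine similarity). The account names, the similarity threshold, and the edge attributes are illustrative assumptions, not the Verifi2 implementation.

```python
# Hypothetical sketch (not the Verifi2 implementation): combine two kinds of
# relations between news accounts in one graph, so a single layout can show
# direct interactions and topical similarity at the same time.
from itertools import combinations

import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy inputs (hypothetical): tweets aggregated per account, plus observed
# direct interactions such as mentions or retweets between accounts.
account_texts = {
    "outlet_a": "election results county officials certified votes",
    "outlet_b": "officials certify election results amid recount calls",
    "outlet_c": "celebrity scandal shocking photos you will not believe",
}
direct_interactions = [("outlet_a", "outlet_b"), ("outlet_b", "outlet_c")]

G = nx.Graph()
G.add_nodes_from(account_texts)

# Edge type 1: direct interactions between accounts.
for src, dst in direct_interactions:
    G.add_edge(src, dst, kind="interaction")

# Edge type 2: topical similarity from TF-IDF cosine similarity of each
# account's aggregated text, kept only above a (hypothetical) threshold.
accounts = list(account_texts)
tfidf = TfidfVectorizer().fit_transform(account_texts[a] for a in accounts)
sims = cosine_similarity(tfidf)
THRESHOLD = 0.2
for i, j in combinations(range(len(accounts)), 2):
    if sims[i, j] >= THRESHOLD:
        a, b = accounts[i], accounts[j]
        if G.has_edge(a, b):
            G[a][b]["similarity"] = float(sims[i, j])  # annotate existing edge
        else:
            G.add_edge(a, b, kind="topical", similarity=float(sims[i, j]))

# A front end could style the two edge kinds differently in one layout.
for u, v, data in G.edges(data=True):
    print(u, v, data)
```

Keeping both relation kinds as attributes on a single graph is one simple way to let a visualization draw interaction edges and topical-similarity edges in the same layout, which is the idea this contribution describes.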
2 PRODUCTION AND IDENTIFICATION OF MISINFORMATION

Misinformation and its many related concepts (disinformation, malinformation, falsehoods, propaganda, etc.) have become a topic of great interest due to their effect on political and societal processes of countries around the world in recent years [29, 40, 50]. However, it is hardly a new phenomenon. At various stages in history, societies have dealt with propaganda and misinformation coming from various sources, including governments, influential individuals, mainstream news media, and, more recently, social media platforms [12]. It has been shown that misinformation indeed has some agenda-setting power with respect to partisan or non-partisan mainstream news accounts and can even change political outcomes [13, 59]. Misinformation is a broad and loosely defined concept, and it consists of various types of [...] existing methods for identifying, detecting, and combating misinformation.

2.1 Production of misinformation

How and why misinformation is created is a complex topic, and studying it requires careful consideration of the type and intent of misinformation outlets, as well as the different strategies they use to create and propagate "effective" articles. Several intents and reasons can be noted for why misinformation exists. These include economic factors (e.g., generating revenue through internet advertisements and clicks), ideological factors (e.g., partisan accounts focusing on ideology rather than truth), and political factors (e.g., government-produced propaganda, or meddling with other countries' political processes) [11, 57, 58]. Driven by these intents, certain news outlets use different strategies to produce and diffuse misinformation.

News media that produce misinformation often follow the topics and agendas of larger mainstream partisan news media, but in some instances they have the potential to set the agenda for mainstream news accounts [59]. Furthermore, they tend to focus narrowly on specific types of information that can manipulate specific populations. Often, we can observe extreme ideological biases towards specific political parties, figures, or ideologies [11, 55]. Some outlets take advantage of the psychological climate of audiences by focusing on fearful topics or inciting anger and outrage in audiences [11]. Moreover, misinformation accounts tend to take advantage of the cognitive framing of the audience by focusing on specific moral visions that certain groups of people adhere to [11, 18, 30]. They also filter information to focus on specific subsets of news, often focusing on specific political figures or places [8, 9].

Misinformation is not always present just in text. Biases and news framing are often presented in images rather than text to convey messages [21]. The power of images is that they can contain implicit visual propositioning [7]. Images have been used to convey racial stereotypes and are powerful tools to embed ideological messages [...]
