Fake News, Filter Bubbles, Post-Truth and Trust

Total Pages: 16

File Type: PDF, Size: 1020 KB

Fake news, filter bubbles, post-truth and trust: a study across 27 countries

FAKE NEWS, POST-TRUTH AND FILTER BUBBLES

"The average person in [COUNTRY] lives in their own 'bubble' on the internet, mostly connecting with people like themselves and looking for opinions they already agree with." (% agree / % disagree)
Total: 65% / 25%
1. US: 77% / 14%
2. India: 74% / 21%
3. Malaysia: 72% / 23%
4. Sweden: 71% / 19%
5. Turkey: 71% / 24%
6. Australia: 70% / 19%
7. Great Britain: 70% / 19%
8. Mexico: 70% / 25%
9. Canada: 69% / 20%
10. Serbia: 69% / 24%
11. Argentina: 68% / 25%
12. Chile: 67% / 29%
13. Hungary: 67% / 19%
14. South Africa: 67% / 26%
15. Peru: 66% / 30%
16. South Korea: 66% / 22%
17. Poland: 64% / 22%
18. Belgium: 62% / 22%
19. Spain: 62% / 24%
20. China: 61% / 27%
21. Brazil: 60% / 30%
22. Russia: 60% / 30%
23. Saudi Arabia: 59% / 29%
24. France: 57% / 25%
25. Germany: 55% / 28%
26. Italy: 50% / 37%
27. Japan: 44% / 35%
Base: 13,500 adults across 27 countries (500 in Great Britain)

"I live in my own bubble on the internet, mostly connecting with people like myself and looking for opinions I already agree with." (% agree / % disagree)
Total: 34% / 56%
1. India: 56% / 39%
2. China: 54% / 34%
3. Malaysia: 53% / 43%
4. Saudi Arabia: 49% / 39%
5. South Korea: 44% / 45%
6. Peru: 40% / 53%
7. Brazil: 38% / 52%
8. South Africa: 37% / 58%
9. Turkey: 37% / 58%
10. Chile: 34% / 60%
11. Poland: 34% / 55%
12. Australia: 32% / 57%
13. Hungary: 32% / 57%
14. Russia: 32% / 62%
15. US: 32% / 56%
16. Canada: 31% / 61%
17. Mexico: 31% / 64%
18. Belgium: 30% / 57%
19. Great Britain: 30% / 61%
20. Spain: 30% / 57%
21. Serbia: 29% / 64%
22. Japan: 27% / 53%
23. France: 25% / 59%
24. Italy: 25% / 62%
25. Argentina: 23% / 70%
26. Sweden: 23% / 71%
27. Germany: 22% / 64%
Base: 13,500 adults across 27 countries (500 in Great Britain)

We're much more likely to think others live in a filter bubble than admit we do. This chart compares what people say about others with what they say about themselves on how much they live in a bubble, from the previous two charts, and shows how many more people, in percentage points, think that others live in an internet bubble than admit it for themselves (a minimal computation sketch follows these tables).
Net gap (others minus self), Total: +31
1. Sweden: +48
2. Argentina: +45
3. US: +45
4. Great Britain: +40
5. Serbia: +40
6. Mexico: +39
7. Australia: +38
8. Canada: +38
9. Hungary: +35
10. Turkey: +34
11. Germany: +33
12. Chile: +33
13. Belgium: +32
14. Spain: +32
15. France: +32
16. South Africa: +30
17. Poland: +30
18. Russia: +28
19. Peru: +26
20. Italy: +25
21. South Korea: +22
22. Brazil: +22
23. Malaysia: +19
24. India: +18
25. Japan: +17
26. Saudi Arabia: +10
27. China: +7

"I am confident that the average person in [COUNTRY] can tell real news from 'fake news'." (% agree / % disagree)
Total: 41% / 48%
1. Hungary: 69% / 23%
2. Malaysia: 60% / 34%
3. Saudi Arabia: 60% / 29%
4. China: 58% / 34%
5. India: 55% / 39%
6. Spain: 54% / 27%
7. Peru: 46% / 48%
8. Chile: 44% / 50%
9. Mexico: 44% / 51%
10. Serbia: 44% / 49%
11. Russia: 40% / 49%
12. Turkey: 40% / 56%
13. Canada: 39% / 48%
14. Poland: 39% / 47%
15. Brazil: 38% / 51%
16. France: 38% / 45%
17. Argentina: 37% / 53%
18. South Korea: 37% / 48%
19. Australia: 36% / 51%
20. South Africa: 34% / 59%
21. Belgium: 32% / 53%
22. Germany: 29% / 53%
23. US: 29% / 62%
24. Great Britain: 28% / 59%
25. Italy: 27% / 61%
26. Japan: 26% / 52%
27. Sweden: 26% / 64%
Base: 13,500 adults across 27 countries (500 in Great Britain)

"I am confident that I can tell real news from 'fake news' (entirely made up stories or facts)." (% agree / % disagree)
Total: 63% / 25%
1. Turkey: 77% / 19%
2. Chile: 76% / 20%
3. Peru: 76% / 21%
4. Serbia: 76% / 15%
5. Malaysia: 74% / 20%
6. South Africa: 73% / 20%
7. Argentina: 72% / 19%
8. Mexico: 72% / 23%
9. India: 70% / 25%
10. Saudi Arabia: 70% / 18%
11. Poland: 69% / 17%
12. Brazil: 68% / 22%
13. Hungary: 68% / 23%
14. Australia: 67% / 19%
15. Great Britain: 66% / 21%
16. US: 65% / 24%
17. Canada: 64% / 24%
18. Sweden: 64% / 25%
19. China: 63% / 26%
20. Belgium: 60% / 27%
21. Russia: 58% / 31%
22. France: 54% / 28%
23. Italy: 52% / 37%
24. Germany: 47% / 31%
25. South Korea: 45% / 36%
26. Spain: 39% / 38%
27. Japan: 30% / 47%
Base: 13,500 adults across 27 countries (500 in Great Britain)

I can identify fake news, but the rest of my country can't. This chart compares what people say about others with what they say about themselves on being able to identify fake news, from the previous two charts, and shows how many more people think the average person can't identify fake news compared with themselves; the gap is positive everywhere except Hungary and Spain.
Net gap (self minus others), Total: +22
1. South Africa: +39
2. Great Britain: +38
3. Sweden: +38
4. Turkey: +37
5. US: +36
6. Argentina: +35
7. Chile: +32
8. Serbia: +32
9. Australia: +31
10. Brazil: +30
11. Peru: +30
12. Poland: +30
13. Belgium: +28
14. Mexico: +28
15. Canada: +25
16. Italy: +25
17. Germany: +18
18. Russia: +18
19. France: +16
20. India: +15
21. Malaysia: +14
22. Saudi Arabia: +10
23. South Korea: +8
24. China: +5
25. Japan: +4
26. Hungary: -1
27. Spain: -15

"I think I'm better at spotting 'fake news' than the average person in [COUNTRY]." (% agree / % disagree)
Total: 58% / 28%
1. Turkey: 73% / 23%
2. Peru: 72% / 23%
3. Chile: 69% / 23%
4. Hungary: 69% / 19%
5. India: 69% / 24%
6. Serbia: 68% / 17%
7. Saudi Arabia: 66% / 21%
8. Malaysia: 65% / 26%
9. South Africa: 65% / 26%
10. Mexico: 64% / 29%
11. Brazil: 62% / 28%
12. Poland: 60% / 22%
13. Argentina: 59% / 28%
14. Australia: 59% / 23%
15. US: 59% / 27%
16. Russia: 58% / 31%
17. Great Britain: 56% / 24%
18. Canada: 54% / 28%
19. Sweden: 54% / 25%
20. Spain: 53% / 26%
21. Belgium: 52% / 29%
22. South Korea: 50% / 31%
23. China: 49% / 36%
24. Italy: 49% / 41%
25. France: 47% / 33%
26. Germany: 43% / 34%
27. Japan: 30% / 47%
Base: 13,500 adults across 27 countries (500 in Great Britain)

"The average person in [COUNTRY] doesn't care about facts about politics and society anymore, they just believe what they want." (% agree / % disagree)
Total: 60% / 30%
1. Peru: 71% / 25%
2. Serbia: 70% / 25%
3. Turkey: 69% / 27%
4. US: 68% / 23%
5. Argentina: 67% / 26%
6. Great Britain: 66% / 24%
7. India: 66% / 30%
8. Chile: 65% / 32%
9. Australia: 64% / 25%
10. Hungary: 64% / 24%
11. Mexico: 64% / 33%
12. Poland: 64% / 23%
13. Belgium: 62% / 26%
14. Germany: 62% / 24%
15. South Africa: 61% / 35%
16. Canada: 60% / 29%
17. Sweden: 60% / 31%
18. Malaysia: 58% / 38%
19. Spain: 56% / 30%
20. Brazil: 55% / 36%
21. France: 54% / 27%
22. Russia: 53% / 39%
23. South Korea: 53% / 37%
24. Saudi Arabia: 50% / 36%
25. China: 49% / 44%
26. Japan: 49% / 30%
27. Italy: 48% / 44%
Base: 13,000 adults across 26 countries (500 in Great Britain)

"I am confident I have a better understanding of social realities like immigration levels and crime rates than the average person in [COUNTRY]." (% agree / % disagree)
Total: 59% / 29%
1. Turkey: 76% / 20%
2. India: 75% / 20%
3. Chile: 74% / 22%
4. Peru: 72% / 24%
5. Mexico: 71% / 24%
6. China: 68% / 21%
7. Malaysia: 68% / 25%
8. Argentina: 67% / 23%
9. Serbia: 66% / 23%
10. South Africa: 66% / 27%
11. Brazil: 65% / 25%
12. Saudi Arabia: 64% / 21%
13. Poland: 60% / 23%
14. Great Britain: 58% / 27%
15. Australia: 57% / 28%
16. Spain: 57% / 24%
17. Hungary: 56% / 32%
18. US: 55% / 33%
19. Sweden: 52% / 35%
20. Canada: 51% / 35%
21. Belgium: 50% / 34%
22. Russia: 50% / 36%
23. Germany: 48% / 30%
24. South Korea: 48% / 34%
25. Italy: 47% / 38%
26. France: 42% / 38%
27. Japan: 28% / 51%
Base: 13,500 adults across 27 countries (500 in Great Britain)

FAKE NEWS EXPOSURE AND MEANING

"How often, if at all, do you think you see stories where news organisations have deliberately said something that isn't true?" (% very/fairly often / % not very often/never)
Total: 60% / 28%
1. Argentina: 82% / 16%
2. Serbia: 79% / 14%
3. Turkey: 79% / 17%
4. Mexico: 77% / 21%
5. Brazil: 73% / 19%
6. India: 72% / 24%
7. Peru: 72% / 25%
8. Hungary: 71% / 17%
9. Malaysia: 69% / 24%
10. South Africa: 68% / 26%
11. Chile: 66% / 31%
12. Poland: 65% / 22%
13. US: 61% / 26%
14. Italy: 59% / 28%
15. Russia: 59% / 30%
16. Australia: 57% / 29%
17. France: 54% / 26%
18. Great Britain: 54% / 30%
19. Saudi Arabia: 51% / 32%
20. Spain: 51% / 30%
21. Belgium: 49% / 36%
22. Canada: 48% / 36%
23. Sweden: 44% / 42%
24. China: 43% / 47%
25. South Korea: 39% / 52%
26. Japan: 36% / 41%
27. Germany: 30% / 42%
Base: 13,500 adults across 27 countries (500 in Great Britain)

"I have falsely believed a news story was real until I found out it was fake." (% agree / % disagree)
Total: 48% / 40%
1. Brazil: 62% / 29%
2. Saudi Arabia: 58% / 29%
3. South Korea: 58% / 29%
4. Peru: 57% / 39%
5. Spain: 57% / 29%
6. China: 56% / 34%
7. India: 55% / 37%
8. Poland: 55% / 33%
9. Sweden: 55% / 29%
10. Chile: 54% / 41%
11. Mexico: 54% / 43%
12. Argentina: 51% / 41%
13. Malaysia: 50% / 45%
14. South Africa: 50% / 41%
15. Russia: 49% / 39%
16. Canada: 48% / 39%
17. Serbia: 48% / 39%
18. Australia: 46% / 39%
19. US: 46% / 42%
20. Belgium: 45% / 39%
21. Germany: 45% / 38%
22. France: 43% / 40%
23. Hungary: 35% / 51%
24. Japan: 34% / 46%
25. Great Britain: 33% / 54%
26. Turkey: 33% / 63%
27. Italy: 29% / 62%
Base: 13,500 adults across 27 countries (500 in Great Britain)
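The two "net" charts above are simple percentage-point differences between the "others" question and the "self" question. A minimal sketch of that arithmetic (not Ipsos's methodology; figures for three example countries are copied from the bubble tables above):

```python
# Net gap = share who say it of the average person, minus the share who
# say it of themselves. Values copied from the two bubble charts above.

others_agree = {"Sweden": 71, "US": 77, "China": 61}  # "the average person lives in a bubble"
self_agree = {"Sweden": 23, "US": 32, "China": 54}    # "I live in my own bubble"

for country, others in others_agree.items():
    net = others - self_agree[country]
    print(f"{country}: {net:+d}")  # Sweden: +48, US: +45, China: +7
```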
Recommended publications
  • Bursting the Filter Bubble
    BURSTING THE FILTER BUBBLE: DEMOCRACY, DESIGN, AND ETHICS. Dissertation for the degree of doctor at the Technische Universiteit Delft, under the authority of the Rector Magnificus, Prof. ir. K. C. A. M. Luyben, chair of the Board for Doctorates, to be defended in public on Wednesday 16 September 2015 at 10:00 by Engin BOZDAĞ, Master of Science in Computer Science, born in Malatya, Turkey. This dissertation has been approved by the promotors, Prof. dr. M.J. van den Hoven and Prof. dr. ir. I.R. van de Poel, and the copromotor, Dr. M.E. Warnier. Composition of the doctoral committee: Rector Magnificus (chair); Prof. dr. M.J. van den Hoven, Technische Universiteit Delft (promotor); Prof. dr. ir. I.R. van de Poel, Technische Universiteit Delft (promotor); Dr. M.E. Warnier, Technische Universiteit Delft (copromotor). Independent members: Dr. C. Sandvig, Michigan State University, USA; Prof. dr. M. Binark, Hacettepe University, Turkey; Prof. dr. R. Rogers, Universiteit van Amsterdam; Prof. dr. A. Hanjalic, Technische Universiteit Delft; Prof. dr. ir. M.F.W.H.A. Janssen, Technische Universiteit Delft (reserve member). Printed by CPI Koninklijke Wöhrmann. Cover design: Özgür Taylan Gültekin. E-mail: [email protected], WWW: http://www.bozdag.nl. Copyright © 2015 by Engin Bozdağ. All rights reserved. No part of the material protected by this copyright notice may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording or by any information storage and retrieval system, without written permission of the author. An electronic version of this dissertation is available at http://repository.tudelft.nl/. PREFACE: For Philip Serracino Inglott, for his passion and dedication to Information Ethics. Rest in Peace.
  • A Longitudinal Analysis of YouTube's Promotion of Conspiracy Videos
    A longitudinal analysis of YouTube's promotion of conspiracy videos. Marc Faddoul, Guillaume Chaslot, and Hany Farid. Abstract: Conspiracy theories have flourished on social media, raising concerns that such content is fueling the spread of disinformation, supporting extremist ideologies, and in some cases, leading to violence. Under increased scrutiny and pressure from legislators and the public, YouTube announced efforts to change their recommendation algorithms so that the most egregious conspiracy videos are demoted and demonetized. To verify this claim, we have developed a classifier for automatically determining if a video is conspiratorial (e.g., the moon landing was faked, the pyramids of Giza were built by aliens, end of the world prophecies, etc.). We coupled this classifier with an emulation of YouTube's watch-next algorithm on more than a thousand popular informational channels to obtain a year-long picture of the videos actively promoted by YouTube. We also obtained trends of the so-called filter-bubble effect for conspiracy theories. Keywords: Online Moderation, Disinformation, Algorithmic Transparency, Recommendation Systems. Introduction: By allowing for a wide range of opinions to coexist, social media has allowed for an open exchange of ideas. There have, however, been concerns that the recommendation engines which power these services amplify sensational content because of its tendency to generate more engagement. (…) Although view-time might not be the only metric driving the recommendation algorithms, YouTube has not fully explained what the other factors are, or their relative contributions. It is unarguable, nevertheless, that keeping users engaged remains the main driver for YouTube's advertising revenues; and while recommendations may span a spectrum, users preferably engage with content (…)
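The pairing of a classifier with a "watch-next" emulation that the abstract describes can be pictured with a toy sketch like the following; the recommendation graph, the stub classifier, and the function names are hypothetical stand-ins, not the authors' actual pipeline:

```python
# Toy watch-next crawl: follow the top recommendation from a seed video
# and record how often a (stub) conspiracy classifier flags what gets
# recommended. All video ids and mappings here are hypothetical.

WATCH_NEXT = {  # video id -> top recommended video id
    "seed_news": "moon_hoax",
    "moon_hoax": "flat_earth",
    "flat_earth": "moon_hoax",
    "seed_science": "lecture",
    "lecture": "seed_news",
}

def is_conspiratorial(video_id):
    """Stand-in for the trained classifier described in the abstract."""
    return video_id in {"moon_hoax", "flat_earth"}

def crawl(seed, depth):
    """Fraction of the followed recommendations flagged as conspiratorial."""
    flagged, current = 0, seed
    for _ in range(depth):
        current = WATCH_NEXT[current]
        flagged += is_conspiratorial(current)
    return flagged / depth

print(crawl("seed_science", depth=4))  # 0.5 with this toy graph
```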
  • Combating Fake News with Interpretable News Feed Algorithms (arXiv:1811.12349v2 [cs.SI], 4 Dec 2018)
    Combating Fake News with Interpretable News Feed Algorithms. Sina Mohseni (Texas A&M University, College Station, TX; [email protected]) and Eric D. Ragan (University of Florida, Gainesville, FL; eragan@ufl.edu). Abstract: Nowadays, artificial intelligence algorithms are used for targeted and personalized content distribution at large scale as part of the intense competition for attention in the digital media environment. Unfortunately, targeted information dissemination may result in intellectual isolation and discrimination. Further, as demonstrated in recent political events in the US and EU, malicious bots and social media users can create and propagate targeted "fake news" content in different forms for political gains. From the other direction, fake news detection algorithms attempt to combat such problems by identifying misinformation and fraudulent user profiles. This paper reviews common news feed algorithms as well as methods for fake news detection, and we discuss how news feed algorithms could be misused to promote falsified content, affect news diversity, or impact credibility. (…) The implications of personalized data tracking for the dissemination and consumption of news have caught the attention of many, especially given evidence of the influence of malicious social media accounts on the spread of fake news to bias users during the 2016 US election (Bessi and Ferrara 2016). Recent reports show that social media outperforms television as the primary news source (Allcott and Gentzkow 2017), and the targeted distribution of erroneous or misleading "fake news" may have resulted in large-scale manipulation and isolation of users' news feeds as part of the intense competition for attention in the digital media space (Kalogeropoulos and Nielsen 2018). Although online information platforms are replacing the conventional news sources, personalized news feed algorithms (…)
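What "interpretable" buys in this setting can be sketched with a transparent linear scorer whose weights are inspectable, so every flag can be explained; the features, weights, and function names below are illustrative assumptions, not the paper's model:

```python
# Illustrative-only sketch of an interpretable fake-news score: a linear
# model with human-readable features, so each flagged article can be
# explained by listing which features fired and their contributions.

FEATURE_WEIGHTS = {
    "all_caps_headline": 1.5,
    "unverified_source": 2.0,
    "emotional_language": 1.0,
    "cites_named_experts": -1.5,  # negative weight lowers the score
}

def score(features):
    """Total fake-news score for an article's boolean feature dict."""
    return sum(w for f, w in FEATURE_WEIGHTS.items() if features.get(f))

def explain(features):
    """The interpretability payoff: each active feature and its weight."""
    return [(f, w) for f, w in FEATURE_WEIGHTS.items() if features.get(f)]

article = {"all_caps_headline": True, "unverified_source": True}
print(score(article))    # 3.5
print(explain(article))  # [('all_caps_headline', 1.5), ('unverified_source', 2.0)]
```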
  • Voorgeprogrammeerd
    Voorgeprogrammeerd: Hoe internet ons leven leidt (Pre-programmed: how the internet guides our lives). Edited by Christian van 't Hof, Jelte Timmer and Rinie van Est. Boom Lemma uitgevers, The Hague, 2012. Cover design: Textcetera, The Hague. Cover photo: Shutterstock. Interior layout: Textcetera, The Hague. © 2012 Rathenau | Boom Lemma uitgevers. Except for the exceptions provided in or pursuant to the Dutch Copyright Act, no part of this publication may be reproduced, stored in an automated retrieval system, or made public, in any form or by any means, electronic, mechanical, by photocopying, recording or otherwise, without the prior written permission of the publisher. Insofar as reprographic reproduction from this publication is permitted under Article 16h of the Copyright Act, the statutory fees must be paid to Stichting Reprorecht (Postbus 3051, 2130 KB Hoofddorp, www.reprorecht.nl). For the inclusion of (a) portion(s) of this publication in anthologies, readers and other compilation works (Art. 16 Copyright Act), contact Stichting PRO (Stichting Publicatie- en Reproductierechten Organisatie, Postbus 3060, 2130 KB Hoofddorp, www.cedar.nl/pro). No part of this book may be reproduced in any form, by print, photoprint, microfilm or any other means without written permission from the publisher. ISBN 978-90-5931-797-0, NUR 800. www.boomlemma.nl, www.rathenau.nl. Foreword: The Rathenau Instituut stimulates discussion about technological developments that have a major impact on society.
  • Algorithms, Filter Bubbles and Fake News
    ALGORITHMS, FILTER BUBBLES AND FAKE NEWS. By Dominik Grau, Managing Director, Beuth. Edited by Jessica Patterson. Below the surface of digital platforms lies a deep sea of algorithms. Hidden in plain sight, they shape how we consume content and the way content or services are recommended to us. Publishers need to understand the growing influence of algorithms and the digital filter bubbles they create. THE NATURE OF ALGORITHMS: Algorithms help publishers, platforms and brands to amplify and personalize content on almost all digital channels. Most of today's content management systems and recommendation engines are driven by tens of thousands of algorithms. On the most basic level, algorithms are "sets of defined steps structured to process instructions/data to produce an output", according to National University of Ireland Professor Rob Kitchin. Kitchin, who recently published a paper on 'Thinking critically about and researching algorithms', states that "algorithms can be conceived in a number of ways – technically, computationally, mathematically, politically, culturally, economically, contextually, materially, philosophically, ethically." [1] Dartmouth professors Tom Cormen and Devin Balkcom recently offered binary search as a specific example of the nature of algorithms: "Binary search is an efficient algorithm for finding an item from a sorted list of items. It works by repeatedly dividing in half the portion of the list that could contain the item, until you've narrowed down the possible locations to just one." [2] Just like mathematical formulae, algorithms compute predefined metrics and calculations, step by step. Their code has no bias; they "know" or "understand" nothing, yet they compute everything that has been implemented as a rule or path.
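The binary search that Cormen and Balkcom describe translates directly into code; a minimal runnable sketch in Python:

```python
# Binary search: repeatedly halve the portion of a sorted list that
# could still contain the item, until one location (or none) remains.

def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1  # target can only be in the upper half
        else:
            hi = mid - 1  # target can only be in the lower half
    return -1

print(binary_search([2, 3, 5, 7, 11, 13], 11))  # 4
```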
  • Characterizing Search-Engine Traffic to Internet Research Agency Web Properties
    Characterizing Search-Engine Traffic to Internet Research Agency Web Properties. Alexander Spangher (Information Sciences Institute, University of Southern California), Gireeja Ranade (University of California, Berkeley and Microsoft Research), Besmira Nushi (Microsoft Research), Adam Fourney (Microsoft Research), Eric Horvitz (Microsoft Research). ABSTRACT: The Russia-based Internet Research Agency (IRA) carried out a broad information campaign in the U.S. before and after the 2016 presidential election. The organization created an expansive set of internet properties: web domains, Facebook pages, and Twitter bots, which received traffic via purchased Facebook ads, tweets, and search engines indexing their domains. In this paper, we focus on IRA activities that received exposure through search engines, by joining data from Facebook and Twitter with logs from the Internet Explorer 11 and Edge browsers and the Bing.com search engine. We find that a substantial volume of Russian content was apolitical and emotionally-neutral in nature. Our observations demonstrate that such content gave IRA web-properties considerable exposure through search-engines and brought readers to websites hosting inflammatory content and engagement hooks. Our findings show that, like social media, web search also directed traffic to IRA generated web content, and the resultant traffic patterns are distinct from those of social media. Figure 1 caption: IRA campaign structure. Illustration of the structure of IRA sponsored content on the web. Content spread via a combination of paid promotions (Facebook ads), unpaid promotions (tweets and Facebook posts), and search referrals (organic search and recommendations).
  • Filter Bubbles and Social Media
    "FILTER BUBBLES", SOCIAL MEDIA & BIAS WHAT IS A "FILTER BUBBLE"? A filter bubble is an environment, especially an online environment, in which a person only ever comes in to contact with opinions and views that are the same as their own. WHAT IS "BIAS"? Bias is an opinion in favour of or against a particular person, group, or idea. We all have certain biases, it's almost impossible to avoid them. This is OK but we need to acknowledge that we have them as they will impact our response to the news. WHAT IS "POLITICAL BIAS"? Political bias is your preferred political stance. Almost every newspaper has one. Again, this is not a problem - we just need to know it's there. Mainstream newspapers are usually quite open about their political stance - this is a good thing. WHAT ABOUT SOCIAL MEDIA? Most of us get our news from social media. What we see on our social media is decided by a computer programme called an "algorithm", it shows us things that we are likely to enjoy. This means the news you see will almost always align with your views. WHAT'S THE PROBLEM HERE? If you only ever read news that confirms your point of view, then you won't have a balanced understanding of events. If you don't have the full picture, it can be hard to know what's actually happening! WHAT CAN I DO TO AVOID THIS? Read articles from a wide range of sources to get a balanced view. Be aware of bias while you read (your own and that of the news source).
  • Curation Algorithms and Filter Bubbles in Social Networks∗
    Curation Algorithms and Filter Bubbles in Social Networks. Ron Berman and Zsolt Katona. Abstract: Social platforms often use curation algorithms to match content to user tastes. Although designed to improve user experience, these algorithms have been blamed for increased polarization of consumed content. We analyze how curation algorithms impact the number of friends users follow and the quality of content generated on the network, taking into account horizontal and vertical differentiation. Although algorithms increase polarization for fixed networks, when they indirectly influence network connectivity and content quality, their impact on polarization and segregation is less clear. We find that network connectivity and content quality are strategic complements, and that introducing curation makes consumers less selective and increases connectivity. In equilibrium, content creators receive lower payoffs because the competition leads to a prisoner's dilemma. Filter bubbles are not always a consequence of curation algorithms. A perfect filtering algorithm increases content polarization and creates a filter bubble when the marginal cost of quality is low, while an algorithm focused on vertical content quality increases connectivity as well as lowers polarization and does not create a filter bubble. Consequently, although user surplus can increase through curating and encouraging high-quality content, the type of algorithm used matters for the unintended consequence of creating a filter bubble. Keywords: Social Media, Polarization, Filter Bubbles, Echo Chambers, Fake News, Algorithmic Curation, Game Theory. (We thank the NET Institute (www.netinst.org) for its generous support. Feedback from seminar participants at UTD Bass Conference, HUJI, SICS Conference, New York City Media Seminar, FCC, HBS, MIT Media Lab, IDC and Kellogg is acknowledged.)
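The "fixed network" intuition in the abstract, that perfect filtering polarizes consumption, can be illustrated with a toy simulation; this is my own sketch under stated assumptions, not the paper's game-theoretic model:

```python
# Toy illustration (NOT the paper's model): with perfect filtering each
# user consumes only the content item nearest their own taste, so the
# spread of consumed content widens compared with no curation.

import statistics

users = [0.1, 0.2, 0.5, 0.8, 0.9]      # user tastes on a 0-1 spectrum
content = [0.0, 0.25, 0.5, 0.75, 1.0]  # available content positions

no_curation = [statistics.mean(content) for _ in users]  # everyone sees everything
perfect_filter = [min(content, key=lambda c: abs(c - u)) for u in users]

print(statistics.pstdev(no_curation))     # 0.0: identical consumption
print(statistics.pstdev(perfect_filter))  # ~0.35: far more dispersed
```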
  • A Disinformation-Misinformation Ecology: the Case of Trump Thomas J
    A Disinformation-Misinformation Ecology: The Case of Trump. Thomas J. Froehlich. Abstract: This paper lays out many of the factors that make disinformation or misinformation campaigns of Trump successful. By all rational standards, he is unfit for office, a compulsive liar, incompetent, arrogant, ignorant, mean, petty, and narcissistic. Yet his approval rating tends to remain at 40%. Why do rational assessments of his presidency fail to have any traction? This paper looks at the conflation of knowledge and beliefs in partisan minds, how beliefs lead to self-deception and social self-deception, and how they reinforce one another. It then looks at psychological factors, conscious and unconscious, that predispose partisans to pursue partisan sources of information and reject non-partisan sources. It then explains how these factors sustain the variety and motivations of Trump supporters' commitment to Trump. The role of cognitive authorities like Fox News and right-wing social media sites is examined to show how the power of these media sources escalates and reinforces partisan views and the rejection of other cognitive authorities. These cognitive authorities also use emotional triggers to inflame Trump supporters, keeping them addicted by feeding their anger, resentment, or self-righteousness. The paper concludes by discussing the dynamics of the Trump disinformation-misinformation ecology, creating an Age of Inflamed Grievances. Keywords: Trumpism, disinformation, cognitive authority, Fox News, social media, propaganda, inflamed grievances, psychology of disinformation, Donald Trump, media, self-deception, social self-deception. Introduction: This paper investigates how disinformation-misinformation campaigns, particularly in the political arena, succeed and why they are so hard to challenge, defeat, or deflect.
  • The Role of Technology in Online Misinformation Sarah Kreps
    THE ROLE OF TECHNOLOGY IN ONLINE MISINFORMATION. Sarah Kreps, June 2020. EXECUTIVE SUMMARY: States have long interfered in the domestic politics of other states. Foreign election interference is nothing new, nor are misinformation campaigns. The new feature of the 2016 election was the role of technology in personalizing and then amplifying the information to maximize the impact. As a 2019 Senate Select Committee on Intelligence report concluded, malicious actors will continue to weaponize information and develop increasingly sophisticated tools for personalizing, targeting, and scaling up the content. This report focuses on those tools. It outlines the logic of digital personalization, which uses big data to analyze individual interests to determine the types of messages most likely to resonate with particular demographics. The report speaks to the role of artificial intelligence, machine learning, and neural networks in creating tools that distinguish quickly between objects, for example a stop sign versus a kite, or in a battlefield context, a combatant versus a civilian. Those same technologies can also operate in the service of misinformation through text prediction tools that receive user inputs and produce new text that is as credible as the original text itself. The report addresses potential policy solutions that can counter digital personalization, closing with a discussion of regulatory or normative tools that are less likely to be effective in countering the adverse effects of digital technology. INTRODUCTION: Meddling in domestic elections is nothing new as a tool of foreign influence. (…) and machine learning about user behavior to manipulate public opinion, allowed social media bots to target individuals or demographics known (…)
  • Fake News About Covid-19: A Comparative Analysis of Six Ibero-American Countries
    RLCS, Revista Latina de Comunicación Social, 78, 237-264 [Research] DOI: 10.4185/RLCS-2020-1476 | ISSN 1138-5820 | Año 2020. Fake news about Covid-19: a comparative analysis of six Ibero-American countries / Noticias falsas y desinformación sobre el Covid-19: análisis comparativo de seis países iberoamericanos. Liliana María Gutiérrez-Coba, University of La Sabana, Colombia ([email protected]). Patricia Coba-Gutiérrez, University of Ibagué, Colombia ([email protected]). Javier Andrés Gómez-Díaz, University Corporation Minuto de Dios, Colombia ([email protected]). How to cite this article / Standardized reference: Gutiérrez-Coba, L. M., Coba-Gutiérrez, P., & Gómez-Díaz, J. A. (2020). The intention behind the fake news about Covid-19: comparative analysis of six Ibero-American countries. Revista Latina de Comunicación Social, 78, 237-264. https://www.doi.org/10.4185/RLCS-2020-1476. ABSTRACT. Introduction: Producers of misinformation and fake news find fear, the uncertainty of pandemic times, and virtual social networks to be facilitators for dissemination, making such content harder to detect even for experts and laypeople. Typologies designed to identify and classify hoaxes allow their analysis from theoretical perspectives such as echo chambers, filter bubbles, information manipulation, and cognitive dissonance. Method: A content analysis was conducted on 371 fake news items previously verified by fact-checkers. After the intercoder test, the disinformation was classified according to its type, intentionality, the main topic addressed, the networks where it circulated, the deception technique, the country of origin, and its transnational character, among other variables. Results: The most common intent of fake news was ideological, associated with issues such as false announcements by governments, organizations, or public figures, as well as with the false-context elaboration technique.
  • Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making
    INFORMATION DISORDER: Toward an interdisciplinary framework for research and policy making. Council of Europe report DGI(2017)09. By Claire Wardle, PhD and Hossein Derakhshan, with research support from Anne Burns and Nic Dias. September 27, 2017. The opinions expressed in this work are the responsibility of the authors and do not necessarily reflect the official policy of the Council of Europe. All rights reserved. No part of this publication may be translated, reproduced or transmitted in any form or by any means without the prior permission in writing from the Directorate of Communications (F-67075 Strasbourg Cedex or [email protected]). Photos © Council of Europe. Published by the Council of Europe, F-67075 Strasbourg Cedex, www.coe.int. © Council of Europe, October 2017. Contents: Author Biographies; Executive Summary; Introduction; Part 1: Conceptual Framework (The Three Types of Information Disorder; The Phases and Elements of Information Disorder; The Three Phases of Information Disorder; The Three Elements of Information Disorder; 1) The Agents: Who are they and what motivates them?; 2) The Messages: What format do they take?; 3) Interpreters: How do they make sense of the messages?); Part 2: Challenges of filter bubbles and echo chambers; Part 3: Attempts at solutions; Part 4: Future trends; Part 5: Conclusions; Part 6: Recommendations; Appendix: European Fact-checking and Debunking Initiatives; References. Authors' Biographies: Claire Wardle, PhD. Claire Wardle is the Executive Director of First Draft, which is dedicated to finding solutions to the challenges associated with trust and truth in the digital age.