Trustworthy Misinformation Mitigation with Soft Information Nudging
Benjamin D. Horne, Maurício Gruppi, and Sibel Adalı
Department of Computer Science, Rensselaer Polytechnic Institute, Troy, NY, USA

Abstract—Research in combating misinformation reports many negative results: facts may not change minds, especially if they come from sources that are not trusted. Individuals can disregard and justify lies told by trusted sources. This problem is made even worse by social recommendation algorithms, which help amplify conspiracy theories and information confirming one's own biases due to companies' efforts to optimize for clicks and watch time over individuals' own values and public good. As a result, more nuanced voices and facts are drowned out by a continuous erosion of trust in better information sources. Most misinformation mitigation techniques assume that discrediting, filtering, or demoting low-veracity information will help news consumers make better information decisions. However, these negative results indicate that some news consumers, particularly extreme or conspiracy news consumers, will not be helped. We argue that, given this background, technology solutions to combating misinformation should not simply seek facts or discredit bad news sources, but instead use more subtle nudges towards better information consumption. Repeated exposure to such nudges can help promote trust in better information sources and also improve societal outcomes in the long run. In this article, we discuss technological solutions that can help in developing such an approach and introduce one such model, called Trust Nudging.

Index Terms—misinformation, disinformation, decision support systems, information trust, nudge theory, recommendation

I. INTRODUCTION

There are many useful and necessary paths to combating misinformation. These paths include technical methods to identify incorrect or misleading claims [1], [2], methods to make correct information more easily available [3], and methods to identify sources that disseminate incorrect information [4], [5]. Some research pathways are non-technical but equally if not more important, as they address the underlying issues and institutions that lead to the creation, dissemination, and consumption of misinformation [6], [7]. There has also been significant growth in political fact-checking organizations, including the fact-checking of news articles, social media posts, and claims made by politicians [8].

In all these paths, there are various underlying challenges. Overall, the misinformation problem is deeper than identifying what is "fake news." While the dissemination of proven incorrect information is a real problem, confirming that specific information is incorrect can be deeply contextual. This opens the information's correctness up to debate, which can lead to suppression of minority voices and opinions. Furthermore, sophisticated misinformation campaigns mix correct and incorrect information, which can cause uncertainty and make discrediting information more difficult. The majority of technical solutions have focused on classifying these extremes (fake and real), which makes uncertain, mixed-veracity, and deeply contextual information difficult to assess automatically.

A deeper problem that is left unaddressed in the technical research threads is what to do when information corrections, whether done by an algorithm or a journalist, do not work. Even if information is correctly discredited, consumers may choose to ignore the correct information due to distrust in the platform, algorithm, or organization providing the corrected information. This behavior is particularly prevalent among consumers with extreme or conspiratorial views [9]. If low-veracity information is filtered out or demoted, consumers may become more extreme and distrust the contemporary media platforms. The rise of alternative "free speech" platforms such as Gab and Bitchute is an example of this [10]. Similarly, if consumers perceive this filtering, demoting, or discrediting as partisan, distrust in information corrections can persist [11], [12], resulting in reduced trust in the news source, the platform or algorithm curating the information, and the fact-checking organization. Due to this distrust, solutions to correcting misinformation can be ineffective for some consumers [9].

In this paper, we begin to address this problem: How can online media systems, if they are willing to, support trust in higher quality sources? Specifically, we propose Trust Nudging, a generic, trust-based recommendation model for improving the quality of news consumed. This proposed method is built on the concept of nudging, which provides alternatives without forbidding any options or significantly changing the economic incentives [13]. In essence, we would like to provide alternative sources of information to users at decision points without taking away any agency from them and without suppressing information. To do this, we provide subtle recommendations to readers in order to nudge them towards news producers that are of objectively higher quality but also have a chance of being trusted, thereby avoiding recommendations that may not work or may break trust. We leverage news relationship graphs and the news already read by the consumer to approximate the trust of recommended news sources. Using a simulation, we show that this model can slowly increase the quality of news a consumer is reading, while not demanding a substantial shift in trust or ideological beliefs. Furthermore, we show that, as a side effect, this model lessens the partisan extremity of the news being read. In addition to simulating this generic model, we outline different research threads that can help support this approach, as well as different choice architectures that can support better information consumption. Lastly, we discuss the benefits and potential drawbacks of this type of recommendation method.
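As a rough illustration of the kind of decision rule this implies, the sketch below shows one possible trust-constrained recommendation step in Python. It is only a toy, not the model evaluated in our simulation: the Source and nudge names, the quality and trust scores, and the max_trust_drop threshold are all hypothetical, and the actual model approximates trust from news relationship graphs and reading history rather than from fixed numbers.

```python
# Toy sketch of a trust-constrained nudge (illustrative only; all names,
# scores, and thresholds below are hypothetical).

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Source:
    name: str
    quality: float  # estimated objective quality, in [0, 1]
    trust: float    # approximated trust the reader places in the source, in [0, 1]

def nudge(current: Source, candidates: List[Source],
          max_trust_drop: float = 0.15) -> Optional[Source]:
    """Recommend a higher-quality source the reader still has a chance of trusting.

    Rather than jumping to the best outlet overall (which the reader may simply
    distrust and ignore), only sources whose approximated trust is within
    max_trust_drop of the reader's current source are considered, and the
    highest-quality source among them is recommended.
    """
    acceptable = [s for s in candidates
                  if s.quality > current.quality
                  and current.trust - s.trust <= max_trust_drop]
    if not acceptable:
        return None  # no safe nudge this round; the reader keeps full agency
    return max(acceptable, key=lambda s: s.quality)

# Hypothetical sources, ordered from low to high quality.
sources = [
    Source("conspiracy-blog", quality=0.1, trust=0.9),
    Source("hyper-partisan-outlet", quality=0.3, trust=0.8),
    Source("tabloid", quality=0.5, trust=0.7),
    Source("regional-paper", quality=0.7, trust=0.6),
    Source("wire-service", quality=0.9, trust=0.4),
]

reader = sources[0]
step = nudge(reader, sources)
while step is not None:
    print(f"nudge: {reader.name} -> {step.name}")
    reader = step
    step = nudge(reader, sources)
```

On this toy data the reader is nudged stepwise from the lowest-quality source toward better ones, and the process stops once no higher-quality source is close enough to the reader's current trust, rather than ever jumping directly to the highest-quality outlet.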
II. RELATED WORK

A. Current Approaches to Misinformation Mitigation

There have been many proposed technical solutions to combating online misinformation. The vast majority of the technical solutions have been developed as detection systems, which can filter out or discredit news that is of low veracity or provide fact-checking on the claims within the media. These solutions range widely in terms of the technical methods used, including various types of machine learning models using the content in news articles and claims [1], [4], [14], deep neural network models utilizing social features of shared news [15], source-level ranking models [3], and knowledge-graph models for fact-checking [2]. Many of these approaches have shown high accuracy in lab settings. Some approaches have also shown robustness to concept drift and some adversarial attacks [16].
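To make the flavor of these detection systems concrete, the following is a minimal sketch of a content-based classifier in the general spirit of the machine learning approaches cited above; it does not reproduce any of those systems. It uses scikit-learn's TfidfVectorizer and LogisticRegression, and the tiny inline training set is a placeholder invented purely for illustration.

```python
# Minimal sketch of a content-based veracity classifier of the kind the
# detection literature above describes. The inline "dataset" is a placeholder;
# real systems train on large labeled corpora with far richer features.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training articles with veracity labels (1 = reliable, 0 = low veracity).
articles = [
    "City council approves budget after public hearing, officials say.",
    "Study published in peer-reviewed journal finds modest effect of new drug.",
    "SHOCKING: miracle cure THEY don't want you to know about!!!",
    "Anonymous insider reveals secret global plot behind the election.",
]
labels = [1, 1, 0, 0]

# Bag-of-words (TF-IDF) features feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(articles, labels)

# Score an unseen article; a downstream system might filter, demote, or label
# the article based on this probability.
new_article = "Insider reveals the shocking cure doctors are hiding."
prob_reliable = model.predict_proba([new_article])[0][1]
print(f"estimated probability the article is reliable: {prob_reliable:.2f}")
```

In practice, such a score would drive the filtering, demoting, or labeling decisions discussed in this section, which is exactly the step whose effectiveness the remainder of this section questions.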
The assumption of detection-based misinformation solutions is that discrediting false or misleading information will help consumers make fully informed decisions. However, there is reason to believe that discrediting or filtering out bad information will not help some news consumers. First, discrediting information may be ignored by consumers with extreme or conspiratorial views. As described in [9]: "A distinctive feature of

Second, information trust plays a role in belief updating. Information trust is a mix of an individual's own judgment of the information and their trust in the source [25], [26]. When assessing trust, an individual may rely solely on their own evaluation of the information, especially if the source is not trusted, in which case confirmation of the reader's own beliefs, as well as other heuristics, may play a large role [27]. For example, information that is compatible with a person's current beliefs can be seen as more credible, and stories that are coherent may be easier to trust [27]. Many sources disseminating misinformation have become quite good at targeting such heuristics [28]. Similarly, for trusted sources, the information can be accepted as true without much critical evaluation. Trust in sources is a complex concept as well, evaluated on multiple axes, such as the alignment of the source's values with the reader's, or the source's perceived competence.

Over the past decade there has been an erosion of trust in media and political institutions [11], [12], which can materialize as the polarization of trust in news outlets. If an algorithm recommends news from a high-quality source that is initially distrusted by the consumer, it is unlikely that the consumer will make a change. In the context of politics, a strongly partisan reader may only trust sources closely aligned with their political view. In this case, recommending an article from the opposite political camp is highly unlikely to work. Similarly, telling the reader of a conspiratorial news source to read an article from a neutral source is unlikely to yield any impact. As both disinformation production and information trust become more politically polarized, methods that filter, block, demote, or discredit may be less effective, as they may be perceived as partisan paternalism.

The decline of trust in long-standing news outlets is matched with an increase in trust of information recommended on social media [11], although as social media platforms become more contemporary, this trust has also wavered. Partisan-based trust of