
What is Your Article Based On? Inferring Fine-grained Provenance

Yi Zhang, Zachary G. Ives, Dan Roth
Department of Computer and Information Science
University of Pennsylvania, Philadelphia, PA, USA
{yizhang5, zives, [email protected]}

Abstract

When evaluating an article and the claims it makes, a critical reader must be able to assess where the information presented comes from, and whether the various claims are mutually consistent and support the conclusion. This motivates the study of claim provenance, which seeks to trace and explain the origins of claims. In this paper, we introduce new techniques to model and reason about the provenance of multiple interacting claims, including how to capture fine-grained information about the context. Our solution hinges on first identifying the sentences that potentially contain important external information. We then develop a query generator with our novel rank-aware cross attention mechanism, which aims at generating metadata for the source article, based on the context and signals collected from a search engine. This establishes relevant search queries and allows us to obtain source article candidates for each identified sentence; we then propose an ILP-based algorithm to infer the best sources from these candidates. We experiment with a newly created evaluation dataset, Politi-Prov (the data and the code will be available at http://cogcomp.org/page/publication_view/944), based on fact-checking articles from www.politifact.com; our experimental results show that our solution leads to a significant improvement over baselines.

Figure 1: An example of a claim (in the red box) with its article. Sentence 1 and sentence 2 (blue boxes) show examples from the article. Each sentence refers to external information: source article 1 and 2, respectively, with accompanying urls.

1 Introduction

Misinformation is on the rise, and people are fighting it with fact checking. However, most of the work in the current literature (Thorne et al., 2018; Zhang et al., 2019; Barrón-Cedeño et al., 2020; Hidey et al., 2020) focuses on automating fact-checking for a single claim. In reality, a claim can be complex, and proposed as the conclusion of an article. Therefore, understanding what information supports the article, especially information that did not originate within the article itself, and where that information comes from, is very important for readers who want to determine whether they can believe the claim.

Figure 1 shows an example of such a claim, "Marco Rubio says Anthony Fauci lies about masks. Fauci didn't.", with its article from politifact.com (https://www.politifact.com/factchecks/2020/dec/28/marco-rubio/marco-rubio-says-anthony-fauci-lied-about-masks-fa/). A critical reader of the content will find that several major sources support the author's claim: source article 1 in the figure is the CBS News "60 Minutes" interview with Anthony Fauci on March 8, 2020, which reveals that Dr. Fauci's main point was to preserve masks for those who were already ill and for people providing care. If readers can validate all sources used in the article, they will be able to determine whether the article is trustworthy. In this paper, our goal is to automatically find these sources for a given article. This is a different problem from fact-checking: fact-checking seeks evidence for a claim, while here we only care about the information sources the authors used when they were writing. Furthermore, the problem we address is also critical for authors who want to give credit to those who have contributed to their article, and it enables a recursive analysis that can trace back to the starting points of an article.

This motivates the study of provenance for natural language claims, which describes where a specific claim may have come from and how it has spread. Early work (Zhang et al., 2020) proposed a formulation to model, and a solution to infer, the provenance graph for a given claim. However, that model is insufficient to capture the provenance of an article, because (1) an article consists of multiple claims and leverages information from other sources, so the provenance of all of its claims should be included in the article's provenance; and (2) the inference solution they proposed can only extract domain-level provenance information, e.g., cbsnews.com, and cannot directly link the claim to its source article, e.g., https://www.cbsnews.com/news/preventing-coronavirus-facemask-60-minutes-2020-03-08/. Such fine-grained provenance information is important because it can help people understand the original context that influenced the information they read. Therefore, in this work, we argue that the notion of a provenance graph should be extended to incorporate provenance for articles, and that we need a more comprehensive solution that can identify important external information used in an article and infer its corresponding source article: namely, its fine-grained provenance information.

Technically, capturing fine-grained provenance for an article is challenging because (1) there may be a large number of sentences in an article, and not all of them come from external sources or are important (thus, their provenance may not be worth considering); and (2) a sentence in an article is usually just a textual fragment of its source article, so simply looking for other articles with related content may result in low precision with regard to finding the correct original article. In our running example, sentence 2 in Figure 1 is "On March 29, President Donald Trump and the coronavirus task force briefed the press on steps underway to increase ...", whose source is the White House's coronavirus task force press briefing on March 29, 2020. If we directly search for this sentence on the web, it is hard to find the briefing among the popular news articles that quote it. Instead, we need a model that can generate better keywords for a more focused search.
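To make this contrast concrete, the snippet below sketches the difference between querying a search engine with the raw sentence and querying it with generated, metadata-style keywords. It is only an illustration: the search function is a placeholder for whatever search API is used, and the metadata values are hand-written assumptions based on the running example, not the output of our query generator.

# Illustrative sketch only -- not the paper's query generator.
# `search` stands in for an arbitrary web-search API call.
def search(query: str) -> list:
    """Placeholder: return a list of result URLs for the query."""
    raise NotImplementedError

raw_sentence = ("On March 29, President Donald Trump and the coronavirus task "
                "force briefed the press on steps underway to increase ...")

# Searching the raw sentence mostly surfaces news articles that quote it,
# rather than the original briefing.
naive_candidates = search(raw_sentence)

# A query generator instead predicts metadata of the likely source article
# (title, publication date, source website) and searches for that.
# The values below are assumed for illustration.
focused_query = "coronavirus task force press briefing March 29, 2020 whitehouse.gov"
focused_candidates = search(focused_query)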
The key contributions of this paper are: (1) we introduce and formalize the problem of inferring fine-grained provenance for an article; (2) we propose a general framework to infer the source articles that have provided important information for the given article, including (a) a ranking module that can identify sentences that contain important external information, based on the main topic and the main entities in the article, (b) a query generator that can generate possible metadata for the source article, e.g., the title, the publication date, and the source website, based on the context of the selected sentences, and (c) an integer linear program (ILP) based algorithm to jointly identify the source articles from all of the candidates; and (3) to evaluate our solutions, we collect a new dataset, Politi-Prov, from politifact.com, and our experimental results show that the solution we propose leads to a significant improvement compared with baselines.

2 Problem Statement

Figure 2: The pipeline of inferring fine-grained provenance for an article.

Given an article d, we are to capture its fine-grained provenance by inferring the k source articles SA_k(d) that provide the most important information for d. We adopt the notion of provenance from (Zhang et al., 2020), while in this paper we focus on inferring the provenance of a claim based on the information in the given article. To find SA_k(d), there are three subproblems we need to solve.

First, we need to locate the important external information in d, which means we need a sentence ranking module that can estimate a score σ_i for each sentence s_i in d = {s_1, ..., s_n}, based on how likely it is that s_i contains external information. We then choose the top-k sentences based on their scores and try to find source articles for those sentences.

Second, for each selected sentence, we need to generate a list of candidate links that could be its source articles. To achieve this goal, we take advantage of a search engine, through which we can access the articles on the web. As discussed in Section 1, directly searching for the identified sentence on a search engine may result in low precision in finding the correct source article. Therefore, we propose to develop a query generator that generates possible metadata of the target source article as new search keywords, so that the search engine is more likely to recall source articles. We then collect all of the search results as the candidates for a selected sentence.

Finally, we need to infer the correct source article from the candidates for each identified sentence. Figure 2 depicts the three steps we conduct to infer fine-grained provenance, which correspond to the three subproblems listed above. We elaborate the details of each step in Section 4.
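As a rough illustration of this last step, the sketch below shows how a joint selection over all candidates could be written as an ILP, here using the PuLP library. The relevance scores, the objective, and the single constraint family are assumptions made for illustration; they are not our exact formulation, which is described in Section 4.

# Illustrative ILP sketch (not the paper's exact formulation), using PuLP.
# It assumes an upstream scorer provides rel[i][j], a relevance score between
# selected sentence i and its j-th candidate link.
import pulp

def infer_sources(candidates, rel):
    """candidates[i]: list of candidate URLs for sentence i;
    rel[i][j]: assumed relevance of candidates[i][j] to sentence i."""
    prob = pulp.LpProblem("source_selection", pulp.LpMaximize)

    # x[i, j] = 1 iff candidates[i][j] is chosen as the source of sentence i.
    x = {(i, j): pulp.LpVariable(f"x_{i}_{j}", cat="Binary")
         for i in range(len(candidates))
         for j in range(len(candidates[i]))}

    # Objective: maximize the total relevance of the chosen links.
    prob += pulp.lpSum(rel[i][j] * x[i, j] for (i, j) in x)

    # Each selected sentence is linked to exactly one source article.
    for i in range(len(candidates)):
        prob += pulp.lpSum(x[i, j] for j in range(len(candidates[i]))) == 1

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return {i: candidates[i][j] for (i, j) in x if x[i, j].value() == 1}

The full joint formulation may also encode interactions between sentences (for example, preferring sets of sources that are consistent with each other); this sketch keeps only the per-sentence assignment constraint.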
3 The Politi-Prov Dataset

We build a new dataset, Politi-Prov, from fact-checking articles on politifact.com. We regard each fact-checking article as the given article, and the source articles listed in its "Our Sources" section as the ground truth our system wants to return. We note that some sources may be missing from the ground truth we can obtain; therefore, we focus more on recall in the evaluation.

Overall, we collected data from 1765 articles, of which we use 883 for training, and 441 and 441 for validation and testing, respectively. On average, each article has 9.8 source articles.

4 Inferring Fine-grained Provenance

In this section, we elaborate how we solve the subproblems proposed in Section 2.

4.1 Sentence Ranking

Given an article, the first step is to identify the sentences that are most likely to contain important external information.
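The description of the ranking model is cut off in this excerpt, so the snippet below only illustrates the interface implied by Section 2: compute a score σ_i for every sentence and keep the top-k. The scorer here is a naive stand-in (term overlap with the article's main topic and entity terms, the signals the ranking module is described as using); it is not the paper's model, and the example terms are assumptions taken from the running example.

# Generic illustration of the sentence-ranking interface from Section 2.
# The scorer below is a naive stand-in, not the paper's ranking model.
def top_k_sentences(sentences, score_fn, k):
    """Return the k sentences with the highest scores sigma_i = score_fn(s_i)."""
    return sorted(sentences, key=score_fn, reverse=True)[:k]

def naive_external_info_score(sentence, topic_terms):
    # Stand-in sigma_i: count how many main-topic / main-entity terms appear.
    lowered = sentence.lower()
    return sum(term.lower() in lowered for term in topic_terms)

# Hypothetical usage on the running example:
# selected = top_k_sentences(
#     article_sentences,
#     lambda s: naive_external_info_score(s, ["Fauci", "masks", "coronavirus"]),
#     k=10)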