Improving Factual Consistency of Abstractive Summarization Via Question Answering

Feng Nan1, Cicero Nogueira dos Santos1, Henghui Zhu1, Patrick Ng1, Kathleen McKeown1,2, Ramesh Nallapati1, Dejiao Zhang1, Zhiguo Wang1, Andrew O. Arnold1, Bing Xiang1
1Amazon Web Services, 2Columbia University
{nanfen, cicnog, henghui, patricng, mckeownk, rnallapa, dejiaoz, zhiguow, anarnld, bxiang}@amazon.com

Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, pages 6881–6894, August 1–6, 2021. ©2021 Association for Computational Linguistics

Abstract

A commonly observed problem with state-of-the-art abstractive summarization models is that the generated summaries can be factually inconsistent with the input documents. The fact that automatic summarization may produce plausible-sounding yet inaccurate summaries is a major concern that limits its wide application. In this paper we present an approach to address factual consistency in summarization. We first propose an efficient automatic evaluation metric to measure factual consistency; next, we propose a novel learning algorithm that maximizes the proposed metric during model training. Through extensive experiments, we confirm that our method is effective in improving factual consistency and even overall quality of the summaries, as judged by both automatic metrics and human evaluation.

Input: ... "Klitschko doesn't have the legs, the power that he used to," said Lewis. "He has a chink in his armour after getting beat by Tyson Fury. Anthony Joshua is now taking that challenge, going after the man." ...
MLE: Anthony Joshua has a "chink in his armour" ahead of his world heavyweight title bout with Wladimir Klitschko, says former champion Lennox Lewis.
CONSEQ: Wladimir Klitschko has a "chink in his armour" and is no match for British champion Anthony Joshua, says former world heavyweight champion Lennox Lewis.

Table 1: Example summaries from the BART-large finetuned models on the test set. Standard MLE training generates a factually inconsistent summary whereas our proposed CONSEQ is consistent.

1 Introduction

Recent advances in neural text generation have led to significant improvement in the quality of abstractive summarization (Radford et al., 2019; Gehrmann et al., 2019; Lewis et al., 2019). Despite this progress, there are still many limitations facing neural text summarization (Kryscinski et al., 2019), the most serious of which is the tendency to generate summaries that are not factually consistent with the input document; a factually consistent summary only contains statements that can be inferred from the source document. Recent studies show that about 30% of the summaries generated by neural network sequence-to-sequence (seq2seq) models suffer from fact fabrication (Cao et al., 2018).

The standard training approach for seq2seq learning has been maximizing the log likelihood of the target given the input sequences (MLE). It has empirically performed well as a surrogate loss for evaluation metrics such as BLEU and ROUGE. This empirical success can be ascribed to the fact that both BLEU and ROUGE are directly linked to the n-gram overlap between the output and the target sequences, which can be efficiently learned via MLE. In contrast, metrics to capture factual consistency are much more elusive, as they must take into account the relations among tokens in the context of an entire sequence. The widely used ROUGE score is inadequate to quantify factual consistency (Kryscinski et al., 2019). In fact, the lack of an effective automatic metric for factual consistency has been the major hurdle in improving abstractive summarization model training beyond MLE. Table 1 shows an example of a factually inconsistent summary generated by fine-tuning the BART-large model (Lewis et al., 2019), which is a transformer-based seq2seq model pre-trained on a large corpus with denoising objectives. Standard MLE training produces summaries with factual errors that, in addition to hallucinating facts, sometimes even contradict the input article.

To make abstractive summarization models produce more factually consistent summaries, we need two critical components: an automatic evaluation metric for factual consistency and an effective training algorithm that maximizes factualness. Our main contributions lie in both areas. First, we propose an efficient automatic evaluation metric for factual consistency that is a simplification of the recently published QAGS protocol (Wang et al., 2020). Evaluating QAGS is computationally expensive and ill-suited for being part of the model training process. Our proposed protocol achieves a 55x speedup while correlating closely with QAGS (see Sec. A.2 in the Appendix for details). Second, we propose a new contrastive learning method that uses factualness as a training objective. We demonstrate through experiments that our method improves the factual consistency of summarization models, as measured by both automatic metrics such as QAGS and human evaluation.
2 An Efficient Metric for Factual Consistency

In order to improve factual consistency of summarization models, we must have a metric to quantify it. In addition, the metric needs to be computationally efficient so that we can incorporate it as part of the model training process. We first describe the QAGS protocol and then present our QUALS protocol.

2.1 Background on QAGS

Given a summary and an input document, QAGS (Wang et al., 2020) scores the summary using a four-step pipeline: firstly, it extracts the named entities and noun phrases in the summary as candidate answers using an answer extraction (AE) model; secondly, a question generation (QG) model takes in the summary, concatenated with each candidate answer, to generate a corresponding question; thirdly, a question answering (QA) model is used to answer each generated question in the context of the summary and in the context of the input document; finally, the answers from the QA model based on the summary and the input document are compared to calculate an F1 score in terms of their word-level overlap, which is the QAGS score. Intuitively, for the same question, if the answer obtained from the input document matches that from the summary, it is an indication that the summary is factually consistent with the input document.

Figure 1: Comparison between QAGS (top) and QUALS (bottom) protocols. QUALS uses only one QAGen model instead of the AE, QG and QA models used in QAGS.

We show the QAGS pipeline in the top part of Figure 1. QAGS has the advantage of being interpretable and is shown to correlate well with human evaluation. However, using QAGS directly as a part of the training process presents several challenges. First, QAGS requires three separate models for AE, QG and QA. In addition to the summarization model being trained, these models consume a significant amount of machine memory. Second, performing these three steps separately takes a significant amount of time. For good coverage in QAGS, multiple answers are extracted for a given summary and multiple questions are generated for each answer. This means the QA model needs to perform inference on an exploding number of inputs even for one summary. Indeed, QAGS evaluation on a training set would take 584 days on a single GPU (see Sec. A.2 in the Appendix for details).
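The final comparison step of the QAGS pipeline is a word-level F1 between the QA model's two answers. A minimal sketch of that comparison (our illustration, not the authors' code; the function name and whitespace tokenization are simplifications of standard token-level QA evaluation):

```python
from collections import Counter

def token_f1(answer_from_summary: str, answer_from_document: str) -> float:
    """Word-level F1 between the QA model's answer obtained from the
    summary and the one obtained from the input document, in the spirit
    of the final QAGS comparison step."""
    pred = answer_from_summary.lower().split()
    gold = answer_from_document.lower().split()
    if not pred or not gold:
        return float(pred == gold)  # both empty counts as a match
    overlap = Counter(pred) & Counter(gold)  # multiset token intersection
    num_same = sum(overlap.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred)
    recall = num_same / len(gold)
    return 2 * precision * recall / (precision + recall)
```

For example, `token_f1("former world champion", "world champion")` gives 0.8 (precision 2/3, recall 1): partial answer overlap is rewarded, while a fully contradictory answer scores 0.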
2.2 QUALS (ours)

In order to enable the use of a QA-driven metric to maximize factual correctness during the training of summarization models, we propose QUALS (QUestion Answering with Language model score for Summarization), which is illustrated in the bottom part of Figure 1. QUALS is an efficient metric that employs a single neural language model (QAGen), as proposed in (Shakeri et al., 2020), to generate both the questions and answers from the summary. In particular, given a summary, QAGen outputs a question-answer (q-a) pair jointly, separated by a special token <a>, as shown in Figure 2.

Figure 2: QAGen model: for an input text (p), it generates a question (q) followed by an answer (a).

Let $\mathrm{LL}_{\mathrm{summ}}(q, a)$ be the average log likelihood of generating the q-a pair from the given summary:

$$\mathrm{LL}_{\mathrm{summ}}(q, a) = \frac{1}{N_q + N_a}\left(\sum_{i=1}^{N_q} \log p_{\mathrm{QAGen}}(q^i \mid \mathrm{summ}, q^{<i}) + \sum_{i=1}^{N_a} \log p_{\mathrm{QAGen}}(a^i \mid \mathrm{summ}, q, a^{<i})\right)$$

where $N_q$ and $N_a$ are the number of tokens of the question and answer, respectively.
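Given the per-token log probabilities that QAGen assigns when conditioned on the summary, the definition above reduces to a single mean over question and answer tokens. A minimal sketch (the list-of-floats interface is our simplification; in practice these scores come from a forward pass of the QAGen model):

```python
def avg_loglik(question_logprobs, answer_logprobs):
    """Average log likelihood of a q-a pair, mirroring LL_summ(q, a):
    the sum of per-token log probabilities over BOTH the question and
    the answer tokens, divided by N_q + N_a."""
    n_q = len(question_logprobs)  # N_q: number of question tokens
    n_a = len(answer_logprobs)    # N_a: number of answer tokens
    total = sum(question_logprobs) + sum(answer_logprobs)
    return total / (n_q + n_a)
```

The same function computes $\mathrm{LL}_{\mathrm{doc}}(q, a)$ when the log probabilities are obtained by conditioning QAGen on the document instead of the summary.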
Note that we consider the log likelihood scores over both the question and answer tokens to account for factual consistency of both. To obtain good coverage and diverse q-a pairs, we use diverse beam search (Vijayakumar et al., 2016) to generate 60 q-a pairs for a given summary, with 60 diverse beam groups and a diverse beam strength of 0.5. We then filter out low-quality q-a pairs by keeping only those with answers found in the input summary. When multiple q-a pairs share the same […] to factual consistency; $\mathrm{LL}_{\mathrm{doc}}(q, a) - \mathrm{LL}_{\mathrm{summ}}(q, a)$ can help normalize these domain-related shifts, since both the document and summary share the same basic style, vocabulary and topic.

3 Improving Factual Consistency Through Contrastive Learning

Although QUALS can be computed more efficiently, using it in the training process is not straightforward because one would need to backpropagate through generated summaries and q-a pairs. We present our CONSEQ (CONtrastive SEQ2seq learning) algorithm that can effectively maximize such metrics in training.

To fix notation, $x = x_1, \ldots, x_m$ denotes a sequence of input tokens; $y = y_1, \ldots, y_n$ denotes a sequence of target output tokens; $\hat{y} = \hat{y}_1, \ldots, \hat{y}_{\hat{n}}$ denotes a sequence of generated tokens from a seq2seq model via sampling, i.e. $\hat{y} \sim p_\theta(\cdot \mid x)$, where $\theta$ is the parameter of the model. Let $r(\hat{y}, x)$ be the evaluation metric (in our case QUALS) that we aim to maximize.

3.1 CONSEQ

First, we train an initial seq2seq model with parameters $\theta_0$ using the original labeled training set $\{x^{(i)}, y^{(i)}\}$ via MLE.
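Combining the answer-in-summary filter with the $\mathrm{LL}_{\mathrm{doc}} - \mathrm{LL}_{\mathrm{summ}}$ normalization suggests how QUALS can act as the reward $r(\hat{y}, x)$ above. The sketch below is our illustration only: the `(answer, ll_doc, ll_summ)` triple format, the substring test for "answer found in the summary", the plain mean, and the zero fallback for an empty set are assumptions, not the paper's exact specification:

```python
def quals_reward(scored_pairs, summary):
    """QUALS-style reward for a generated summary: keep only q-a pairs
    whose answer text appears in the summary, then average
    LL_doc - LL_summ over the kept pairs (higher = more consistent)."""
    kept = [
        (ll_doc, ll_summ)
        for answer, ll_doc, ll_summ in scored_pairs
        if answer.lower() in summary.lower()  # crude "answer in summary" filter
    ]
    if not kept:
        return 0.0  # assumed fallback when no pair survives filtering
    return sum(ll_doc - ll_summ for ll_doc, ll_summ in kept) / len(kept)
```

A pair whose answer the document supports (high `ll_doc` relative to `ll_summ`) pushes the reward up, while unsupported pairs pull it down; pairs whose answers never occur in the summary are discarded before averaging.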
