
Neural CRF Model for Sentence Alignment in Text Simplification

Chao Jiang, Mounica Maddela, Wuwei Lan, Yang Zhong, Wei Xu
Department of Computer Science and Engineering, The Ohio State University
{jiang.1530, maddela.4, lan.105, zhong.536, [email protected]}

Abstract

The success of a text simplification system heavily depends on the quality and quantity of complex-simple sentence pairs in the training corpus, which are extracted by aligning sentences between parallel articles. To evaluate and improve sentence alignment quality, we create two manually annotated sentence-aligned datasets from two commonly used text simplification corpora, Newsela and Wikipedia. We propose a novel neural CRF alignment model which not only leverages the sequential nature of sentences in parallel documents but also utilizes a neural sentence pair model to capture semantic similarity. Experiments demonstrate that our proposed approach outperforms all previous work on the monolingual sentence alignment task by more than 5 points in F1. We apply our CRF aligner to construct two new text simplification datasets, NEWSELA-AUTO and WIKI-AUTO, which are much larger and of better quality than the existing datasets. A Transformer-based seq2seq model trained on our datasets establishes a new state-of-the-art for text simplification in both automatic and human evaluation.¹

1 Introduction

Text simplification aims to rewrite complex text into simpler language while retaining its original meaning (Saggion, 2017). Text simplification can provide reading assistance for children (Kajiwara et al., 2013), non-native speakers (Petersen and Ostendorf, 2007; Pellow and Eskenazi, 2014), non-expert readers (Elhadad and Sutaria, 2007; Siddharthan and Katsos, 2010), and people with language disorders (Rello et al., 2013). As a preprocessing step, text simplification can also improve the performance of many natural language processing (NLP) tasks, such as parsing (Chandrasekar et al., 1996), semantic role labelling (Vickrey and Koller, 2008), information extraction (Miwa et al., 2010), summarization (Vanderwende et al., 2007; Xu and Grishman, 2009), and machine translation (Chen et al., 2012; Štajner and Popović, 2016).

Automatic text simplification is primarily addressed by sequence-to-sequence (seq2seq) models, whose success largely depends on the quality and quantity of the training corpus, which consists of pairs of complex-simple sentences. Two widely used corpora, NEWSELA (Xu et al., 2015) and WIKILARGE (Zhang and Lapata, 2017), were created by automatically aligning sentences between comparable articles. However, due to the lack of reliably annotated data,² sentence pairs are often aligned using surface-level similarity metrics, such as the Jaccard coefficient (Xu et al., 2015) or the cosine distance of TF-IDF vectors (Paetzold et al., 2017), which fail to capture paraphrases and the context of surrounding sentences. A common drawback of text simplification models trained on such datasets is that they behave conservatively, performing mostly deletion and rarely paraphrasing (Alva-Manchego et al., 2017). Moreover, WIKILARGE is the concatenation of three earlier datasets (Zhu et al., 2010; Woodsend and Lapata, 2011; Coster and Kauchak, 2011) that are extracted from Wikipedia dumps and are known to contain many errors (Xu et al., 2015).

To address these problems, we create the first high-quality manually annotated sentence-aligned datasets: NEWSELA-MANUAL with 50 article sets, and WIKI-MANUAL with 500 article pairs. We design a novel neural CRF alignment model, which utilizes fine-tuned BERT to measure semantic similarity and leverages the similar order of content between parallel documents, combined with an effective paragraph alignment algorithm. Experiments show that our proposed method outperforms all previous monolingual sentence alignment approaches (Štajner et al., 2018; Paetzold et al., 2017; Xu et al., 2015) by more than 5 points in F1.

By applying our alignment model to all 1,882 article sets in Newsela and 138,095 article pairs in the Wikipedia dump, we then construct two new simplification datasets, NEWSELA-AUTO (666,645 sentence pairs) and WIKI-AUTO (488,332 sentence pairs). Our new datasets, with improved quantity and quality, facilitate the training of complex seq2seq models. A BERT-initialized Transformer model trained on our datasets outperforms the state-of-the-art by 3.4% in terms of SARI, the main automatic metric for text simplification. Our simplification model produces 25% more rephrasing than models trained on the existing datasets. Our contributions include:

1. Two manually annotated datasets that enable the first systematic study for training and evaluating monolingual sentence alignment;
2. A neural CRF sentence aligner and a paragraph alignment algorithm that employ fine-tuned BERT to capture semantic similarity and take advantage of the sequential nature of parallel documents;
3. Two automatically constructed text simplification datasets which are of higher quality and 4.7 and 1.6 times larger than the existing datasets in their respective domains;
4. A BERT-initialized Transformer model for automatic text simplification, trained on our datasets, which establishes a new state-of-the-art in both automatic and human evaluation.

¹Code and data are available at: https://github.com/chaojiang06/wiki-auto. Newsela data need to be requested at: https://newsela.com/data/.
²Hwang et al. (2015) annotated 46 article pairs from the Simple-Normal Wikipedia corpus; however, its annotation is noisy, and it contains many sentence splitting errors.

[Figure 1: An example of sentence alignment between an original news article (right) and its simplified version (left) in Newsela. The label $a_i$ for each simple sentence $s_i$ is the index of the complex sentence $c_{a_i}$ it aligns to.]

2 Neural CRF Sentence Aligner

We propose a neural CRF sentence alignment model, which leverages the similar order of content presented in parallel documents and captures editing operations across multiple sentences, such as splitting and elaboration (see Figure 1 for an example). To further improve accuracy, we first align paragraphs based on semantic similarity and vicinity information, and then extract sentence pairs from these aligned paragraphs. In this section, we describe the task setup and our approach.

2.1 Problem Formulation

Given a simple article (or paragraph) $S$ of $m$ sentences and a complex article (or paragraph) $C$ of $n$ sentences, for each sentence $s_i$ ($i \in [1, m]$) in the simple article, we aim to find its corresponding sentence $c_{a_i}$ ($a_i \in [0, n]$) in the complex article. We use $a_i$ to denote the index of the aligned sentence, where $a_i = 0$ indicates that sentence $s_i$ is not aligned to any sentence in the complex article. The full alignment $a$ between the article (or paragraph) pair $S$ and $C$ can then be represented by a sequence of alignment labels $a = (a_1, a_2, \dots, a_m)$. Figure 1 shows an example of alignment labels. One specific aspect of our CRF model is that it uses a varied number of labels for each article (or paragraph) pair rather than a fixed set of labels.
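To make this encoding concrete, here is a toy example in Python (ours, not from the paper; the sentence counts and label values are invented) of one possible alignment and the varied label space it implies:

```python
# Toy encoding of the alignment labels from Section 2.1.
# Label 0 is the NULL label (the simple sentence aligns to nothing);
# labels 1..n index sentences of the complex article.

m = 4  # sentences in the simple article S
n = 5  # sentences in the complex article C

# a[i-1] holds the label a_i for simple sentence s_i. Here s_1 and
# s_2 both align to c_2 (a sentence split), s_3 is unaligned new
# content, and s_4 aligns to c_4.
a = [2, 2, 0, 4]

assert len(a) == m and all(0 <= label <= n for label in a)

# The label space is [0, n], so its size depends on the complex
# article -- the "varied number of labels" noted above.
num_labels = n + 1
```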
2.2 Neural CRF Sentence Alignment Model

We learn $P(a \mid S, C)$, the conditional probability of alignment $a$ given an article pair $(S, C)$, using a linear-chain conditional random field:

$$P(a \mid S, C) = \frac{\exp(\Psi(a, S, C))}{\sum_{a' \in \mathcal{A}} \exp(\Psi(a', S, C))} = \frac{\exp\big(\sum_{i=1}^{|S|} \psi(a_i, a_{i-1}, S, C)\big)}{\sum_{a' \in \mathcal{A}} \exp\big(\sum_{i=1}^{|S|} \psi(a'_i, a'_{i-1}, S, C)\big)} \tag{1}$$

where $|S| = m$ denotes the number of sentences in article $S$, and $\mathcal{A}$ is the set of all possible alignments. The score $\Psi(a, S, C) = \sum_{i=1}^{|S|} \psi(a_i, a_{i-1}, S, C)$ sums over the sequence of alignment labels $a = (a_1, a_2, \dots, a_m)$ between the simple article $S$ and the complex article $C$, and can be decomposed into two factors as follows:

$$\psi(a_i, a_{i-1}, S, C) = \mathrm{sim}(s_i, c_{a_i}) + T(a_i, a_{i-1}) \tag{2}$$

where $\mathrm{sim}(s_i, c_{a_i})$ is the semantic similarity score between the two sentences, and $T(a_i, a_{i-1})$ is a pairwise transition score for label $a_i$ following label $a_{i-1}$.

Semantic Similarity  A fundamental problem in sentence alignment is to measure the semantic similarity between two sentences $s_i$ and $c_j$. Prior work used lexical similarity measures, such as Jaccard similarity (Xu et al., 2015), TF-IDF (Paetzold et al., 2017), and continuous n-gram features (Štajner et al., 2018). In this paper, we fine-tune BERT (Devlin et al., 2019) on our manually labeled dataset (details in §3) to capture semantic similarity.
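The paper defers the implementation of the fine-tuned BERT scorer to Appendix A.1; the snippet below is therefore only a plausible sketch built on the HuggingFace transformers library, where the base checkpoint, the binary aligned-vs-not classification head, and the use of the positive-class probability as $\mathrm{sim}(s_i, c_j)$ are all our assumptions:

```python
# Hedged sketch of a BERT sentence-pair scorer for sim(s_i, c_j).
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# Assumed head: binary classification (aligned vs. not aligned),
# fine-tuned on the manually labeled sentence pairs of Section 3.
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)
model.eval()

def sim(simple_sent: str, complex_sent: str) -> float:
    """Score one (simple, complex) sentence pair."""
    inputs = tokenizer(simple_sent, complex_sent,
                       truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # shape (1, 2)
    # Probability of the "aligned" class serves as the similarity.
    return torch.softmax(logits, dim=-1)[0, 1].item()
```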
Alignment Label Transition  In parallel documents, the contents of the articles are often presented in a similar order. [...] any sentences. The score is computed as follows:

$$T(a_i, a_{i-1}) = \mathrm{FFNN}([g_1; g_2; g_3; g_4]) \tag{4}$$

where $[\,;\,]$ represents the concatenation operation and FFNN is a 2-layer feedforward neural network. We provide more implementation details of the model in Appendix A.1 (a stand-in sketch of this scorer appears at the end of this section).

2.3 Inference and Learning

During inference, we find the optimal alignment $\hat{a}$:

$$\hat{a} = \operatorname*{argmax}_{a} P(a \mid S, C) \tag{5}$$

using the Viterbi algorithm in $O(mn^2)$ time (see the decoding sketch at the end of this section). During training, we maximize the conditional probability of the gold alignment label $a^*$:

$$\log P(a^* \mid S, C) = \Psi(a^*, S, C) - \log \sum_{a' \in \mathcal{A}} \exp(\Psi(a', S, C)) \tag{6}$$

The second term sums the scores of all possible alignments and can be computed using the forward algorithm, also in $O(mn^2)$ time.

2.4 Paragraph Alignment

Both accuracy and computing efficiency can be improved if we align paragraphs before aligning sentences. In fact, our empirical analysis revealed that sentence-level alignments mostly reside within the corresponding aligned paragraphs (details in §4.4 and Table 3). Moreover, aligning paragraphs first provides more training instances and reduces [...]
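As promised above, here is a minimal PyTorch stand-in for the transition scorer of Eq. (4). Since the feature definitions (Eq. (3)) are missing from this excerpt, the features $g_1, \dots, g_4$ are treated as opaque inputs, and the hidden size is an arbitrary choice of ours:

```python
# Hedged sketch of the 2-layer FFNN transition scorer in Eq. (4).
import torch
import torch.nn as nn

class TransitionScorer(nn.Module):
    def __init__(self, hidden: int = 64):  # hidden size: our guess
        super().__init__()
        self.ffnn = nn.Sequential(
            nn.Linear(4, hidden),  # input: concatenated [g1; g2; g3; g4]
            nn.ReLU(),
            nn.Linear(hidden, 1),  # output: scalar T(a_i, a_{i-1})
        )

    def forward(self, g: torch.Tensor) -> torch.Tensor:
        return self.ffnn(g).squeeze(-1)

# Usage: scorer(torch.tensor([g1, g2, g3, g4])) -> transition score.
```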
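And the decoding sketch referenced in §2.3: plain Viterbi over the label space $[0, n]$. Here `sim_scores` and `trans` are hypothetical stand-ins for the fine-tuned BERT scorer and the learned transition score, with `trans(prev, cur)` playing the role of $T(a_i, a_{i-1})$ for $a_{i-1} = $ prev and $a_i = $ cur; this is not the authors' implementation:

```python
# Minimal Viterbi decoding for Eq. (5), O(m * n^2) as stated in 2.3.
from typing import Callable, List

def viterbi_align(sim_scores: List[List[float]],
                  trans: Callable[[int, int], float]) -> List[int]:
    """sim_scores[i][c] = sim(s_{i+1}, c_c); column 0 = NULL label."""
    m = len(sim_scores)         # number of simple sentences
    n = len(sim_scores[0]) - 1  # labels range over 0..n
    NEG_INF = float("-inf")

    # best[i][c]: score of the best label prefix ending in a_i = c.
    best = [[NEG_INF] * (n + 1) for _ in range(m)]
    back = [[0] * (n + 1) for _ in range(m)]
    best[0] = list(sim_scores[0])  # no transition into the first label

    for i in range(1, m):
        for cur in range(n + 1):
            for prev in range(n + 1):
                score = best[i - 1][prev] + trans(prev, cur)
                if score > best[i][cur]:
                    best[i][cur], back[i][cur] = score, prev
            best[i][cur] += sim_scores[i][cur]

    # Backtrace the optimal label sequence a-hat.
    a = [max(range(n + 1), key=lambda c: best[m - 1][c])]
    for i in range(m - 1, 0, -1):
        a.append(back[i][a[-1]])
    return a[::-1]
```

The triple loop makes the $O(mn^2)$ cost explicit; in practice the transition scores can be precomputed into an $(n+1) \times (n+1)$ matrix so the FFNN is not re-run inside the inner loop.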