SimpleNER Sentence Simplification System for GEM 2021

K V Aditya Srivatsa, Monil Gokani, Manish Shrivastava
Language Technologies Research Center (LTRC)
Kohli Center on Intelligent Systems
International Institute of Information Technology, Hyderabad
{k.v.aditya, monil.gokani}@research.iiit.ac.in, [email protected]

Abstract

This paper describes SimpleNER, a model developed for the sentence simplification task at GEM-2021. Our system is a monolingual Seq2Seq Transformer architecture that uses control tokens pre-pended to the data, allowing the model to shape the generated simplifications according to user-desired attributes. Additionally, we show that NER-tagging the training data before use helps stabilize the effect of the control tokens and significantly improves the overall performance of the system. We also employ pretrained embeddings to reduce data sparsity and allow the model to produce more generalizable outputs.

1 Introduction

Sentence simplification aims at reducing the linguistic complexity of a given text while preserving all the relevant details of the initial text. This is particularly useful for people with cognitive disabilities (Evans et al., 2014), as well as for second language learners and people with low-literacy levels (Watanabe et al., 2009). Text and sentence simplification also play an important role within NLP: simplification has been utilized as a preprocessing step in larger NLP pipelines, where it can greatly aid learning by reducing vocabulary and regularizing syntax.

In our model, we use control tokens to tune a Seq2Seq Transformer model (Vaswani et al., 2017) for sentence simplification. We take character-length compression, extent of paraphrase, and lexical & syntactic complexity as attributes to gauge the transformations between complex and simple sentence pairs. We then represent each of these attributes as numerical measures, which are then added to our data. We show that this provides a considerable improvement over as-is Transformer approaches.

The use of control tokens in Seq2Seq models for sentence simplification has been explored before (Martin et al., 2020), but this approach has been shown to add data sparsity to the system: the model is required to learn the distribution of the various control tokens and the expected outputs across the ranges of each control token. To mitigate this sparsity, we process our data to replace named entities with respective tags using an NER tagger. We show that this reduces the model vocabulary and allows for greater generalization. To further curb the data sparsity, we make use of pre-trained embeddings as initial input embeddings for model training. Our code is publicly available at https://github.com/kvadityasrivatsa/gem_2021_simplification_task.

2 Background

2.1 Sentence Simplification

Past approaches towards sentence simplification have dealt with it as a monolingual machine translation (MT) task (specifically Seq2Seq MT (Sutskever et al., 2014)). This meant training MT architectures over complex-simple sentence pairs, either aligned manually (Alva-Manchego et al., 2020; Xu et al., 2016) or automatically (Zhu et al., 2010; Wubben et al., 2012) using large complex-simple repository pairs such as the English Wikipedia and the Simple English Wikipedia.

Some implementations also utilize reinforcement learning (Zhang and Lapata, 2017) over the MT task, with automated metrics such as SARI (Xu et al., 2016), information preservation, and grammatical fluency constituting the training reward.

2.2 Controllable Text Generation

A recent approach towards sentence simplification involves using control tokens during machine translation (Martin et al., 2020). For simplification, these tokens encode and enforce changes in certain attributes of the text. Similar approaches for controlling generated text have been explored in other domains: Filippova (2020) uses control tokens to estimate and control the amount of hallucination in generated text, and Fan et al. (2018) explored pre-pending control tokens to the input text for summarization, providing control over the length of the output and customizing text generation for different sources.

Our model makes use of control tokens similar to Martin et al. (2020) to tailor the generated simplifications according to the extent of changes in the following attributes: character length, extent of paraphrasing, and lexical & syntactic complexity. These attributes are represented by their respective numerical measures (see 3.1) and then pre-pended to the complex sentences in specific formats (Table 1). Alongside this, we use NER tagging and pre-trained input embeddings as a method to curb data sparsity and unwanted named entity (NE) replacements.

Table 1: Control Tokens used for Modelling

    Control Attribute        Control Measure                     Control Token
    Amount of compression    Compression ratio                   <NbChars x.xx>
    Paraphrasing             Levenshtein similarity              <LevSim x.xx>
    Lexical complexity       Avg. third-quartile of log-ranks    <WordRank x.xx>
    Syntactic complexity     Max dependency tree depth           <DepTreeDepth x.xx>

3 System Overview

3.1 Control Attributes

Following Martin et al. (2020), we encode the following attributes during training and attempt to control them during inference time. E.g.:

Complex: "<NbChars 0.80> <LevSim 0.76> <WordRank 0.79> it is particularly famous for the cultivation of kiwifruit ."

Simple: "It is mostly famous for the growing of kiwifruit ."

3.1.1 Amount of compression

Compression in sequence length has been shown to be correlated with the simplicity and readability of text (Martin et al., 2019). Since compression as an operation directly involves deletion, controlling its extent plays a crucial role in the extent of information preservation. We make use of the compression ratio (control token: 'NbChars') between the character lengths of the simple and complex sentences to encode this attribute.

3.1.2 Paraphrasing

The extent of paraphrasing between the complex and simple sentences ranges from a near replica of the source sentence to a very dissimilar and possibly simplified one. The measure used for this attribute is the Levenshtein similarity (Levenshtein, 1966; control token: 'LevSim') between the complex and simple sentences.

3.1.3 Lexical Complexity

For a young reader or a second language learner, complex words can decrease the overall readability of the text substantially. The average word rank (control token: 'WordRank') of a sequence has been shown to correlate with the lexical complexity of the sentence (Paetzold and Specia, 2016). Therefore, similar to Martin et al. (2020), we use the average of the third quartile of log-ranks of the words in a sentence (excluding stop-words and special tokens) to encode its lexical complexity.

3.1.4 Syntactic Complexity

Complex syntactic structures and multiple nested clauses can decrease the readability of text, especially for people with reading disabilities. To partially account for this, we make use of the maximum syntactic tree depth (control token: 'DepTreeDepth') of the sentence as a measure of its syntactic complexity. We use spaCy's English dependency parser (Honnibal et al., 2020) to extract the depth: the deeper the syntax tree of a sentence, the more likely it is to involve highly nested clausal structures.
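The excerpt does not include the authors' preprocessing code, so the following is only a minimal Python sketch of how these four control values might be computed for a training pair and pre-pended in the Table 1 format. The word-frequency rank table, the spaCy model name, and the choice to encode WordRank and DepTreeDepth as simple-to-complex ratios rather than raw values are assumptions, loosely following Martin et al. (2020).

    import math
    import spacy

    # Assumed English pipeline with a dependency parser; the paper only states
    # that spaCy's English parser (Honnibal et al., 2020) is used.
    nlp = spacy.load("en_core_web_sm")

    def levenshtein_similarity(a, b):
        # 1 - edit_distance / max_length, via the standard dynamic program.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            curr = [i]
            for j, cb in enumerate(b, start=1):
                curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
            prev = curr
        return 1.0 - prev[-1] / max(len(a), len(b), 1)

    def word_rank_measure(tokens, word_rank, stop_words):
        # Average of the upper (third) quartile of log-ranks, skipping stop-words
        # and tokens missing from the rank table (one reading of Section 3.1.3).
        ranks = sorted(math.log(1 + word_rank[t]) for t in tokens
                       if t in word_rank and t not in stop_words)
        if not ranks:
            return 1.0
        upper = ranks[int(0.75 * (len(ranks) - 1)):]
        return sum(upper) / len(upper)

    def dep_tree_depth(sentence):
        # Maximum depth of the spaCy dependency tree over the parsed input.
        def depth(token):
            return 1 + max((depth(child) for child in token.children), default=0)
        return max(depth(sent.root) for sent in nlp(sentence).sents)

    def prepend_control_tokens(complex_sent, simple_sent, word_rank, stop_words):
        # Oracle control values for a training pair, rounded to the x.xx format of Table 1.
        nb_chars = len(simple_sent) / len(complex_sent)
        lev_sim = levenshtein_similarity(complex_sent, simple_sent)
        w_rank = (word_rank_measure(simple_sent.split(), word_rank, stop_words)
                  / word_rank_measure(complex_sent.split(), word_rank, stop_words))
        tree_depth = dep_tree_depth(simple_sent) / dep_tree_depth(complex_sent)
        return (f"<NbChars {nb_chars:.2f}> <LevSim {lev_sim:.2f}> "
                f"<WordRank {w_rank:.2f}> <DepTreeDepth {tree_depth:.2f}> {complex_sent}")

At inference time, the same token format would be pre-pended to the complex sentence with user-chosen target values (e.g. <NbChars 0.80>) rather than oracle values computed from a reference simplification.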
3.2 NER Replacement

Using control tokens contributes to the overall performance of the model, but it also gives rise to added data sparsity: it divides the sentences of the train set into different ranges of the control tokens, which results in some control values having little to no examples and adds the task of learning and generalizing over the control token values for the model. Additionally, the model can learn to adhere to the control requirement while still failing to correctly simplify the sentence. E.g.:

Source: <NbChars 0.95> <LevSim 0.75> <WordRank 0.75> oxygen is a chemical element with symbol o and atomic number 8 .

Prediction: It has the chemical symbol o . It has the atomic number 8 .

Here, the proper noun "Oxygen" is replaced by the pronoun "it". Although the model follows the requirement of bringing down the word rank of the sentence and remains grammatically sound, this does not help with the simplification.

To address the issue of data sparsity as well as that of unwanted NE replacement, we propose NER-mapping the data before training, and replacing the NE-tokens back after generation. We make use of the OntoNotes NER tagger (Yu et al., 2020) in the Flair toolkit (Akbik et al., 2019) [...] significantly boost the vocabulary size of usable content words for the model.

Table 2: NER tagging of an input sentence

    Raw (Complex):  Sergio Pérez Mendoza ( born January 26 , 1990 in Guadalajara , Jalisco ) , also known as "Checo" Pérez , is a Mexican racing driver .
    NER Replaced:   person@1 ( born date@1 in gpe@1 ) , also known as " person@2 " , is a norp@1 racing driver .
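A minimal sketch of this replacement and restoration step is shown below. The Flair model identifier, the type@index placeholder scheme (suggested by Table 2), and the use of plain string substitution over pre-tokenized input are assumptions rather than details taken from the released code.

    from flair.data import Sentence
    from flair.models import SequenceTagger

    # Assumed identifier for Flair's OntoNotes English NER model (Yu et al., 2020 backbone).
    tagger = SequenceTagger.load("flair/ner-english-ontonotes-large")

    def ner_replace(text):
        # Replace each entity span with a type-indexed placeholder such as person@1,
        # and keep the mapping so the entities can be restored after generation.
        # Assumes the input is already tokenized (as in Table 2), so span.text
        # matches the raw string.
        sentence = Sentence(text)
        tagger.predict(sentence)
        counters, mapping, out = {}, {}, text
        for span in sentence.get_spans("ner"):
            label = span.get_label("ner").value.lower()  # label accessor may differ across Flair versions
            if span.text not in mapping:                 # repeated mentions reuse one placeholder
                counters[label] = counters.get(label, 0) + 1
                mapping[span.text] = f"{label}@{counters[label]}"
            out = out.replace(span.text, mapping[span.text])
        return out, mapping

    def ner_restore(generated, mapping):
        # Substitute the original entity strings back into the model output.
        for surface, placeholder in mapping.items():
            generated = generated.replace(placeholder, surface)
        return generated

On the Table 2 example, this produces placeholders along the lines of person@1, date@1, gpe@1, and norp@1, and ner_restore inverts the mapping on the generated simplification.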
4 Experimental Setup

4.1 Architecture

Our architecture is a Transformer model (Vaswani et al., 2017), and we make use of the Transformer Seq2Seq implementation from FairSeq (Ott et al., 2019). To understand the impact of each of the proposed methods, we train a total of four models:

• T: Vanilla Transformer (Vaswani et al., 2017), with control tokens, used as a baseline model.

• T+Pre: Transformer trained with FastText's pretrained embeddings (see the sketch after this list).

• T+NER: Transformer trained on NER
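As one illustration of the T+Pre variant above, the sketch below shows a common way to build an initial embedding matrix from pretrained FastText vectors for a fixed vocabulary. The file name, the handling of out-of-vocabulary words, and the random initialization are assumptions, not the authors' configuration.

    import numpy as np

    def load_fasttext_vectors(path):
        # Read a FastText ".vec" text file ("word v1 ... vd" per line, preceded by
        # a "<count> <dim>" header) into a {word: vector} dict.
        vectors = {}
        with open(path, encoding="utf-8") as f:
            _, dim = f.readline().split()
            for line in f:
                parts = line.rstrip().split(" ")
                vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
        return vectors, int(dim)

    def build_embedding_matrix(vocab, vectors, dim, seed=0):
        # Rows for in-vocabulary words come from FastText; everything else
        # (including control tokens and NER placeholders) gets a small Gaussian init.
        rng = np.random.default_rng(seed)
        matrix = rng.normal(0.0, 0.1, size=(len(vocab), dim)).astype(np.float32)
        for idx, word in enumerate(vocab):
            if word in vectors:
                matrix[idx] = vectors[word]
        return matrix

    # Hypothetical usage: "crawl-300d-2M.vec" is a public FastText release, and the
    # vocabulary would come from the dictionary built over the training data.
    # vectors, dim = load_fasttext_vectors("crawl-300d-2M.vec")
    # embedding_matrix = build_embedding_matrix(vocab, vectors, dim)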
