The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)

Semi-Supervised Text Simplification with Back-Translation and Asymmetric Denoising Autoencoders

Yanbin Zhao, Lu Chen, Zhi Chen, Kai Yu*
MoE Key Lab of Artificial Intelligence, SpeechLab, Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
{zhaoyb, chenlusz, zhenchi713, kai.yu}@sjtu.edu.cn

*Kai Yu is the corresponding author.
Copyright © 2020, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

Text simplification (TS) rephrases long sentences into simplified variants while preserving the inherent semantics. Traditional sequence-to-sequence models rely heavily on the quantity and quality of parallel sentences, which limits their applicability across languages and domains. This work investigates how to leverage large amounts of unpaired corpora for the TS task. We adopt the back-translation architecture from unsupervised neural machine translation (NMT), including denoising autoencoders for language modeling and automatic generation of parallel data by iterative back-translation. However, it is non-trivial to generate appropriate complex-simple pairs if we directly treat the sets of simple and complex corpora as two different languages, since the two types of sentences are quite similar and it is hard for the model to capture their distinguishing characteristics. To tackle this problem, we propose asymmetric denoising methods for sentences of different complexity. When modeling simple and complex sentences with autoencoders, we introduce different types of noise into the training process. This method significantly improves simplification performance. Our model can be trained in both an unsupervised and a semi-supervised manner. Automatic and human evaluations show that our unsupervised model outperforms previous systems, and that with limited supervision our model performs competitively with multiple state-of-the-art simplification systems.
Introduction

Text simplification reduces the complexity of a sentence in both lexical and structural aspects in order to increase its intelligibility. It benefits individuals with low language skills and has abundant usage scenarios in the education and journalism fields (De Belder and Moens 2010). A simplified version of a text is also easier to process for downstream tasks, such as parsing (Chandrasekar, Doran, and Srinivas 1996), semantic role labeling (Woodsend and Lapata 2011), and information extraction (Jonnalagadda and Gonzalez 2010).

Most prior works regard this task as a monolingual machine translation problem and use a sequence-to-sequence architecture to model the process (Nisioi et al. 2017; Zhang and Lapata 2017). These systems rely on large corpora containing pairs of complex and simplified sentences, which severely restricts their usage in different languages and their adaptation to downstream tasks in different domains. It is therefore essential to explore unsupervised or semi-supervised learning paradigms that can work effectively with unpaired data.

In this work, we adopt the back-translation framework (Sennrich, Haddow, and Birch 2016a) to perform unsupervised and semi-supervised text simplification. Back-translation converts the unsupervised task into a supervised one by generating sentence pairs on the fly. It has been successfully used in unsupervised neural machine translation (Artetxe et al. 2018; Lample et al. 2018b), semantic parsing (Cao et al. 2019), and natural language understanding (Zhao, Zhu, and Yu 2019). The denoising autoencoder (DAE) (Vincent et al. 2008) plays an essential part in the back-translation model: it performs language modeling and helps the system learn useful structures and features from the monolingual data. In the NMT task, the translation directions between different languages are equal, and the denoising autoencoders have a symmetric structure, which means different languages use the same types of noise (mainly word dropout and shuffle).

However, if we treat the sets of simple and complex sentences as two different languages, the translation processes are asymmetric: translation from simple to complex is an extension process that requires extra generation, while information distillation is needed during the inverse translation. Moreover, text simplification is a monolingual translation task; the inputs and outputs are quite similar, which makes it more difficult to capture the distinct features of complex and simple sentences. As a result, symmetric denoising autoencoders may not be very helpful for modeling sentences with diverse complexity, and they make it non-trivial to generate appropriate parallel data.

To tackle this problem, we propose asymmetric denoising autoencoders for sentences of different complexity. We analyze the effect of the denoising type on simplification performance and show that separate denoising methods are beneficial for the decoders to generate suitable sentences of the corresponding complexity. Besides, we set several criteria to evaluate the generated sentences and use policy gradient to optimize these metrics, as an additional means of improving the quality of the generated sentences. Our approach relies on two unpaired corpora, one of which is statistically simpler than the other. In summary, our contributions include:

• We adopt the back-translation framework to utilize large amounts of unpaired sentences for text simplification.
• We propose asymmetric denoising autoencoders for sentences with different complexity and analyze the corresponding effects.
• We develop methods to evaluate both simple and complex sentences derived from back-translation and use reinforcement learning algorithms to promote the quality of the back-translated sentences.

Model Overview

Our model consists of a shared encoder E and a pair of independent decoders: D_s for simple sentences and D_c for complex sentences. Denote the corresponding sentence spaces by S and C. The encoder and decoders are first pre-trained as asymmetric denoising autoencoders (see below) on the separate corpora. Next, the model goes through an iterative process. At each iteration, a simple sentence x ∈ S is translated to a relatively complicated one Ĉ(x) via the current model E and D_c. Similarly, a complex sentence y ∈ C is translated to a relatively simple version Ŝ(y) via E and D_s. The pairs (Ĉ(x), x) and (Ŝ(y), y) are automatically generated parallel sentences that can be used to train the model in a supervised manner with a cross-entropy loss. During this supervised training, the current model can also be regarded as a pair of translation policies. Let x̃ and ỹ denote the simple and complex sentences sampled from the current policies; corresponding rewards R_s and R_c are calculated according to their quality. The model parameters are updated with both the cross-entropy loss and the policy gradient.
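To make the iterative process concrete, the following minimal Python sketch shows how one back-translation round assembles synthetic training pairs. The helper names (back_translation_round, to_complex, to_simple) and the toy identity "models" are illustrative stand-ins for E with D_c and D_s, not the actual implementation.

```python
from typing import Callable, List, Tuple

def back_translation_round(
    simple_batch: List[str],
    complex_batch: List[str],
    to_complex: Callable[[str], str],  # stands in for E + D_c
    to_simple: Callable[[str], str],   # stands in for E + D_s
) -> Tuple[List[Tuple[str, str]], List[Tuple[str, str]]]:
    """Generate synthetic parallel pairs from two unpaired corpora."""
    # x in S is back-translated to C_hat(x); the pair (C_hat(x), x)
    # then supervises the complex -> simple direction.
    c2s_pairs = [(to_complex(x), x) for x in simple_batch]
    # y in C is back-translated to S_hat(y); the pair (S_hat(y), y)
    # then supervises the simple -> complex direction.
    s2c_pairs = [(to_simple(y), y) for y in complex_batch]
    return c2s_pairs, s2c_pairs

# Toy usage: identity functions stand in for the pre-trained model.
c2s, s2c = back_translation_round(
    ["the cat sat on the mat ."],
    ["the feline reposed upon the mat ."],
    to_complex=lambda s: s,
    to_simple=lambda s: s,
)
print(c2s)
print(s2c)
```

Each synthetic pair then drives a standard cross-entropy seq2seq update, as formalized in Eq. (1) below; the source side of every pair is synthetic while the target side is always a real monolingual sentence.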
Related Works

As a monolingual translation task, early text simplification systems were usually based on statistical machine translation, such as PBMT-R (Wubben, Van Den Bosch, and Krahmer 2012) and Hybrid (Narayan and Gardent 2014). Xu et al. (2016) achieved state-of-the-art performance by leveraging paraphrase rules extracted from bilingual texts. Recently, neural network models have been widely used in simplification systems. Nisioi et al. (2017) first applied the Seq2Seq architecture to model text simplification. Several extensions have also been proposed for this architecture, such as augmented memory (Vu et al. 2018) and multi-task learning (Guo, Pasunuru, and Bansal 2018). Furthermore, Zhang and Lapata (2017) proposed DRESS, a Seq2Seq model trained in a reinforcement learning framework in which sentences with high fluency, simplicity, and adequacy are rewarded during training. Zhao et al. (2018) utilized the Transformer (Vaswani et al. 2017) integrated with external knowledge and achieved state-of-the-art performance under automatic evaluation. Kriz et al. (2019) proposed a complexity-weighted loss and a reranking system to improve the simplicity of the generated sentences. All the systems above require large amounts of parallel data.

In terms of unsupervised simplification, several systems only perform lexical simplification (Narayan and Gardent 2016; Paetzold and Specia 2016) by replacing complicated words with simpler synonyms, which ignores other operations such as reordering and rephrasing. Surya et al. (2018) proposed an unsupervised method for neural models. They utilized adversarial training to enforce a similar at-

Back-Translation Framework

In the back-translation framework, the shared encoder aims to represent both simple and complex sentences in the same latent space, and the decoders need to decompose this representation into sentences of the corresponding type. We update the model by minimizing the cross-entropy loss:

$$\mathcal{L}_{ce} = \mathbb{E}_{x \sim S}\left[-\log P_{c \to s}\big(x \mid \hat{C}(x)\big)\right] + \mathbb{E}_{y \sim C}\left[-\log P_{s \to c}\big(y \mid \hat{S}(y)\big)\right] \tag{1}$$

where $P_{c \to s}$ and $P_{s \to c}$ denote the translation models from complex to simple and vice versa. The updated model tends to generate better synthetic sentence pairs for the next round of training. Through such iterations, the model and the back-translation process improve each other and finally lead to good performance.

Denoising

Lample et al. (2018a) showed that denoising strategies such as word dropout and shuffle have a critical impact on unsupervised NMT systems. We argue that these symmetric noises from NMT may not be very effective for the simplification task, so in this section we describe our asymmetric noises for the simple and complex corpora.
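For reference, the sketch below implements the two standard symmetric noises named above: word dropout and the local shuffle of Lample et al. (2018a). The per-corpus parameter values in the demo are hypothetical placeholders that only illustrate the idea of giving each corpus its own noise; they are not the asymmetric noise types proposed in the paper.

```python
import random

def word_dropout(tokens, p_drop=0.1, rng=random):
    """Drop each token independently with probability p_drop."""
    kept = [t for t in tokens if rng.random() >= p_drop]
    return kept or tokens[:1]  # never return an empty sentence

def local_shuffle(tokens, k=3, rng=random):
    """Permute tokens so that none moves more than k positions
    (the shuffle noise of Lample et al. 2018a)."""
    keys = [i + rng.uniform(0, k + 1) for i in range(len(tokens))]
    return [t for _, t in sorted(zip(keys, tokens), key=lambda p: p[0])]

def add_noise(tokens, p_drop, k, rng=random):
    """Compose dropout and shuffle into one corruption function."""
    return local_shuffle(word_dropout(tokens, p_drop, rng), k, rng)

rng = random.Random(0)
sent = "the feline reposed upon the mat".split()
# Placeholder illustration of asymmetry: each corpus could use its own
# noise strength (these numbers are not the authors' configuration).
print(add_noise(sent, p_drop=0.1, k=3, rng=rng))
print(add_noise(sent, p_drop=0.3, k=2, rng=rng))
```

The key construction i + uniform(0, k+1) guarantees that tokens more than k positions apart keep their relative order, so the corrupted sentence stays locally recognizable while the autoencoder must still learn to restore word order.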

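Finally, the overview above states that sentences sampled from the current policies receive rewards R_s and R_c, which are optimized with policy gradient alongside the cross-entropy loss. Below is a minimal REINFORCE-style sketch of that reward term in PyTorch; the reward value, baseline, and tensor shapes are placeholders for illustration, not the paper's actual evaluation criteria.

```python
import torch

def policy_gradient_loss(step_log_probs: torch.Tensor, reward: float,
                         baseline: float = 0.0) -> torch.Tensor:
    """step_log_probs: log-probabilities of sampled tokens, shape (T,).

    REINFORCE: loss = -(R - b) * sum_t log pi(w_t | w_<t, input),
    where a baseline b reduces variance without biasing the gradient.
    """
    return -(reward - baseline) * step_log_probs.sum()

# Toy usage: pretend a decoder sampled a 4-token sentence from a
# 10-word vocabulary (shapes are illustrative only).
logits = torch.randn(4, 10, requires_grad=True)         # (T, vocab)
samples = torch.distributions.Categorical(logits=logits).sample()
log_probs = torch.log_softmax(logits, dim=-1).gather(
    -1, samples.unsqueeze(-1)).squeeze(-1)              # (T,)
loss = policy_gradient_loss(log_probs, reward=0.8, baseline=0.5)
loss.backward()  # raises probability of samples whose reward beats b
```

In practice, a learned or running-average baseline keeps the gradient estimate stable; the constant used here is only for demonstration.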