Automatic Text Summarization Using Reinforcement Learning with Embedding Features

Gyoung Ho Lee, Kong Joo Lee
Information Retrieval & Knowledge Engineering Lab., Chungnam National University, Daejeon, Korea
[email protected], [email protected]

Abstract

An automatic text summarization system can automatically generate a short and brief summary that contains the main concepts of an original document. In this work, we explore the advantages of simple embedding features in a Reinforcement Learning approach to automatic text summarization tasks. In addition, we propose a novel deep learning network for estimating the Q-values used in Reinforcement Learning. We evaluate our model with ROUGE scores on DUC 2001, DUC 2002, Wikipedia, and ACL-ARC data. Evaluation results show that our model is competitive with previous models.

1 Introduction

In this work, we present extractive text summarization for a single document based on a Reinforcement Learning (RL) method. One of the advantages of the extractive approach is that a summary consists of linguistically correct sentences as long as the source document has a certain level of linguistic quality.

One of the most well-known solutions for extractive text summarization is maximal marginal relevance (MMR) (Goldstein et al., 2000). However, MMR cannot take into account the quality of the whole summary because of its greediness (Ryang and Abekawa, 2012). Another solution is to use optimization techniques such as integer linear programming (ILP) to infer the scores of sentences while considering the quality of the whole summary (McDonald, 2007). However, these methods have a very large time complexity, so they are not applicable to text summarization tasks.

A Reinforcement Learning method is an alternative approach for optimizing a score function in the extractive text summarization task. Reinforcement Learning learns how an agent in an environment should behave in order to receive the optimal reward in a given state (Sutton, 1998). The system learns the optimal policy that can choose the next action with the highest reward value in a given state. That is to say, the system can evaluate the quality of a partial summary and determine which sentence to insert into the summary to obtain the highest reward. It can produce a summary by inserting sentences one by one while considering the quality of the hypothetical summary. In this work, we propose an extractive text summarization model for a single document based on an RL method.

A few researchers have proposed RL approaches to automatic text summarization (Goldstein et al., 2000; Rioux et al., 2014; Henß et al., 2015). Previous studies mainly exploited hand-crafted complex features in RL-based automatic text summarization. However, choosing important features for a task, re-implementing the features for a new domain, and re-generating new features for a new application are very difficult and time-consuming jobs. The recent mainstream of NLP applications with deep learning is to reduce the burden of hand-crafted features. Embedding is one of the simplest deep learning techniques for building features that represent words, sentences, or documents. It has already shown state-of-the-art performance in many NLP applications.

The main contributions of our work are as follows: first, we explore the advantages of simple embedding features in an RL approach to automatic text summarization tasks. Content embedding vectors and position embedding vectors are the only features that our system adopts. Second, we propose a novel deep learning network for estimating the Q-values used in RL. This network is devised to consider the relevance of a candidate sentence for an entire document as well as the naturalness of the generated summary. We evaluate our model with ROUGE scores (Lin, 2004) and show that its performance is comparable to that of previous studies which rely on as many features as possible.

2 Related Work

As far as we know, Ryang and Abekawa (2012) were the first to apply RL to text summarization. The authors regard the extractive summarization task as a search problem. In their work, a state is a subset of sentences and actions are transitions from one state to the next. They only consider the final score of the whole summary as the reward and use TD(λ) as the RL framework. Rioux et al. (2014) extended this approach by using TD. They employed ROUGE as part of their reward function and used bi-grams instead of tf*idf as features. Henß and Mieskes (2015) introduced Q-learning to text summarization. They suggest RL-based features that describe a sentence in the context of the previously selected sentences and how adding this sentence changes the hypothetical summary. Our work extends previous work by using a DQN-based algorithm and embedding features.

3 Model for Automatic Text Summarization

In this work, we apply the Deep Q-Networks (DQN)-based model (Volodymyr et al., 2015) to automatic text summarization tasks. In the case of text summarization, a state denotes a summary, which can still be incomplete, and an action denotes the addition of a sentence to this summary. To use an RL method for text summarization, two parameters should be predefined. One is the length limitation of a summary. The other is the reward value for a partial summary. In this work, we use the same length limitation and reward function as Henß et al. (2015). The reward function is defined as:

    reward_t = cor(PS_{t+1}) - cor(PS_t)    (1)

In the above equation, cor measures the quality of the partial summary (state) by comparing it with the corresponding human reference summary. We use the ROUGE-2 score for the measurement.
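For illustration, the reward for selecting one sentence is the gain in ROUGE-2 achieved by the enlarged partial summary. The following minimal sketch assumes tokenized inputs and implements ROUGE-2 as plain bigram recall; the helper names and this simplified scoring are ours, not the paper's.

from collections import Counter

def bigrams(tokens):
    # Multiset of adjacent token pairs.
    return Counter(zip(tokens, tokens[1:]))

def rouge_2(summary_tokens, reference_tokens):
    # ROUGE-2 recall: fraction of reference bigrams covered by the summary.
    ref = bigrams(reference_tokens)
    cand = bigrams(summary_tokens)
    if not ref:
        return 0.0
    overlap = sum(min(cand[bg], n) for bg, n in ref.items())
    return overlap / sum(ref.values())

def reward(partial_summary, candidate_sentence, reference):
    # cor(PS_{t+1}) - cor(PS_t), both measured against the human reference.
    before = rouge_2(partial_summary, reference)
    after = rouge_2(partial_summary + candidate_sentence, reference)
    return after - before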
3.1 Q-Network

Q-learning models the value Q(s_t, a_t) of performing an action a_t in the current state s_t. We use a deep neural network model with a regression function to compute the Q-values. The model calculates a Q-value based on a partial summary (the current state) and a candidate sentence (the action). The output Q-value indicates the expected value that the agent can obtain when it selects the candidate sentence as a part of the summary. Figure 1 shows the Q-network model we propose in this work.

Figure 1: Architecture of our Q-network

The Q-network model consists of the three modules shown in Figure 1. The first module (upper left in the figure) represents the semantic relationship between the input document and the candidate sentence (A_t). The input document is provided as DocVec, in which the whole meaning of the document is embedded. The candidate sentence is represented as a sentence embedding vector ContVec. The vectors DocVec and ContVec are the inputs to this module. The output of the module is the vector DA_w (Equations (2)-(4)). For simplicity, the biases are omitted in the equations. The activation function of the output layer is the sigmoid function, so that each element of DA_w has a value between 0 and 1.

The second module (bottom right in the figure) represents the relationship between the partial summary and the candidate sentence. The module is constructed as an RNN (Recurrent Neural Network), in which the candidate sentence follows the given partial summary (PS) as a history. Each sentence is represented by two vectors which contain the content information (ContVec) and the position information (PosVec), respectively. We implement the RNN with GRU units; through the RNN, the partial summary and the candidate sentence are transformed into the final 50-dimensional state vector (StateVec).

The third module (upper center in the figure) takes DA_w and StateVec as input and calculates the Q-value as output. It combines the two vectors by an element-wise product and converts the result into a Q-value through a linear regression function. The output Q-value is the expected value that can be obtained when a new summary is generated by inserting the candidate sentence into the partial summary.
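As a concrete illustration, here is a minimal PyTorch sketch of such a three-module network. Because Equations (2)-(4) are not reproduced in this excerpt, the document/candidate module below (one dense layer per input, combined under a sigmoid) is an assumption rather than the paper's exact formulation; the 50-dimensional GRU state and the element-wise product fed into a linear regression follow the description above. Dimension values such as emb_dim and pos_dim are placeholders.

import torch
import torch.nn as nn

class QNetwork(nn.Module):
    def __init__(self, emb_dim=100, pos_dim=10, state_dim=50):
        super().__init__()
        # Module 1: document / candidate-sentence relationship -> DA_w (assumed form)
        self.doc_fc = nn.Linear(emb_dim, state_dim)
        self.cand_fc = nn.Linear(emb_dim, state_dim)
        # Module 2: GRU over (ContVec; PosVec) of the partial-summary sentences
        # followed by the candidate sentence, producing a 50-dim StateVec.
        self.rnn = nn.GRU(emb_dim + pos_dim, state_dim, batch_first=True)
        # Module 3: linear regression over the element-wise product.
        self.q_out = nn.Linear(state_dim, 1)

    def forward(self, doc_vec, cand_vec, summary_seq):
        # doc_vec, cand_vec: (batch, emb_dim)
        # summary_seq: (batch, n_sentences, emb_dim + pos_dim)
        da_w = torch.sigmoid(self.doc_fc(doc_vec) + self.cand_fc(cand_vec))
        _, h_n = self.rnn(summary_seq)        # final hidden state of the GRU
        state_vec = h_n[-1]                   # (batch, state_dim) = StateVec
        return self.q_out(da_w * state_vec)   # element-wise product -> Q-value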
3.2 Reinforcement Learning in Text Summarization

The agent of the text summarization system in this work has a sentence Selector and three memories. The role of the agent is to generate a summary for an input document by using the sentence Selector. The agent has the following three memories: 1) the D memory contains information about the sentences in the input document; 2) the PS memory contains information about the sentences in the current partial summary; 3) the C memory contains the candidate sentences that have not yet been selected.

Initial state                  Next state
D  = {s1, s2, ..., sn}         C_{t+1} = C_t - {a_t}
C0 = {s1, s2, ..., sn}         PS_{t+1} = PS_t + {a_t}
PS0 = {s0}                     S_{t+1} = {D, C_{t+1}, PS_{t+1}}
S0 = {D, C0, PS0}

Table 1: Definition of the initial state and the next state in our Q-learning algorithm

4 Experiments

4.1 Experimental data

In order to evaluate our method, we use the DUC 2001 and DUC 2002 datasets, the Anthology Reference Corpus (ACL-ARC) (Bird, 2008), and the WIKI data, which have been used in previous studies (Henß et al., 2015). The numbers of training and testing documents are shown in Table 2.

        DUC 2001   DUC 2002   ACL-ARC   WIKI
#-TR    299        -          5500      1936
#-TE    306        562        613       900

Table 2: Number of documents in each dataset (#-TR = number of documents in the training data, #-TE = number of documents in the testing data)

4.2 Features

In this work, a sentence is the unit of summarization.
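The excerpt ends at this point, before Section 4.2 specifies how the content and position embeddings are built. Purely as an illustrative assumption, ContVec could be the average of pretrained word vectors for the sentence and PosVec a one-hot encoding of the sentence's position in the document; the sketch below shows that assumed construction, not the paper's.

import numpy as np

def content_vector(sentence_tokens, word_vectors, dim=100):
    # Assumed ContVec: average of the word embeddings found in the sentence.
    vecs = [word_vectors[w] for w in sentence_tokens if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def position_vector(sent_index, num_buckets=10):
    # Assumed PosVec: one-hot encoding of the (capped) sentence position.
    vec = np.zeros(num_buckets)
    vec[min(sent_index, num_buckets - 1)] = 1.0
    return vec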
