Reading Selectively Via Binary Input Gated Recurrent Unit

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19)

Zhe Li1∗, Peisong Wang1,2, Hanqing Lu1 and Jian Cheng1,2
1National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
2Center for Excellence in Brain Science and Intelligence Technology
[email protected], [email protected], [email protected], [email protected]
∗Contact Author

Abstract

Recurrent Neural Networks (RNNs) have shown great promise in sequence modeling tasks. Gated Recurrent Unit (GRU) is one of the most widely used recurrent structures, as it strikes a good trade-off between performance and time spent. However, its practical implementation based on soft gates only partially achieves the goal of controlling information flow, and we can hardly explain what the network has learnt internally. Inspired by human reading, we introduce the binary input gated recurrent unit (BIGRU), a GRU-based model that uses a binary input gate instead of the reset gate in GRU. By doing so, our model can read selectively during inference. In our experiments, we show that BIGRU mainly ignores the conjunctions, adverbs and articles that do not make a big difference to document understanding, which helps us further understand how the network works. In addition, owing to reduced interference from redundant information, our model achieves better performance than the baseline GRU on all the testing tasks.

1 Introduction

Recurrent Neural Networks (RNNs) have become the most popular approach to machine learning tasks involving sequential data, such as machine translation [Bahdanau et al., 2015], document classification [Le and Mikolov, 2014], sentiment analysis [Socher et al., 2011], language recognition [Graves et al., 2013], and image generation [Villegas et al., 2017].

However, the vanilla RNN suffers from the gradient exploding/vanishing problem during training. To address this problem, long short-term memory (LSTM) [Hochreiter and Schmidhuber, 1997b] was proposed, introducing gate functions to control the information flow within a unit. LSTM has shown impressive results in several applications, but it has much higher computational complexity, which means longer inference time and more power consumption [Cheng et al., 2018a; Cheng et al., 2018b; Wang and Cheng, 2016; He and Cheng, 2018]. Therefore, K. Cho et al. proposed the Gated Recurrent Unit (GRU) [Cho et al., 2014]. Compared with LSTM, which has three gate functions, GRU has only two gates: a reset gate that determines how much previous information can be ignored, and an update gate that selects whether the hidden state is to be updated with the current information. GRU thus better balances performance against computational complexity, which makes it very widely used. For ease of optimization, GRU uses the sigmoid as its gate function, whose outputs are soft values between 0 and 1. However, the sigmoid gate function is easy to overfit [Zaremba et al., 2014], and the information flow behind this gate function is not graspable to humans. Both LSTM and GRU read all the text available to them. Yet human reading tells us that not all inputs are equally important. The fact that texts are usually written with redundancy therefore suggests the possibility of reading selectively. If the network can distinguish the important words from the decorative or connecting words in a document, we can better understand how the network works internally.

In this paper, we explore a new GRU-based RNN structure that reads only the important information and ignores the redundancies. The reset gate function in GRU is replaced by our proposed binary input gate function. With the hard binary gate, the model learns which words should be skimmed. One benefit of our model is that the binary input gate clearly shows the flow of information: as shown in our experiments, only significant information flows into the network. Another advantage is that when a model lies in a flat region of the loss surface, it is likely to generalize well [Hochreiter and Schmidhuber, 1997a], because any small perturbation to the model causes little fluctuation of the loss. As training a binary gate means pushing the value towards 0 and 1, which lie in the flat regions of the sigmoid function, small disturbances of the sigmoid output are less likely to change the binary gate output, so the model ought to be more robust.

However, training a binary gate function is also very challenging, because the binary gate is obviously not differentiable. A common method for optimizing functions involving discrete variables is the REINFORCE algorithm [Williams, 1992], but it relies on a reward signal and increases floating point operations [Bengio et al., 2013]. Another way is to use the estimator [Bengio et al., 2013; Courbariaux et al., 2016; Hu et al., 2018], which has been successfully applied in different works. By using the estimator, all the model parameters can be trained to minimize the target loss function with standard backpropagation and without defining any additional supervision or reward signal. We conduct document classification and language modeling tasks on 6 different datasets to verify our model, and our model achieves better performance. In summary, this paper makes the following contributions:

• We propose a novel interpretable RNN structure based on GRU: we replace the reset gate functions of GRU with binary input gate functions, and retain the update gate functions.

• Our model can read input sequences selectively: in our model, we can see clearly whether the current information is passed into the network or not. In the experimental analysis, we show that the gates in our learned model are meaningful and intuitively interpretable.

• The proposed method achieves better results than the baseline model: in the experiments, with the same amount of parameters and computational complexity, our model outperforms the baseline algorithms on all tasks.

The remainder of the paper is organized as follows. Section 2 introduces other works related to our research. Section 3 describes the binary input gated recurrent unit and how to train it. The experiments and analysis are presented in Section 4. Finally, we conclude our model and discuss future work in the last section.

2 Related Work

As all texts have emphasis and redundancy, making neural networks identify the importance of words has drawn much attention recently. LSTM-Jump [Yu et al., 2017] augments an LSTM cell with a classification layer that decides how many words to jump after reading a fixed number of words. Moreover, the maximum jump distance and the number of jumps allowed all need to be chosen in advance. The model is trained with the REINFORCE algorithm, where the reward is defined based on whether the model predicts correctly or not. These hyperparameters define a reduced set of subsequences that the model can sample, instead of allowing the network to learn any arbitrary sampling scheme. Skim-RNN [Seo et al., 2018] contains two RNNs, a "small" RNN and a "big" RNN. At every time step the model decides, based on the current input and the previous states, whether to use the small RNN or the big RNN. Another selective-reading model uses a skip agent and a jump agent: at every time step, the skip agent determines whether to skip the current word, and the jump agent selects a jump to the next word, the next sub-sentence separator symbol (,;), the next end-of-sentence symbol (.!?), or the end of the text (which is also an instance of end of sentence). G2-LSTM [Li et al., 2018] proposes a new way of training LSTM. It uses the developed Gumbel-Softmax trick to push the output of the sigmoid function towards 0 and 1, while the gate functions remain continuous. Although this network reads all the text, a word is considered less important if the value of its input gate function is closer to 0.

Overall, all the above state-of-the-art models can identify the importance of words and read text selectively. However, except for G2-LSTM, which reads all the text, the networks that read selectively have shown promising results only on specific temporal tasks: they targeted merely the document classification task or the question answering task on small datasets. These tasks are easier than language modeling and require less understanding of every word. So far, there are no selective-reading models that can match the performance of the vanilla RNN on the word-level language modeling task. In this paper, we present a gated recurrent unit with binary input gate functions that outperforms the baseline algorithm on both the document classification task and the word-level language modeling task.

3 Model Description

In this section, we present a new structure of GRU, which we call the Binary Input Gated Recurrent Unit (BIGRU), containing binary input gate functions and update gate functions. The binary input gate functions are in charge of controlling the information input, and the update gate functions are responsible for the prediction. Therefore, BIGRU has the ability to read selectively on both the document classification task and the word-level language modeling task.

3.1 Gated Recurrent Unit

A Recurrent Neural Network (RNN) takes an input sequence {x_1, x_2, ..., x_T} and generates a hidden state sequence {h_1, h_2, ..., h_T}. In recurrent neural networks, the hidden states in layer k are used as inputs to layer k + 1, and the hidden states in the last layer are used for prediction and decision making. The hidden state sequence is generated by iteratively applying a parametric state transition model S from time t = 1 to T:

h_t = S(h_{t-1}, x_t)
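To make the state transition S concrete, the following is a minimal sketch of the standard GRU transition described above, with a soft reset gate r_t and a soft update gate z_t. It is written in PyTorch purely for illustration; the exact parameterization (weight layout, bias handling, interpolation convention) is an assumption, not the paper's code.

```python
import torch
import torch.nn as nn

class GRUCellSketch(nn.Module):
    """Standard GRU transition h_t = S(h_{t-1}, x_t) with two soft sigmoid gates."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.reset_gate  = nn.Linear(input_size + hidden_size, hidden_size)
        self.update_gate = nn.Linear(input_size + hidden_size, hidden_size)
        self.candidate   = nn.Linear(input_size + hidden_size, hidden_size)

    def forward(self, x_t, h_prev):
        xh = torch.cat([x_t, h_prev], dim=-1)
        r_t = torch.sigmoid(self.reset_gate(xh))    # how much past information to expose
        z_t = torch.sigmoid(self.update_gate(xh))   # whether to update the hidden state
        h_tilde = torch.tanh(self.candidate(torch.cat([x_t, r_t * h_prev], dim=-1)))
        # Linear interpolation between the old state and the candidate state
        # (the roles of z_t and 1 - z_t vary by convention across references).
        return (1.0 - z_t) * h_prev + z_t * h_tilde
```

At inference time both r_t and z_t take real values in (0, 1), which is exactly why the information flow behind these soft gates is hard to interpret.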
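The paper replaces the reset gate with a binary input gate trained with an estimator [Bengio et al., 2013], so the whole model can still be trained with standard backpropagation. The sketch below illustrates one common choice, a straight-through estimator: the gate is a hard 0/1 value in the forward pass, while gradients flow through the soft sigmoid in the backward pass. The placement of the gate (multiplying the input in the candidate computation) and all names are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn as nn

def binary_gate(logits):
    """Straight-through binarization (assumed variant): the forward value is 0 or 1,
    while the backward pass uses the gradient of the soft sigmoid."""
    soft = torch.sigmoid(logits)
    hard = (soft > 0.5).float()
    return hard + soft - soft.detach()   # value == hard, gradient == d(soft)/d(logits)

class BIGRUCellSketch(nn.Module):
    """GRU cell with the reset gate replaced by a binary input gate i_t;
    the soft update gate z_t is retained, as described in the paper."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.input_gate  = nn.Linear(input_size + hidden_size, hidden_size)
        self.update_gate = nn.Linear(input_size + hidden_size, hidden_size)
        self.candidate   = nn.Linear(input_size + hidden_size, hidden_size)

    def forward(self, x_t, h_prev):
        xh = torch.cat([x_t, h_prev], dim=-1)
        i_t = binary_gate(self.input_gate(xh))       # 0: skim the current word, 1: read it
        z_t = torch.sigmoid(self.update_gate(xh))    # soft update gate, as in standard GRU
        h_tilde = torch.tanh(self.candidate(torch.cat([i_t * x_t, h_prev], dim=-1)))
        return (1.0 - z_t) * h_prev + z_t * h_tilde
```

Because the binarization acts only in the forward pass, training pushes the sigmoid outputs toward the flat 0/1 regions, so small perturbations of the soft value rarely flip the hard gate, which matches the robustness argument in the introduction.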
