Human Sentence Processing: Recurrence or Attention?

Danny Merkx and Stefan L. Frank
Radboud University, Nijmegen, The Netherlands
[email protected]  [email protected]

Abstract

Recurrent neural networks (RNNs) have long been an architecture of interest for computational models of human sentence processing. The recently introduced Transformer architecture outperforms RNNs on many natural language processing tasks, but little is known about its ability to model human language processing. We compare Transformer- and RNN-based language models' ability to account for measures of human reading effort. Our analysis shows Transformers to outperform RNNs in explaining self-paced reading times and neural activity during reading English sentences, challenging the widely held idea that human sentence processing involves recurrent and immediate processing and providing evidence for cue-based retrieval.

1 Introduction

Recurrent Neural Networks (RNNs) are widely used in psycholinguistics and Natural Language Processing (NLP). Psycholinguists have looked to RNNs as an architecture for modelling human sentence processing (for a recent review, see Frank et al., 2019). RNNs have been used to account for the time it takes humans to read the words of a text (Monsalve et al., 2012; Goodkind and Bicknell, 2018) and the size of the N400 event-related brain potential as measured by electroencephalography (EEG) during reading (Frank et al., 2015; Rabovsky et al., 2018; Brouwer et al., 2017; Schwartz and Mitchell, 2019).

Simple Recurrent Networks (SRNs; Elman, 1990) have difficulties capturing long-term patterns. Alternative RNN architectures have been proposed that address this issue by adding gating mechanisms that control the flow of information over time, allowing the networks to weigh old and new inputs and memorise or forget information when appropriate. The best known of these are the Long Short-Term Memory (LSTM; Hochreiter and Schmidhuber, 1997) and Gated Recurrent Unit (GRU; Cho et al., 2014) models.

In essence, all RNN types process sequential information by recurrence: each new input is processed and combined with the current hidden state. While gated RNNs achieved state-of-the-art results on NLP tasks such as translation, caption generation and speech recognition (Bahdanau et al., 2015; Xu et al., 2015; Zeyer et al., 2017; Michel and Neubig, 2018), a recent study comparing SRN, GRU and LSTM models' ability to predict human reading times and N400 amplitudes found no significant differences (Aurnhammer and Frank, 2019).

Unlike the LSTM and GRU, the recently introduced Transformer architecture is not simply an improved type of RNN because it does not use recurrence at all. A Transformer cell as originally proposed (Vaswani et al., 2017) consists of self-attention layers (Luong et al., 2015) followed by a linear feed-forward layer. In contrast to recurrent processing, self-attention layers are allowed to 'attend' to parts of the previous input directly.

Although the Transformer has achieved state-of-the-art results on several NLP tasks (Devlin et al., 2019; Hayashi et al., 2019; Karita et al., 2019), not much is known about how it fares as a model of human sentence processing. The success of RNNs in explaining behavioural and neurophysiological data suggests that something akin to recurrent processing is involved in human sentence processing. In contrast, the attention operations' direct access to past input, regardless of temporal distance, seems cognitively implausible.

We compare how accurately the word surprisal estimates by Transformer- and GRU-based language models (LMs) predict human processing effort as measured by self-paced reading, eye tracking and EEG. The same human reading data was used by Aurnhammer and Frank (2019) to compare RNN types. We believe the introduction of the Transformer merits a similar comparison because the differences between Transformers and RNNs are more fundamental than those among RNN types. All code used for the training of the neural networks and the analysis is available at https://github.com/DannyMerkx/next_word_prediction

2 Background

2.1 Human Sentence Processing

Why are some words more difficult to process than others? It has long been known that more predictable words are generally read faster and are more likely to be skipped than less predictable words (Ehrlich and Rayner, 1981). Predictability has been formalised as surprisal, which can be derived from LMs. Neural network LMs are trained to predict the next word given all previous words in the sequence. After training, the LM can assign a probability to a word: it has an expectation of a word w_t at position t given the preceding words w_1, ..., w_{t-1}. The word's surprisal then equals -log P(w_t | w_1, ..., w_{t-1}).
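To make the surprisal formula concrete, here is a minimal sketch (not part of the paper) of how per-word surprisal can be read off an LM's output scores in PyTorch. The function name, the random logits standing in for real model outputs, and the toy vocabulary size are illustrative assumptions; surprisal is computed in nats here.

```python
import torch
import torch.nn.functional as F

def word_surprisal(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Surprisal -log P(w_t | w_1, ..., w_{t-1}) for each target word.

    logits:  (seq_len, vocab_size) unnormalised LM scores, where row t is
             assumed to be produced from the words preceding position t.
    targets: (seq_len,) indices of the words that actually occurred.
    """
    log_probs = F.log_softmax(logits, dim=-1)               # log P over the vocabulary
    return -log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)

# Toy example: a 5-word sentence over a 10-word vocabulary.
logits = torch.randn(5, 10)                  # stand-in for real LM outputs
targets = torch.tensor([3, 1, 7, 2, 9])
print(word_surprisal(logits, targets))       # one surprisal value per word
```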
Hale (2001) and Levy (2008) related surprisal to human word processing effort in sentence comprehension. In psycholinguistics, reading times are commonly taken as a measure of word processing difficulty, and the positive correlation between reading time and surprisal has been firmly established (Mitchell et al., 2010; Monsalve et al., 2012; Smith and Levy, 2013). The N400, a brain potential peaking around 400 ms after stimulus onset and associated with semantic incongruity (Kutas and Hillyard, 1980), has been shown to correlate with word surprisal in both EEG and MEG studies (Frank et al., 2015; Wehbe et al., 2014).

In this paper, we compare RNN- and Transformer-based LMs on their ability to predict reading time and N400 amplitude. Likewise, Aurnhammer and Frank (2019) compared SRNs, LSTMs and GRUs on human reading data from three psycholinguistic experiments. Despite the GRU and LSTM generally outperforming the SRN on NLP tasks, they found no difference in how well the models' surprisal predicted self-paced reading, eye-tracking and EEG data.

2.2 Comparing RNN and Transformer

According to Levy (2008), surprisal acts as a 'causal bottleneck' in the comprehension process, which implies that predictions of human processing difficulty depend on the model architecture only through the estimated word probabilities. Here we briefly highlight the difference in how RNNs and Transformers process sequential information. The activation flow through the networks is represented in Figure 1. (Note that the figure only shows how activation is propagated through time and across layers, not how specific architectures compute the hidden states; see Elman, 1990; Hochreiter and Schmidhuber, 1997; Cho et al., 2014; and Vaswani et al., 2017 for specifics on the SRN, LSTM, GRU and Transformer, respectively.)

[Figure 1: Comparison of sequential information flow through the Transformer and RNN, trained on next-word prediction.]

In an RNN, incoming information is immediately processed and represented as a hidden state. The next token in the sequence is again immediately processed and combined with the previous hidden state to form a new hidden state. Across layers, each time-step only sees the corresponding hidden state from the previous layer in addition to the hidden state of the previous time-step, so processing is immediate and incremental. Information from previous time-steps is encoded in the hidden state, which is limited in how much it can encode, so decay of previous time-steps is implicit and difficult to avoid. In contrast, the Transformer's attention layer allows each input to directly receive information from all previous time-steps. (Language modelling is trivial if the model also receives information from future time-steps, as is commonly allowed in Transformers. Our Transformer is thus uni-directional, which is achieved by applying a simple mask to the input.) This basically unlimited memory is a major conceptual difference with RNNs. Processing is not incremental over time: processing of word w_t does not depend on hidden states H_1 through H_{t-1} but on the unprocessed inputs w_1 through w_{t-1}. Consequently, the Transformer cannot use implicit order information; rather, explicit order information is added to the input.
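As a sketch of the masking mentioned in the parenthetical note above, the following code builds a causal mask and applies it to PyTorch's built-in Transformer encoder so that each position can only attend to itself and earlier positions. The layer sizes, number of layers and sequence length are placeholders, not the configuration used in the paper.

```python
import torch
import torch.nn as nn

def causal_mask(seq_len: int) -> torch.Tensor:
    """Additive attention mask: position t may attend to positions <= t only.

    Future positions are set to -inf so their attention weights become zero
    after the softmax; allowed positions are left at zero.
    """
    mask = torch.full((seq_len, seq_len), float('-inf'))
    return torch.triu(mask, diagonal=1)      # -inf above the diagonal, 0 on/below it

# Illustrative uni-directional Transformer encoder (sizes are placeholders).
layer = nn.TransformerEncoderLayer(d_model=400, nhead=8, dim_feedforward=1024)
encoder = nn.TransformerEncoder(layer, num_layers=2)

x = torch.randn(12, 1, 400)                  # (seq_len, batch, d_model)
out = encoder(x, mask=causal_mask(12))       # each position only sees its past
```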
However, a uni-directional Transformer can also use implicit order information as long as it has multiple layers. Consider H_{1,3} in the first layer, which is based on w_1, w_2 and w_3. Hidden state H_{1,3} does not depend on the order of the previous inputs (any order will result in the same hidden state). However, H_{2,3} depends on H_{1,1}, H_{1,2} and H_{1,3}. If the order of the inputs w_1, w_2, w_3 is different, H_{1,3} will be the same hidden state but H_{1,1} and H_{1,2} will not, resulting in a different hidden state at H_{2,3}.

Unlike Transformers, RNNs are inherently sequential, making them seemingly more plausible as a cognitive model. Christiansen and Chater (2016) argue for a 'now-or-never' bottleneck in language processing; incoming inputs need to be rapidly recoded and passed on for further processing to prevent interference from the rapidly incoming stream.

3.1 Language Model Architectures

We first trained a GRU model using the same architecture as Aurnhammer and Frank (2019): an embedding layer with 400 dimensions per word, a 500-unit GRU layer, followed by a 400-unit linear layer with a tanh activation function, and finally an output layer with log-softmax activation function. All LMs used in this experiment use randomly initialised (i.e., not pre-trained) embedding layers.

We implement the Transformer in PyTorch following Vaswani et al. (2017). To minimise the differences between the LMs, we picked parameters for the Transformer such that the total number of weights is as close as possible to the GRU model.
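For concreteness, below is a minimal PyTorch sketch that follows the GRU layer specification described above (400-dimensional embeddings, a 500-unit GRU, a 400-unit tanh layer and a log-softmax output). The vocabulary size and the toy input are assumptions for illustration only; the authors' actual implementation is available in the repository linked in the introduction.

```python
import torch
import torch.nn as nn

class GRULanguageModel(nn.Module):
    """Sketch of the GRU LM described above; vocab_size is a placeholder."""

    def __init__(self, vocab_size: int, emb_dim: int = 400,
                 hidden_dim: int = 500, linear_dim: int = 400):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)        # randomly initialised, not pre-trained
        self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True)  # 500-unit recurrent layer
        self.linear = nn.Linear(hidden_dim, linear_dim)           # 400-unit layer with tanh activation
        self.out = nn.Linear(linear_dim, vocab_size)              # scores for every word in the vocabulary
        self.log_softmax = nn.LogSoftmax(dim=-1)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len) word indices; returns log P(next word) per position.
        emb = self.embedding(tokens)
        hidden, _ = self.gru(emb)
        return self.log_softmax(self.out(torch.tanh(self.linear(hidden))))

# Toy usage with an assumed vocabulary of 10,000 words.
model = GRULanguageModel(vocab_size=10_000)
log_probs = model(torch.randint(0, 10_000, (2, 12)))   # (batch=2, seq_len=12, vocab_size)
```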
