Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17)

SWIM: A Simple Word Interaction Model for Implicit Discourse Relation Recognition∗

Wenqiang Lei1, Xuancong Wang2, Meichun Liu3, Ilija Ilievski1, Xiangnan He1, Min-Yen Kan1
1National University of Singapore, 2Institute for Infocomm Research, 3City University of Hong Kong
{wenqiang, xiangnan, [email protected], [email protected], [email protected], [email protected]
∗ Wenqiang Lei and Xuancong Wang are the contact authors.

Abstract

Capturing the semantic interaction of pairs of words across arguments and proper argument representation are both crucial issues in implicit discourse relation recognition. The current state of the art represents arguments as distributional vectors that are computed via bi-directional Long Short-Term Memory networks (BiLSTMs), known to have significant model complexity. In contrast, we demonstrate that word-weighted averaging can encode argument representation, which can be incorporated with word pair information efficiently. By saving an order of magnitude in parameters and eschewing the recurrent structure, our proposed model achieves equivalent performance, but trains seven times faster.

Ex (1) You are so fortunate. The hurricane came five hours after you left.
Ex (2) In 1986, the number of cats was down to 1000. In 2002, it went up to 2000.
Ex (3) I never gamble too far. In other words, I quit after one try.
Ex (4) I was sleeping when he entered.

Figure 1: Toy examples of each of the four Level-1 discourse relations annotated in the PDTB formalism. (1) and (2) are implicit relations; (3) and (4) are explicit. Arg1 is italicized and Arg2 is bolded, as per convention.

1 Introduction

Sentences alone do not serve to form coherent discourse. Logical relations, both inter- and intra-sententially, are needed for a coherent text. Such relations are termed discourse relations. Automatically recognizing discourse relations is useful for downstream applications such as machine translation and summarization.

Discourse relations can be overtly signaled by occurrences of explicit discourse connectives such as Indeed and After that (cf. Ex. (3) & (4)). In contrast, when the context is clear, such overt signals can be omitted, leading to discourse relations that are said to be implicitly signaled (not marked by a lexical connective in the text; cf. Ex. (1) & (2)). The lack of any overt signal makes implicit discourse relations much more challenging to recognize.

This explicit/implicit distinction is adopted by the Penn Discourse Treebank (PDTB, version 2.0) [Prasad et al., 2008]. We adopted the PDTB for this study due to its large size when compared against other discourse corpora such as the Rhetorical Structure Theory Treebank (RST) [Carlson et al., 2002]. While the PDTB has a hierarchical annotation scheme, currently most studies (e.g. [Chandrasekaran et al., 2017]) – inclusive of this one – restrict their use to the topmost, Level-1 categories: Contingency (Ex. (1)), Comparison (Ex. (2)), Expansion (Ex. (3)) and Temporal (Ex. (4)). The two text spans where the discourse relation holds are called arguments, named Arg1 and Arg2 respectively. Arg2 is defined as the argument that syntactically houses the (explicit or implicit) discourse connective.

Modeling word pairs has been shown to be useful for implicit discourse relation recognition in many studies [Marcu and Echihabi, 2002a; Rutherford and Xue, 2014; Chen et al., 2016]. This is because semantic interactions exist between the two arguments. These can be realised in many forms, of which word pairs are arguably the simplest. For example, the interaction between up and down is most likely to signal a Comparison relation, as in Ex. (2). Traditional methods use word pairs [Marcu and Echihabi, 2002a], or variants like Brown Clustering pairs [Rutherford and Xue, 2014], as features for supervised learning.
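As a toy illustration of this feature, the sketch below counts cross-argument word pairs for Ex. (2); each (Arg1 word, Arg2 word) co-occurrence becomes a feature for a supervised classifier. The function name and whitespace tokenization are illustrative choices of this sketch, not taken from the cited papers.

```python
from collections import Counter
from itertools import product

def word_pair_features(arg1_tokens, arg2_tokens):
    """Count each (Arg1 word, Arg2 word) pair: one word drawn from each argument."""
    return Counter(product(arg1_tokens, arg2_tokens))

# Ex. (2): the (down, up) pair is the kind of cue that suggests a Comparison relation.
arg1 = "in 1986 the number of cats was down to 1000".split()
arg2 = "in 2002 it went up to 2000".split()
features = word_pair_features(arg1, arg2)
print(features[("down", "up")])   # 1
```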
In addition, argument representation – or how arguments are modeled as a whole – is also crucial for correct interpretation. Taking the fortunate–hurricane pair in Ex. (1), one might construe a Comparison relation due to the word pair's contrasting sentiment polarity. However, it is understood as a Contingency relation when the entire context of both arguments is taken into account.

To address the argument representation challenge, recent works have leveraged the powerful semantic representability of neural network models. For example, Gated Relevance Networks (GRN) [Chen et al., 2016] and Neural Networks with Multi-Level Attention (NNMA) [Liu and Li, 2016] apply a bi-directional Long Short-Term Memory network (BiLSTM) [Hochreiter and Schmidhuber, 1997; Schuster and Paliwal, 1997] to represent each argument. Both models achieve state-of-the-art F1 scores without employing handcrafted features. However, BiLSTMs inevitably introduce many parameters. The large number of parameters slows the training process and is prone to overfitting. Especially in the context of the PDTB, a relatively small dataset, a lightweight representation is a viable way to simplify the model.

Motivated by this observation, we propose a new model that integrates the modeling of both word pair interaction and argument representation, without the use of BiLSTMs. Our model, termed the Simple Word Interaction Model (SWIM), achieves a comparable F1 score to the state of the art and runs seven times faster by eschewing the use of BiLSTMs.

[Figure 2: The components of the SWIM, GRN and BiLSTM + SWIM models. "WA" denotes "weighted average". Panels: (a) SWIM; (b) GRN; (c) BiLSTM + SWIM.]

2 Related Work

Supervised learning approaches to implicit discourse relation recognition are the common paradigm in prior work. Surface features for the task include word pairs [Marcu and Echihabi, 2002b], sentiment polarity scores, General Inquirer tags [Pitler et al., 2009], and parser production rules [Lin et al., 2009]. Among all these models, the naïve feature of word pairs [Marcu and Echihabi, 2002a; Rutherford and Xue, 2014] – which is the co-occurrence frequency of a pair of words, one drawn from each of the two arguments – has proven to be extremely efficient.

Another trend is to employ semi-supervised methods. This is due to the limited amount of annotated data and the relative abundance of weakly-labeled data (i.e., explicitly-signaled instances). For example, Lan et al. [2013] explored multitask learning; Hernault et al. [2010] applied feature vector extension; and Rutherford and Xue [2015] selectively added some explicit instances as implicit training data, according to their connectives.

The recent wave of deep learning approaches to NLP problems leverages word embeddings pre-trained on large corpora to achieve significant gains on tasks involving complex semantics. While many architectural designs have been explored (e.g. [Zhang et al., 2016]), the state-of-the-art deep learning methods on our task – the GRN (discussed in detail later) and NNMA – are fundamentally BiLSTM based, involving many parameters, which leads to inefficiency in training and testing.

3 Simple Word Interaction Model (SWIM)

SWIM models both the fine-grained word pair interaction and the coarse-grained argument representation. We first introduce the model here and detail its implementation in the following sections. To capture word pair interaction, we calculate an interaction score for each word pair that measures the importance of the interaction between its component words. To capture argument representation, we compute a weighted average of each argument's word embeddings; these representations are then passed to a final multilayer perceptron (MLP) layer to determine the final discourse relation (cf. Figure 2(a)).

3.1 Word Interaction Score

Following current best practices, SWIM's word interaction score captures both linear and quadratic relations between the two words' embeddings. Formally, let $M$ and $N$ denote the lengths of Argument 1 (hereafter, Arg1) and Argument 2 (Arg2). Further, let $x_i$, $y_j$ denote the pre-trained (row) word embeddings of the $i$th word of Arg1 and the $j$th word of Arg2, respectively. Then for each pair of words $x_i$ and $y_j$, SWIM calculates an interaction score as in Eq. (1):

$s_{ij} = x_i A y_j^{T} + B[x_i; y_j] + c_{ij}$    (1)

where $A \in \mathbb{R}^{d \times d}$, $B \in \mathbb{R}^{1 \times 2d}$ and $c_{ij} \in \mathbb{R}$ are trainable parameters, and $[x_i; y_j]$ is the concatenation of the two word embeddings. $x_i A y_j^{T}$ models the quadratic relation between the two word embeddings, while $B[x_i; y_j]$ captures linear relationships. $c_{ij}$ is the bias term for the final interaction. During training, these parameters ($A$, $B$ and $c_{ij}$) model and encode word pair semantics, assigning a high interaction score when a pair correlates well with a particular discourse relation class. Similar approaches have been applied in many fields, e.g. recommendation systems [He et al., 2017].

SWIM calculates the interaction matrix $S$ by computing $s_{ij}$ for each possible ($i$th) word of Arg1 and ($j$th) word of Arg2.
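To make Eq. (1) concrete, here is a minimal NumPy sketch of the interaction score and the interaction matrix $S$. It assumes $d$-dimensional row-vector embeddings and treats the bias $c_{ij}$ as an entry of a per-pair table C, which is one reading of the paper's notation; all names and the random stand-in values are illustrative, not the authors' implementation.

```python
import numpy as np

def interaction_score(x_i, y_j, A, B, c_ij):
    """Eq. (1): quadratic term x_i A y_j^T, plus linear term B[x_i; y_j], plus bias c_ij."""
    quadratic = x_i @ A @ y_j                   # (d,) @ (d, d) @ (d,) -> scalar
    linear = B @ np.concatenate([x_i, y_j])     # (2d,) . (2d,) -> scalar
    return quadratic + linear + c_ij

def interaction_matrix(X, Y, A, B, C):
    """Interaction matrix S: s_ij for the i-th word of Arg1 (rows of X) and j-th word of Arg2 (rows of Y)."""
    M, N = X.shape[0], Y.shape[0]
    S = np.empty((M, N))
    for i in range(M):
        for j in range(N):
            S[i, j] = interaction_score(X[i], Y[j], A, B, C[i, j])
    return S

# Toy usage with random stand-ins for pre-trained embeddings and trainable parameters.
d, M, N = 50, 7, 9
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(M, d)), rng.normal(size=(N, d))   # Arg1 / Arg2 embeddings
A, B, C = rng.normal(size=(d, d)), rng.normal(size=2 * d), rng.normal(size=(M, N))
S = interaction_matrix(X, Y, A, B, C)                       # shape (M, N)
```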
3.2 Argument Representation

Both (Bi)LSTMs and embedding averaging are valid methods for representing text sequences, inclusive of discourse arguments and sentences. Recent studies suggest that LSTMs perform well when sequential order is important [Iyyer et al., 2015]. However, this is less the case in argument representation, where content plays a larger role than ordering – e.g., in Exs. (1) & (2), the discourse relations do not change even when the words are reordered. Simpler methods, such as word averaging, may be sufficient and effective, as suggested by Wieting et al. [2016], who concluded that "word averaging models perform well for [the related tasks of] sentence similarity and entailment, outperforming LSTMs." Hence, SWIM adopts embedding averaging.

However, embedding averaging alone is insufficient – each argument's words are actually understood in the context of the opposing argument. For example, on its own down in
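The excerpt cuts off before the weighting scheme is spelled out. Purely as an illustrative sketch of the weighted-averaging idea (the abstract's "word-weighted averaging"), the snippet below derives each word's weight from its softmax-normalised total interaction with the opposing argument; this particular weighting is an assumption of the sketch, not necessarily the paper's exact formulation.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def argument_representations(X, Y, S):
    """Weighted-average argument vectors from word embeddings and the interaction matrix S.

    Assumption for illustration only: each word's weight is its softmax-normalised
    total interaction score with the words of the opposing argument.
    """
    w1 = softmax(S.sum(axis=1))    # one weight per Arg1 word, length M
    w2 = softmax(S.sum(axis=0))    # one weight per Arg2 word, length N
    arg1_vec = w1 @ X              # (M,) @ (M, d) -> (d,)
    arg2_vec = w2 @ Y              # (N,) @ (N, d) -> (d,)
    return arg1_vec, arg2_vec
```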
