Discourse Relation Sense Classification with Two-Step Classifiers


Discourse Relation Sense Classification with Two-Step Classifiers

Yusuke Kido, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan ([email protected])
Akiko Aizawa, National Institute of Informatics, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo, Japan ([email protected])

Abstract

Discourse Relation Sense Classification is the classification task of assigning a sense to discourse relations, and is a part of the series of tasks in discourse parsing. This paper analyzes the characteristics of the data we work with and describes the system we submitted to the CoNLL-2016 Shared Task. Our system uses two sets of two-step classifiers for Explicit and AltLex relations and for Implicit and EntRel relations, respectively. Despite the simplicity of the implementation, it achieves competitive performance using minimalistic features. The submitted version of our system ranked 8th with an overall F1 score of 0.5188, and achieved the best performance for Explicit relations on the test dataset, with an F1 score of 0.9022.

1 Introduction

In the CoNLL-2015 Shared Task on Shallow Discourse Parsing (Xue et al., 2015), all the participants adopted some variation of the pipeline architecture proposed by Lin et al. (2014). Among the components of this architecture, the main challenges are exact argument extraction and Non-Explicit sense classification (Lin et al., 2014).

Argument extraction is the task of identifying the two argument spans of a given discourse relation. Although the reported scores were relatively low for these components, this is partially because of the "quite harsh" evaluation¹, which led to the introduction of a new evaluation criterion based on partial argument matching in the CoNLL-2016 Shared Task. The sense classification components, on the other hand, which assign a sense to each discourse relation, continue to perform poorly. In particular, Non-Explicit sense classification is a difficult task, and even the best system achieved an F1 score of only 0.42 given the gold standard argument pairs without error propagation (Wang and Lan, 2015).

In response to this situation, Discourse Relation Sense Classification has become a separate task in the CoNLL-2016 Shared Task (Xue et al., 2016). In this task, participants implement a system that takes gold standard argument pairs and assigns a sense to each of them. To tackle this task, we first analyzed the characteristics of the discourse relation data and then implemented a classification system based on that analysis. One of the distinctive points of our system is that, compared to existing systems, it uses a smaller number of features, which keeps the source code short and clear and the training time fast. Its performance is nonetheless competitive, and its potential for improvement is also promising owing to the short program.

This paper aims to reorganize the ideas about what this task actually involves and to show the future direction for improvement. It is organized as follows: Section 2 presents the data analysis; Section 3 describes the implementation of the system we submitted; Sections 4 and 5 provide the experimental results and the conclusion.

¹ CoNLL 2016 Shared Task Official Blog, http://conll16st.blogspot.com/2016/04/partial-scoring-and-other-evaluation.html

Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 129–135, Berlin, Germany, August 7-12, 2016. © 2016 Association for Computational Linguistics

2 Data Analysis

There are four types of discourse relations, i.e., Explicit, Implicit, AltLex, and EntRel. In the official scorer, these discourse relations are divided into two groups, namely Explicit and Non-Explicit relations, and the two groups are evaluated separately. AltLex relations are classified as Non-Explicit relations, but they share a characteristic with Explicit relations in that they contain words that explicitly serve as connective words in the text. These connective words are one of the most important features in sense classification, as explained later; therefore, in this analysis we divide the relation types into (i) Explicit and AltLex and (ii) Implicit and EntRel. Throughout this paper, we do not distinguish between Explicit connective words and the words that work as connectives in AltLex relations; both are simply referred to as connectives.

2.1 Explicit and AltLex Discourse Relations

In the sense classification of Explicit and AltLex relations, connective words serve as important features. Figure 1 shows the distribution of senses per connective word over the Explicit relations. For example, 91.8% of relations with the connective word "and" are labeled as Expansion.Conjunction, and 5.8% as Contingency.Cause.Result. As can be seen, each kind of connective word is mostly covered by only a few senses. Some words such as "also" and "if" have more than 98.8% coverage by a single sense.

[Figure 1: Distribution of the sense assigned to each connective word. All explicit relations with the ten most frequent connective words are extracted from the official training data for the CoNLL-2016 Shared Task. The dominant senses recoverable from the figure are: and: Expansion.Conjunction 91.8%; but: Comparison.Contrast 76.5%, Comparison.Concession 20.0%; also: Expansion.Conjunction 99.7%; when: Temporal.Synchrony 50.8%, Temporal.Asynchronous.Succession 19.8%, Contingency.Condition 17.7%; if: Contingency.Condition 98.8%; while: Comparison.Contrast 52.0%, Temporal.Synchrony 26.9%, Comparison.Concession 11.4%, Expansion.Conjunction 9.6%; as: Temporal.Synchrony 61.6%, Contingency.Cause.Reason 35.4%; because: Contingency.Cause.Reason 100.0%; after: Temporal.Asynchronous.Succession 90.3%, Contingency.Cause.Reason 9.6%; however: Comparison.Contrast 77.6%, Comparison.Concession 21.4%, Expansion.Conjunction 0.8%.]

According to this observation, it is easy to build a reasonably accurate sense classifier simply by taking connective words as a feature. For example, one obvious method is a majority classifier that assigns the most frequent sense among the training relations with the same connective word. Figure 2 shows the per-sense accuracy of such a classifier on the training dataset. The method is rather simple, but it achieves more than 80% accuracy for most of the senses.

One exception is Comparison.Concession, which had only a 17.4% accuracy. This sense is derived from Comparison.Concession and Comparison.Pragmatic concession in the original PDTB, and applies "when the connective indicates that one of the arguments describes a situation A which causes C, while the other asserts (or implies) ¬C" (Prasad et al., 2007). Discourse relations with connective words such as "although", "but", and "however" are assigned this sense. In the evaluation using the development data, the system assigned Comparison.Contrast to most discourse relations labeled as Comparison.Concession in the golden data. Table 1 shows the senses the system assigned. For example, some of the discourse relations with the connective word "while" are labeled as Comparison.Concession in the golden data, but the system assigned them Comparison.Contrast.

Table 1: System output for discourse relations that are labeled as Comparison.Concession in the golden data. The left and right columns show the connective words and the sense assigned by the system, respectively.

Connective      Assigned Sense
while           Comparison.Contrast
even though     Comparison.Concession
still           Comparison.Contrast
nevertheless    Comparison.Contrast
but             Comparison.Contrast
yet             Comparison.Contrast
though          Comparison.Contrast
nonetheless     Comparison.Concession
even if         Contingency.Condition
although        Comparison.Contrast

According to the annotation manual, Contrast and Concession differ in that only Concession has directionality in the interpretation of the arguments. Distinguishing these two senses is, however, ambiguous and difficult, even for human annotators.

2.2 Implicit and EntRel Discourse Relations

By definition, Implicit and EntRel relations have no connective words in the text, which complicates the sense classification task considerably. Other researchers overcame this problem by applying machine-learning techniques such as a Naive Bayes classifier (Wang and Lan, 2015) or AdaBoost (Stepanov et al., 2015), using various features including those obtained from parses of the argument texts.

As a baseline, we first implemented a support vector machine (SVM) classifier taking a bag-of-words of the tokens in the argument texts as features. In the evaluation, this classifier was found to assign EntRel to a large part of the input data. This trend is particularly noticeable for relatively infrequent senses. The problem is partially attributable to the unbalanced data: in fact, there are more EntRel instances included in the training data than …
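The connective-majority baseline described in Section 2.1 can be sketched in a few lines. This is a minimal illustration in our own naming, not the authors' code: the function names and the toy (connective, sense) training pairs are hypothetical, and a real run would train on the shared task's relation data.

```python
from collections import Counter, defaultdict

def train_majority_classifier(relations):
    """For each connective, count the senses it co-occurs with in training
    and keep only the most frequent one."""
    counts = defaultdict(Counter)
    for connective, sense in relations:
        counts[connective.lower()][sense] += 1
    return {conn: senses.most_common(1)[0][0] for conn, senses in counts.items()}

def classify(model, connective, default="Expansion.Conjunction"):
    """Assign the majority sense for the connective; unseen connectives
    fall back to a default sense."""
    return model.get(connective.lower(), default)

# Hypothetical training pairs standing in for the CoNLL-2016 training relations.
train = [
    ("and", "Expansion.Conjunction"),
    ("and", "Expansion.Conjunction"),
    ("and", "Contingency.Cause.Result"),
    ("if", "Contingency.Condition"),
    ("although", "Comparison.Contrast"),
]
model = train_majority_classifier(train)
print(classify(model, "and"))       # Expansion.Conjunction
print(classify(model, "although"))  # Comparison.Contrast
```

As the paper notes, a classifier of exactly this shape already exceeds 80% accuracy for most senses on the training data, with Comparison.Concession as the notable exception.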
Recommended publications
  • Implicitness of Discourse Relations
    Implicitness of Discourse Relations. Fatemeh Torabi Asr and Vera Demberg, Cluster of Excellence Multimodal Computing and Interaction (MMCI), Saarland University, Campus C7.4, Saarbrücken, Germany. [email protected], [email protected]

    Abstract: The annotations of explicit and implicit discourse connectives in the Penn Discourse Treebank make it possible to investigate on a large scale how different types of discourse relations are expressed. Assuming an account of the Uniform Information Density hypothesis, we expect that discourse relations should be expressed explicitly with a discourse connector when they are unexpected, but may be implicit when the discourse relation can be anticipated. We investigate whether discourse relations which have been argued to be expected by the comprehender exhibit a higher ratio of implicit connectors. We find support for two hypotheses put forth in previous research which suggest that continuous and causal relations are presupposed by language users when processing consecutive sentences in a text. We then proceed to analyze the effect of Implicit Causality (IC) verbs (which have been argued to raise an expectation for an explanation) as a local cue for an upcoming causal relation.

    Keywords: Causality, Continuity, Discourse relations, Discourse cues, Implicit discourse relations, Corpus study, Uniform Information Density.

    Proceedings of COLING 2012: Technical Papers, pages 2669–2684, COLING 2012, Mumbai, December 2012.

    1 Introduction: David Hume in his prominent work "An enquiry concerning human understanding" proposed that ideas in the human mind were associated according to at least three types of relations: resemblance, contiguity in time or place, and causality (Hume, 1784).
  • Discourse Relations and Defeasible Knowledge
    DISCOURSE RELATIONS AND DEFEASIBLE KNOWLEDGE*. Alex Lascarides† (Human Communication Research Centre, University of Edinburgh, 2 Buccleuch Place, Edinburgh, EH8 9LW, Scotland) and Nicholas Asher (Center for Cognitive Science, University of Texas, Austin, Texas 78712, USA). alex@uk.ac.ed.cogsci, asher@sygmund.cgs.utexas.edu

    Abstract: This paper presents a formal account of the temporal interpretation of text. The distinct natural interpretations of texts with similar syntax are explained in terms of defeasible rules characterising causal laws and Gricean-style pragmatic maxims. Intuitively compelling patterns of defeasible entailment that are supported by the logic in which the theory is expressed are shown to underly temporal interpretation.

    The Problem: The temporal interpretation of text involves an account of how the events described are related to each other. These relations follow from the discourse relations that are central to temporal import.¹ Some of these are listed below, where the clause α appears in the text before β:

    Narration(α, β): The event described in β is a consequence of (but not necessarily caused by) … of the event described in α, i.e. β's event is part of the preparatory phase of α's:² (2) The council built the bridge. The architect drew up the plans.

    Explanation(α, β): For example, the event described in clause β caused the event described in clause α: (3) Max fell. John pushed him.

    Background(α, β): For example, the state described in β is the 'backdrop' or circumstances under which the event in α occurred (so the event and state temporally overlap): (4) Max opened the door. The room was pitch dark.

    Result(α, β): The event described in α caused the event or state described in β: (5) Max switched off the light.
  • Negation Markers and Discourse Connective Omission
    Uniform Information Density at the Level of Discourse Relations: Negation Markers and Discourse Connective Omission. Fatemeh Torabi Asr & Vera Demberg, Saarland University. fatemeh|[email protected]

    Abstract: About half of the discourse relations annotated in the Penn Discourse Treebank (Prasad et al., 2008) are not explicitly marked using a discourse connective. But we do not have extensive theories of when or why a discourse relation is marked explicitly or when the connective is omitted. Asr and Demberg (2012a) have suggested an information-theoretic perspective according to which discourse connectives are more likely to be omitted when they are marking a relation that is expected or predictable. This account is based on the Uniform Information Density theory (Levy and Jaeger, 2007), which suggests that speakers choose, among the alternative formulations that are allowed in their language, the ones that achieve a roughly uniform rate of information transmission. Optional discourse markers should thus be omitted if they would lead to a trough in information density, and be inserted in order to avoid peaks in information density. We here test this hypothesis by observing how far a specific cue, negation in any form, affects the discourse relations that can be predicted to hold in a text, and how the presence of this cue in turn affects the use of explicit discourse connectives.

    1 Introduction: Discourse connectives are known as optional linguistic elements used to construct relations between clausal units in text: in the Penn Discourse Treebank (PDTB; Prasad et al. 2008), only about half of the annotated discourse relations are marked in the text by a discourse connective.
  • Obligatory Presupposition in Discourse
    Obligatory presupposition in discourse. Pascal Amsili¹ & Claire Beyssade²* (¹Université Paris Diderot & Laboratoire de Linguistique Formelle, CNRS; ²Institut Jean Nicod, CNRS, Paris)

    Some presupposition triggers, like too, seem to be obligatory in discourses where the presupposition they induce is explicitly expressed. We show that this phenomenon concerns a larger class than is usually acknowledged, and suggest that this class corresponds to the class of presupposition triggers that have no asserted content. We then propose a pragmatic explanation relying on the neo-Gricean notion of antipresupposition. We also show that the phenomenon has a complex interaction with discourse relations.

    1. Introduction: The starting point of this work is a number of situations where some presupposition triggers seem obligatory in discourse. Here are two examples.

    (1) a. Jean est allé il y a deux ans au Canada. Il n'ira plus là-bas.
           John went to Canada two years ago. He won't go there anymore.
        b. #Jean est allé il y a deux ans au Canada. Il n'ira pas là-bas.
           John went to Canada two years ago. He won't go there.
        c. Léa a fait une bêtise. Elle ne la refera pas.
           Lea did a silly thing. She won't re-do it.
        d. #Léa a fait une bêtise. Elle ne la fera pas.
           Lea did a silly thing. She won't do it.

    *This article benefitted from comments from the audience at CiD'06 (Maynooth), and particularly from discussion with Henk Zeevat. We are also very grateful to an anonymous reviewer who provided very detailed and constructive comments.
  • Semantic Relations in Discourse: the Current State of ISO 24617-8
    Semantic Relations in Discourse: The Current State of ISO 24617-8. Rashmi Prasad* and Harry Bunt** (*Department of Health Informatics and Administration, University of Wisconsin-Milwaukee, Milwaukee, USA; **Tilburg Center for Cognition and Communication (TiCC), Tilburg University, Tilburg, Netherlands). [email protected], [email protected]

    Abstract: This paper describes some of the research conducted with the aim to develop a proposal for an ISO standard for the annotation of semantic relations in discourse. A range of theoretical approaches and annotation efforts were analysed for their commonalities and their differences, in order to define a clear delineation of the scope of the ISO effort and to give it a solid theoretical and empirical basis. A set of 20 core discourse relations was identified as indispensable for an annotation standard, and these relations were provided with clear definitions and illustrative examples from existing corpora. The ISO principles for linguistic annotation in general and semantic annotation in particular were applied to design a markup language (DRelML) for discourse relation annotation.

    1 Introduction: The last decade has seen a proliferation of linguistically annotated corpora coding many phenomena in support of empirical natural language research – both computational and theoretical. In the realm of discourse (which for the purposes of this paper is taken to include dialogue as well), a surge of interest in discourse processing has led to the development of several corpora annotated for discourse relations, for example, causal, contrastive and temporal relations. Discourse relations, also called 'coherence relations' or 'rhetorical relations', may be expressed explicitly or implicitly, and they convey meaning that is key to an understanding of the discourse, beyond the meaning conveyed by individual clauses and sentences.
  • GumDrop at the DISRPT2019 Shared Task: A Model Stacking Approach to Discourse Unit Segmentation and Connective Detection (arXiv:1904.10419v2 [cs.CL], 30 Aug 2019)
    GumDrop at the DISRPT2019 Shared Task: A Model Stacking Approach to Discourse Unit Segmentation and Connective Detection. Yue Yu, Yilun Zhu, Yang Liu, Yan Liu, Siyao Peng, Mackenzie Gong, Amir Zeldes (Georgetown University). {yy476,yz565,yl879,yl1023,sp1184,mg1745,amir.zeldes}@georgetown.edu

    Abstract: In this paper we present GumDrop, Georgetown University's entry at the DISRPT 2019 Shared Task on automatic discourse unit segmentation and connective detection. Our approach relies on model stacking, creating a heterogeneous ensemble of classifiers, which feed into a metalearner for each final task. The system encompasses three trainable component stacks: one for sentence splitting, one for discourse unit segmentation and one for connective detection. The flexibility of each ensemble allows the system to generalize well to datasets of different sizes and with varying levels of homogeneity.

    1 Introduction: However, as recent work (Braud et al., 2017b) has shown, performance on smaller or less homogeneous corpora than RST-DT, and especially in the absence of gold syntax trees (which are realistically unavailable at test time for practical applications), hovers around the mid 80s, making it problematic for full discourse parsing in practice. This is more critical for languages and domains in which relatively small datasets are available, making the application of generic neural models less promising. The DISRPT 2019 Shared Task aims to identify spans associated with discourse relations in data from three formalisms: RST (Mann and Thompson, 1988), SDRT (Asher, 1993) and PDTB (Prasad et al., 2014).
  • Discourse Connectives: From Historical Origin to Present-Day Development
    Chapter 2. Discourse connectives: From historical origin to present-day development. Magdaléna Rysová, Charles University, Faculty of Mathematics and Physics

    The paper focuses on the description and delimitation of discourse connectives, i.e. linguistic expressions significantly contributing to text coherence and generally helping the reader to better understand semantic relations within a text. The paper discusses the historical origin of discourse connectives viewed from the perspective of present-day linguistics. Its aim is to define present-day discourse connectives according to their historical origin, through which we see what is happening in discourse in contemporary language. The paper analyzes the historical origin of the most frequent connectives in Czech, English and German (which could be useful for more accurate translations of connectives in these languages) and points out that they underwent a similar process to gain the status of present-day discourse connectives. The paper argues that this historical origin, or process of rising discourse connectives, might be a language universal. Finally, the paper demonstrates how these observations may be helpful for annotations of discourse in large corpora.

    1 Introduction and motivation: Currently, linguistic research often focuses on creating and analyzing big language data. One of the frequently discussed topics of corpus linguistics is the annotation of discourse, carried out especially through the detection of discourse connectives. However, discourse connectives are not an easily definable group of expressions. Linguistic means signaling discourse relations may be conjunctions like but, or etc., prepositional phrases like for this reason, fixed collocations like … Magdaléna Rysová. 2017. Discourse connectives: From historical origin to present-day development.
  • A Presuppositional Account of Causal and Temporal Interpretations of And
    A Presuppositional Account of Causal and Temporal Interpretations of and. Joanna Blochowiak. Topoi, DOI 10.1007/s11245-014-9289-9. © Springer Science+Business Media Dordrecht 2014

    Abstract: Despite extensive studies, the issue concerning the pragmatic mechanisms leading to causal and temporal interpretations of and remains problematic and has not yet been addressed in its totality within one framework. This paper proposes a solution based on presuppositional mechanisms built into a comprehensive analysis that accounts for both the various interpretations of and-sentences as well as those of other types of sentences involving similar interpretations. This account is a specific part of a unified solution to the knotty problem of different manners of conveying causal and temporal relations both with connectives and also with juxtaposed sentences. It is formulated within the Relevance Nomological Model, which provides a general framework for the analysis of causal constructions such as the connective because and …

    (1) a. He took off his boots and got into bed.
        b. He got into bed and took off his boots.
        c. P ∧ Q = Q ∧ P.

    Grice noticed that the meaning of and in (1)a and (1)b cannot be reduced to its logical counterpart ∧ shown in (1)c. In particular, the commutativity of logical conjunction is not attested in many cases in natural language. Grice's strategy to preserve the logical sense of and consisted in moving the supplementary meaning material to pragmatics, i.e. non-truth-conditional aspects of meaning. In addition to the basic problem exposed in (1), any theory aiming at explaining the behaviour of the connective and faces at least two puzzles involving the temporal and causal interpretations of and.
  • Evidence from Evidentials
    UBCWPL: University of British Columbia Working Papers in Linguistics. Evidence from Evidentials. Edited by Tyler Peterson and Uli Sauerland. September 2010, Volume 28.

    UBCWPL is published by the graduate students of the University of British Columbia. We feature current research on language and linguistics by students and faculty of the department, and we are the regular publishers of two conference proceedings: the Workshop on Structure and Constituency in Languages of the Americas (WSCLA) and the International Conference on Salish and Neighbouring Languages (ICSNL). If you have any comments or suggestions, or would like to place orders, please contact: UBCWPL Editors, Department of Linguistics, Totem Field Studios, 2613 West Mall, V6T 1Z4. Tel: 604 822 4256, Fax: 604 822 9687, E-mail: <[email protected]>. Since articles in UBCWPL are works in progress, their publication elsewhere is not precluded. All rights remain with the authors.

    Cover artwork by Lester Ned Jr. Contact: Ancestral Native Art Creations, 10704 #9 Highway Compt. 376, Rosedale, BC V0X 1X0. Phone: (604) 793-5306, Fax: (604) 794-3217, Email: [email protected]

    Table of Contents: Tyler Peterson, Rose-Marie Déchaine, and Uli Sauerland, "Evidence from Evidentials: Introduction" (p. 1); Jason Brown (p. 9)
  • Toward a Discourse Structure Account of Speech and Attitude Reports
    Toward a Discourse Structure Account of Speech and Attitude Reports. Antoine Venant¹,² (¹Laboratoire Parole et Langage; ²Ecole Normale Superieure de Cachan)

    Abstract. This paper addresses the question of propositional attitude reports within Segmented Discourse Representation Theory (SDRT). In line with most SDRT discussions of attitude reports, we argue that the reported attitude should be segmented in the same way as the rest of the discourse is. We identify several issues that are raised by the segmentation of attitude reports. First, the nature of some relations crossing the boundaries between main and embedded speech remains unclear. Moreover, such constructions introduce a conflict between SDRT's Right Frontier Constraint (RFC) and well-established facts about accessibility from factual to modal contexts. We propose two solutions for adapting discourse structure to overcome these conflicts. The first one introduces a new ingredient into the theory, while the second one is more conservative and relies on continuation-style semantics for SDRT.

    1 Introduction: From a semantic perspective, attitude reports require solving several notorious puzzles. Among these are a lot of problems triggered by definites: substitution of directly co-referential expressions is generally not allowed under the scope of an attitude verb, and neither is existential generalization (see the shortest spy problem raised by [8]). Closely related to those are the effects of attitude verbs on discourse referent availability. For instance, factive epistemic verbs like 'to know' allow referents introduced under their scope to be later referred to from outside their scope, while non-factive verbs like 'to believe' do not.
  • A Simple Word Interaction Model for Implicit Discourse Relation Recognition
    SWIM: A Simple Word Interaction Model for Implicit Discourse Relation Recognition*. Wenqiang Lei¹, Xuancong Wang², Meichun Liu³, Ilija Ilievski¹, Xiangnan He¹, Min-Yen Kan¹ (¹National University of Singapore; ²Institute for Infocomm Research; ³City University of Hong Kong). {wenqiang, xiangnan, kanmy}@comp.nus.edu.sg, [email protected], [email protected], [email protected]

    Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17)

    Abstract: Capturing the semantic interaction of pairs of words across arguments and proper argument representation are both crucial issues in implicit discourse relation recognition. The current state-of-the-art represents arguments as distributional vectors that are computed via bi-directional Long Short-Term Memory networks (BiLSTMs), known to have significant model complexity. In contrast, we demonstrate that word-weighted averaging can encode argument representation, which can be incorporated with word pair information efficiently. By saving an order of magnitude in parameters and eschewing the recurrent structure, our proposed model achieves equivalent performance, but trains seven times faster.

    Figure 1: Toy examples of each of the four Level-1 discourse relations annotated in the PDTB formalism. Ex (1) You are so fortunate. The hurricane came five hours after you left. Ex (2) In 1986, the number of cats was down to 1000. In 2002, it went up to 2000. Ex (3) I never gamble too far. In other words, I quit after one try. Ex (4) I was sleeping when he entered. (1) and (2) are implicit relations; (3) and (4) are explicit. Arg1 is italicized and Arg2 is bolded, as per convention.

    … 2017]) – inclusive of this one – restrict their use to the top-
  • Implicit Discourse Relation Classification Via Multi-Task Neural Networks
    Implicit Discourse Relation Classification via Multi-Task Neural Networks. Yang Liu¹, Sujian Li¹,², Xiaodong Zhang¹ and Zhifang Sui¹,² (¹Key Laboratory of Computational Linguistics, Peking University, MOE, China; ²Collaborative Innovation Center for Language Ability, Xuzhou, Jiangsu, China). {cs-ly, lisujian, zxdcs, szf}@pku.edu.cn

    Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16)

    Abstract: Without discourse connectives, classifying implicit discourse relations is a challenging task and a bottleneck for building a practical discourse parser. Previous research usually makes use of one kind of discourse framework, such as PDTB or RST, to improve the classification performance on discourse relations. Actually, under different discourse annotation frameworks, there exist multiple corpora which have internal connections. To exploit the combination of different discourse corpora, we design related discourse classification tasks specific to a corpus, and propose a novel Convolutional Neural Network embedded multi-task learning system to synthesize these tasks by learning both unique and shared representations for each task.

    … pairs occurring in the sentence pairs are useful, since they to some extent can represent semantic relationships between two sentences (for example, word pairs appearing around the contrast relation often tend to be antonyms). In earlier studies, researchers find that word pairs do help classifying the discourse relations. However, it is strange that most of these useful word pairs are composed of stopwords. Rutherford and Xue (2014) point out that this counter-intuitive phenomenon is caused by the sparsity nature of these word pairs. They employ Brown clusters as an alternative abstract word representation, and as a result, they get more intuitive cluster pairs and achieve a better performance. Another problem in discourse parsing is the coexistence …