Broad-Coverage Semantic Dependency Parsing


SemEval 2014 Task 8: Broad-Coverage Semantic Dependency Parsing

Stephan Oepen♣♠, Marco Kuhlmann♥, Yusuke Miyao♦, Daniel Zeman◦, Dan Flickinger•, Jan Hajič◦, Angelina Ivanova♣, and Yi Zhang?

♣ University of Oslo, Department of Informatics
♠ Potsdam University, Department of Linguistics
♥ Linköping University, Department of Computer and Information Science
♦ National Institute of Informatics, Tokyo
◦ Charles University in Prague, Faculty of Mathematics and Physics, Institute of Formal and Applied Linguistics
• Stanford University, Center for the Study of Language and Information
? Nuance Communications Aachen GmbH

[email protected]

This work is licensed under a Creative Commons Attribution 4.0 International License: http://creativecommons.org/licenses/by/4.0/. Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 63–72, Dublin, Ireland, August 23–24, 2014.

Abstract

Task 8 at SemEval 2014 defines Broad-Coverage Semantic Dependency Parsing (SDP) as the problem of recovering sentence-internal predicate–argument relationships for all content words, i.e. the semantic structure constituting the relational core of sentence meaning. In this task description, we position the problem in comparison to other sub-tasks in computational language analysis, introduce the semantic dependency target representations used, reflect on high-level commonalities and differences between these representations, and summarize the task setup, participating systems, and main results.

1 Background and Motivation

Syntactic dependency parsing has seen great advances in the past decade, in part owing to relatively broad consensus on target representations, and in part reflecting the successful execution of a series of shared tasks at the annual Conference for Natural Language Learning (CoNLL; Buchholz & Marsi, 2006; Nivre et al., 2007; inter alios). From this very active research area, accurate and efficient syntactic parsers have emerged for a wide range of natural languages. However, the predominant data structure in dependency parsing to date is the tree, in the formal sense that every node in the dependency graph is reachable from a distinguished root node by exactly one directed path.

Unfortunately, tree-oriented parsers are ill-suited for producing meaning representations, i.e. for moving from the analysis of grammatical structure to sentence semantics. Even if syntactic parsing arguably can be limited to tree structures, this is not the case in semantic analysis, where a node will often be the argument of multiple predicates (i.e. have more than one incoming arc), and it will often be desirable to leave nodes corresponding to semantically vacuous word classes unattached (with no incoming arcs).

Thus, Task 8 at SemEval 2014, Broad-Coverage Semantic Dependency Parsing (SDP 2014),(1) seeks to stimulate the dependency parsing community to move towards more general graph processing, and thus to enable a more direct analysis of Who did What to Whom? For English, there exist several independent annotations of sentence meaning over the venerable Wall Street Journal (WSJ) text of the Penn Treebank (PTB; Marcus et al., 1993). These resources constitute parallel semantic annotations over the same common text, but to date they have not been related to each other and, in fact, have hardly been applied for training and testing of data-driven parsers. In this task, we have used three different such target representations for bi-lexical semantic dependencies, as demonstrated in Figure 1 below for the WSJ sentence:

(1) A similar technique is almost impossible to apply to other crops, such as cotton, soybeans, and rice.

(1) See http://alt.qcri.org/semeval2014/task8/ for further technical details, information on how to obtain the data, and official results.

[Figure 1: Sample semantic dependency graphs for Example (1). (a) Partial semantic dependencies in PropBank and NomBank. (b) DELPH-IN Minimal Recursion Semantics–derived bi-lexical dependencies (DM). (c) Enju Predicate–Argument Structures (PAS). (d) Parts of the tectogrammatical layer of the Prague Czech-English Dependency Treebank (PCEDT).]

Semantically, technique arguably is dependent on the determiner (the quantificational locus), the modifier similar, and the predicate apply. Conversely, the predicative copula, infinitival to, and the vacuous preposition marking the deep object of apply can be argued to not have a semantic contribution of their own. Besides calling for node reentrancies and partial connectivity, semantic dependency graphs may also exhibit higher degrees of non-projectivity than is typical of syntactic dependency trees.

In addition to its relation to syntactic dependency parsing, the task also has some overlap with Semantic Role Labeling (SRL; Gildea & Jurafsky, 2002). In much previous work, however, target representations typically draw on resources like PropBank and NomBank (Palmer et al., 2005; Meyers et al., 2004), which are limited to argument identification and labeling for verbal and nominal predicates. A plethora of semantic phenomena—for example negation and other scopal embedding, comparatives, possessives, various types of modification, and even conjunction—typically remain unanalyzed in SRL. Thus, its target representations are partial to a degree that can prohibit semantic downstream processing, for example inference-based techniques. In contrast, we require parsers to identify all semantic dependencies, i.e. to compute a representation that integrates all content words in one structure. Another difference to common interpretations of SRL is that the SDP 2014 task definition does not encompass predicate disambiguation, a design decision in part owed to our goal to focus on parsing-oriented, i.e. structural, analysis, and in part to the lack of consensus on sense inventories for all content words.

Finally, a third closely related area of much current interest is often dubbed 'semantic parsing', which Kate and Wong (2010) define as "the task of mapping natural language sentences into complete formal meaning representations which a computer can execute for some domain-specific application." In contrast to most work in this tradition, our SDP target representations aim to be task- and domain-independent, though at least part of this generality comes at the expense of 'completeness' in the above sense; i.e. there are aspects of sentence meaning that arguably remain implicit.

2 Target Representations

We use three distinct target representations for semantic dependencies. As is evident in our running example (Figure 1), showing what are called the DM, PAS, and PCEDT semantic dependencies, there are contentful differences among these annotations, and there is of course not one obvious (or even objective) truth. In the following paragraphs, we provide some background on the 'pedigree' and linguistic characterization of these representations.

DM: DELPH-IN MRS-Derived Bi-Lexical Dependencies  These semantic dependency graphs originate in a manual re-annotation of Sections 00–21 of the WSJ Corpus with syntactico-semantic analyses derived from the LinGO English Resource Grammar (ERG; Flickinger, 2000). Among other layers of linguistic annotation, this resource—dubbed DeepBank by Flickinger et al. (2012)—includes underspecified logical-form meaning representations in the framework of Minimal Recursion Semantics (MRS; Copestake et al., 2005). Our DM target representations are derived through a two-step 'lossy' conversion of MRSs, first to the variable-free Elementary Dependency Structures (EDS; Oepen & Lønning, 2006), then to 'pure' bi-lexical form—projecting some construction semantics onto word-to-word dependencies (Ivanova et al., 2012). In preparing our gold-standard DM graphs from DeepBank, the same conversion pipeline was used as in the system submission of Miyao et al. (2014). For this target representation, top nodes designate [...]

[...] texts from the PTB, and their Czech translations. Similarly to other treebanks in the Prague family, there are two layers of syntactic annotation: analytical (a-trees) and tectogrammatical (t-trees). PCEDT bi-lexical dependencies in this task have been extracted from the t-trees. The specifics of the PCEDT representations are best observed in the procedure that converts the original PCEDT data to the SDP data format; see Miyao et al. (2014). Top nodes are derived from t-tree roots; i.e. they mostly correspond to main verbs. In case of coordinate clauses, there are multiple top nodes per sentence.

id  form     lemma    pos  top  pred  arg1      arg2
#20200002
1   Ms.      Ms.      NNP  -    +     _         _
2   Haag     Haag     NNP  -    -     compound  ARG1
3   plays    play     VBZ  +    +     _         _
4   Elianti  Elianti  NNP  -    -     _         ARG2
5   .        .        .    -    -     _         _

Table 1: Tabular SDP data format (showing DM).

3 Graph Representation
