Benchmarking Meaning Representations in Neural Semantic Parsing
Jiaqi Guo1,∗ Qian Liu2,∗ Jian-Guang Lou3, Zhenwen Li4, Xueqing Liu5, Tao Xie4, Ting Liu1
1 Xi'an Jiaotong University, Xi'an, China   2 Beihang University, Beijing, China
3 Microsoft Research, Beijing, China   4 Peking University, Beijing, China
5 Stevens Institute of Technology, Hoboken, New Jersey, USA
∗ Work done during an internship at Microsoft Research.

Abstract

Meaning representation is an important component of semantic parsing. Although researchers have designed many meaning representations, recent work focuses on only a few of them, so the impact of meaning representation on semantic parsing is less understood. Furthermore, existing work's performance is often not comprehensively evaluated due to the lack of readily-available execution engines. Upon identifying these gaps, we propose UNIMER, a new unified benchmark on meaning representations, built by integrating existing semantic parsing datasets, completing the missing logical forms, and implementing the missing execution engines. The resulting unified benchmark contains the complete enumeration of logical forms and execution engines over three datasets × four meaning representations. A thorough experimental study on UNIMER reveals that neural semantic parsing approaches exhibit notably different performance when they are trained to generate different meaning representations. Also, program alias and grammar rules heavily impact the performance of different meaning representations. Our benchmark, execution engines, and implementation are available at: https://github.com/JasperGuo/Unimer.

Table 1: State-of-the-art performance for MRs on Geo, ATIS, and Job. The top table shows exact-match accuracy, whereas the bottom table shows execution-match accuracy. Most existing work evaluates only a small subset of dataset × MR pairs, leaving most of the table unexplored. A finer-grained table is available in the supplementary material (Table 8).

Exact-match accuracy
MR        Geo     ATIS    Job
Prolog    -       -       91.4
Lambda    90.4    91.3    85.0
FunQL     92.5    -       -
SQL       78.0    69.0    -

Execution-match accuracy
MR        Geo     ATIS    Job
Prolog    89.6    -       92.1
Lambda    -       -       -
FunQL     -       -       -
SQL       82.5    79.2    -

1 Introduction

A remarkable vision of artificial intelligence is to enable human interaction with machines through natural language, and semantic parsing has emerged as a key technology for achieving this goal. In general, semantic parsing aims to transform a natural language utterance into a logical form, i.e., a formal, machine-interpretable meaning representation (MR) (Zelle and Mooney, 1996; Dahl et al., 1994).[1] Thanks to recent developments in neural network techniques, significant improvements have been made in semantic parsing performance (Jia and Liang, 2016; Yin and Neubig, 2017; Dong and Lapata, 2018; Shaw et al., 2019).

[1] In this paper, we focus on grounded semantic parsing, where meaning representations are grounded to specific knowledge bases, instead of ungrounded semantic parsing.

Despite the advancement in performance, we identify three important biases in the evaluation methodology of existing work. First, although multiple MRs have been proposed, most existing work is evaluated on only one or two of them, leading to less comprehensive or even unfair comparisons. Table 1 shows the state-of-the-art performance of semantic parsing on different dataset × MR combinations, where the rows are the MRs and the columns are the datasets. We can observe that while Lambda Calculus is intensively studied, the other MRs have not been sufficiently studied. This biased evaluation is partly caused by the absence of target logical forms in the missing cells.

Second, existing work often compares the performance on different MRs directly (Sun et al., 2020; Shaw et al., 2019; Chen et al., 2020) without considering the confounding role that the MR plays in the performance,[2] causing unfair comparisons and misleading conclusions. Third, a more comprehensive evaluation methodology would consider both the exact-match accuracy and the execution-match accuracy, because two logical forms can be semantically equivalent yet not match precisely in their surface forms. However, as shown in Table 1, most existing work is only evaluated with exact-match accuracy. This bias is potentially due to the fact that execution engines are not available in six out of the twelve dataset × MR combinations.

[2] In (Kate et al., 2005; Liang et al., 2011; Guo et al., 2019), it was revealed that using an appropriate MR can substantially improve the performance of a semantic parser.
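To make the distinction between the two metrics concrete, the following is a minimal sketch of how the two accuracies might be computed; it is not the paper's evaluation code, and `execute` is a hypothetical stand-in for an MR-specific execution engine such as the ones UNIMER provides.

```python
from typing import Callable, List

_ERR = object()  # sentinel: an unexecutable program never matches anything

def _safe_execute(execute: Callable[[str], object], program: str):
    try:
        return execute(program)
    except Exception:
        return _ERR

def evaluate(predictions: List[str], golds: List[str],
             execute: Callable[[str], object]) -> dict:
    """Compute exact-match and execution-match accuracy.

    `execute` stands in for an MR-specific execution engine (e.g., a SQL
    engine or a FunQL interpreter); two logical forms execution-match when
    they produce the same result.
    """
    exact = execution = 0
    for pred, gold in zip(predictions, golds):
        exact += pred.strip() == gold.strip()
        pred_result = _safe_execute(execute, pred)
        execution += (pred_result is not _ERR
                      and pred_result == _safe_execute(execute, gold))
    n = len(golds)
    return {"exact_match": exact / n, "execution_match": execution / n}
```

The string comparison here is deliberately naive; practical evaluations typically normalize whitespace and variable names before exact matching, which does not change the basic contrast between the two metrics.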
Upon identifying the three biases, in this paper we propose UNIMER, a new unified benchmark, by unifying four publicly available MRs across three of the most popular semantic parsing datasets: Geo, ATIS, and Jobs. First, for each natural language utterance in the three datasets, UNIMER provides annotated logical forms in four different MRs: Prolog, Lambda Calculus, FunQL, and SQL. We identify that annotated logical forms in some MR × dataset combinations are missing; as a result, we complete the benchmark by semi-automatically translating logical forms from one MR to another. Second, we implement six missing execution engines for MRs so that execution-match accuracy can be readily computed for all dataset × MR combinations. Both the logical forms and their execution results are manually checked to ensure the correctness of the annotations and execution engines.

After constructing UNIMER, to obtain a preliminary understanding of the impact of MRs on semantic parsing, we empirically study the performance of MRs on UNIMER using two widely used neural semantic parsing approaches (a seq2seq model (Dong and Lapata, 2016; Jia and Liang, 2016) and a grammar-based neural model (Yin and Neubig, 2017)), under the supervised learning setting.

In addition to the empirical study above, we further analyze the impact of two factors, namely program alias and grammar rules, to understand how they affect different MRs differently. First, program alias: a semantically equivalent program may have many syntactically different forms. As a result, if the training and testing data differ in their syntactic distributions of logical forms, naive maximum likelihood estimation can suffer from this difference because it fails to capture semantic equivalence (Bunel et al., 2018). As different MRs exhibit different degrees of syntactic variation, they suffer from this problem to different extents. Second, grammar rules: grammar-based neural models can guarantee that the generated program is syntactically correct (Yin and Neubig, 2017; Wang et al., 2020; Sun et al., 2020). For a given set of logical forms in an MR, there exist multiple sets of grammar rules to model them. We observe that when the grammar-based neural model is trained with different sets of grammar rules, it exhibits a notable performance discrepancy. This finding aligns with the observation made for traditional semantic parsers (Kate, 2008) that properly transforming grammar rules can lead to better performance.
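To illustrate program alias concretely, the sketch below builds a toy database (the schema and rows are invented for illustration; this is not the actual Geo database) and runs two SQL queries that are semantically equivalent but syntactically different: exact match fails while execution match holds.

```python
import sqlite3

# Toy database standing in for a Geo-style domain (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE city (city_name TEXT, state_name TEXT, population INTEGER)")
conn.executemany(
    "INSERT INTO city VALUES (?, ?, ?)",
    [("austin", "texas", 961855),
     ("houston", "texas", 2304580),
     ("denver", "colorado", 715522)],
)

# Two program aliases: the WHERE conjuncts are reordered, nothing more.
q1 = ("SELECT city_name FROM city "
      "WHERE state_name = 'texas' AND population > 1000000")
q2 = ("SELECT city_name FROM city "
      "WHERE population > 1000000 AND state_name = 'texas'")

print(q1 == q2)                             # False: exact match fails
print(conn.execute(q1).fetchall()
      == conn.execute(q2).fetchall())       # True: execution match holds
```

Reordered conjuncts are only the simplest alias; aliases involving different join structures or different predicate inventories are harder for a string-matching objective to absorb, and MRs differ in how often such aliases arise.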
In summary, this paper makes the following main contributions:

• We propose UNIMER, a new unified benchmark on meaning representations, built by integrating and completing semantic parsing datasets across three datasets × four MRs; we also implement six execution engines so that execution-match accuracy can be evaluated in all cases.

• We provide baseline results for two widely used neural semantic parsing approaches on our benchmark, and we conduct an empirical study to understand the impact that program alias and grammar rules have on the performance of neural semantic parsing.

2 Preliminaries

In this section, we provide a brief description of the MRs and the neural semantic parsing approaches that we study in this paper.

2.1 Meaning Representations

We investigate four MRs in this paper, namely Prolog, Lambda Calculus, FunQL, and SQL, because they are widely used in semantic parsing and their corresponding labeled data is available in at least one semantic parsing domain. We regard Prolog, Lambda Calculus, and FunQL as domain-specific MRs, since the predicates defined in them are specific to a given domain. Consequently, the execution engines of domain-specific MRs need to be substantially customized for each domain, requiring plenty of manual effort. In contrast, SQL is a domain-general MR for querying relational databases; its execution engines (e.g., MySQL) can be used directly in different domains. Table 2 shows a logical form in each of the four MRs for an example utterance in the ATIS domain.

Table 2: Examples of meaning representations for the utterance "what flights do you have in tomorrow morning from pittsburgh to atlanta?" in the ATIS domain.

Prolog:
answer(A, (flight(A), tomorrow(A), during_day(A, B), const(B, period(morning)), from(A, C), const(C, city(pittsburgh)), to(A, D), const(D, city(atlanta))))

Lambda Calculus:
(lambda A:e ((flight A) ^ (during_day A morning:pd) ^ (from A pittsburgh:ci) ^ (to A atlanta:ci) ^ (tomorrow A)))

FunQL:
answer(flight(tomorrow(intersect(during_day(period(morning)), from(city(pittsburgh)), to(city(atlanta))))))

SQL:
SELECT flight_id FROM ... WHERE city_1.city_name = 'pittsburgh' AND city_2.city_name = 'atlanta' AND date_day_1.year = 1991 AND date_day_1.month_number = 1 AND date_day_1.day_number = 20 AND departure_time BETWEEN 0 AND 1200

Prolog has long been used to represent the meaning of natural language (Zelle and Mooney, 1996; Kate and Mooney, 2006).

FunQL is a variable-free MR (Kate et al., 2005). It abstracts away variables and encodes compositionality via its nested function-argument structure, making it easier to implement an efficient execution engine for FunQL. Concretely, unlike Prolog and Lambda Calculus, predicates in FunQL take a set of entities as input and return another set of entities.
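This set-in/set-out semantics can be made concrete with a toy FunQL-style interpreter. The sketch below covers a simplified subset of the predicates in Table 2 (tomorrow is omitted) over an in-memory flight table invented for illustration; it is not the FunQL execution engine shipped with UNIMER.

```python
# Toy FunQL-style interpreter: every predicate maps a set of entities to a
# set of entities, so nested calls compose directly. The flight data and
# predicate inventory are invented for illustration.

FLIGHTS = [
    {"id": 1, "from": "pittsburgh", "to": "atlanta", "departure": 900},
    {"id": 2, "from": "pittsburgh", "to": "atlanta", "departure": 1830},
    {"id": 3, "from": "boston", "to": "atlanta", "departure": 1000},
]

def flight(ids):
    # Keeps only entities that denote known flights.
    return {f["id"] for f in FLIGHTS} & ids

def from_(cities):  # `from` is a Python keyword, hence the trailing underscore
    return {f["id"] for f in FLIGHTS if f["from"] in cities}

def to(cities):
    return {f["id"] for f in FLIGHTS if f["to"] in cities}

def during_day(periods):
    lo, hi = {"morning": (0, 1200)}[next(iter(periods))]
    return {f["id"] for f in FLIGHTS if lo <= f["departure"] <= hi}

def intersect(*sets):
    out = sets[0]
    for s in sets[1:]:
        out &= s
    return out

def city(name):
    return {name}  # a singleton set of city entities, modeled as names

def period(name):
    return {name}

# answer(flight(intersect(during_day(period(morning)),
#                         from(city(pittsburgh)), to(city(atlanta)))))
result = flight(intersect(during_day(period("morning")),
                          from_(city("pittsburgh")), to(city("atlanta"))))
print(result)  # {1}: the lone morning flight from pittsburgh to atlanta
```

Because every predicate consumes and produces plain sets, evaluation is a single bottom-up pass over the nested form, which is why an efficient FunQL engine is comparatively easy to build.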
