Machine Learning for Semantic Parsing in Review

Ahmad Aghaebrahimian, Filip Jurčíček
Charles University in Prague, Faculty of Mathematics and Physics, Institute of Formal and Applied Linguistics
[email protected], [email protected]

Abstract

Spoken Language Understanding (SLU), and more specifically semantic parsing, is an indispensable task in any speech-enabled application. In this survey, we review current research on SLU and semantic parsing with an emphasis on the machine learning techniques used for these tasks. Observing the current trends in semantic parsing, we conclude our discussion by suggesting some of the most promising directions for future research.

Keywords: Spoken Language Understanding, Semantic Parsing, Machine Learning

1. Introduction

Around half a century ago, Alan Turing proposed the first model for machine understanding. Since then, there has been growing interest in developing technologies for machine understanding in different modalities. Speech in particular, as one of the most elementary and natural communication media, has attracted much attention. It brings together seemingly unrelated areas of science, from speech processing in electrical engineering and natural language processing in computer science to machine learning in artificial intelligence, to address the task of Natural Language Understanding (NLU). In this survey, we focus on speech-enabled NLU, or simply Spoken Language Understanding (SLU), as well as semantic parsing.

NLU and SLU are both designed to find a conceptual representation of natural language sentences and utterances. What distinguishes SLU from NLU is the fact that SLU's input is an utterance, which in addition to prosody and speaker identity contains situational and contextual information. In plain words, the function of an SLU component is to convert an utterance into a representation of the user's intention, referred to as a semantic representation. Semantic representations are usually logical forms: detailed, context-independent, fully specified and unambiguous expressions which cover the utterance's arguments and predicates using an artificial language. This artificial language, or formalism, can be of different sorts, such as λ-calculus (Church, 1936, 1941; Carpenter, 1997), first-order fragments (Bird et al., 2009), robot controller language (Matuszek et al., 2012), etc.

SLU components are mostly used in Spoken Dialogue Systems (SDS) (Jurčíček et al., 2014; Young et al., 2013; Lee and Eskenazi, 2013), spoken information retrieval systems (speech mining) (De Jong et al., 2000) and automated speech translation (Xiaodong et al., 2013). The current expansion of the market for smartphones, smart watches (e.g. Google's and Apple's watches) and a myriad of other speech-enabled gadgets such as digital personal assistants, home entertainment, and security systems opens up great opportunities for more challenging efforts in SLU research and development. Apple's Siri, Google Now, Amazon's Echo, Microsoft's Cortana, Clarity Lab's Sirius, Nuance's Dragon Go and many other successful speech understanding applications are already true samples of such efforts. SLU has been the overall or partial focus of numerous research projects including ATIS (1989-1995), Verbmobil (1996-2000), How May I Help You? (2001), AMITIES (2001-2004), TALK (2009-2011), Classic (2008-2011), Parlance (2011-2014), Carnegie Mellon Communicator (1999-2002), Companions (2006-2010) and Alex (2012-current), to name a few.

We organize the remainder of this survey into the following sections. After an overview of the subject matter in Section 2, we review SLU sub-tasks in Section 3 and machine learning techniques for them in Section 4. Before we conclude in Section 6, we introduce two widely used parsing techniques and formalisms in Section 5.

2. SLU Overview

SLU components can typically be categorized into three classes: hand-crafted systems, data-driven systems, or a combination thereof in hybrid systems. Hand-crafted systems are based on symbolic analysis of language, while data-driven, and more specifically statistical, systems are mostly based on notions from information theory.

The hand-crafted paradigm is mostly advocated in artificial intelligence. In this paradigm, a common approach is to build intelligent agents which mimic the human brain for language understanding. Hand-crafted systems still have some strong proponents, and many commercial applications today make extensive use of them. However, advocates of the data-driven paradigm argue that mimicking biological organisms in mechanical machines is a fruitless effort. Fred Jelinek, the pioneer of statistical methods, once made an analogy between birds and airplanes by arguing that airplanes do not flap their wings like birds and they still can fly.

Data-driven approaches benefit from a learning model which makes them language independent. Language independence is a significant advantage of data-driven systems over hand-crafted ones. Efficient language-independent models in one language will work in another language provided that the features representing the new language are fed into the model through a set of training data.

Although current data-driven approaches have proved more efficient in terms of development effort and accuracy, similar to the hand-crafted ones they are mostly domain-dependent, i.e. designed to recognize and work with specific notions and functions relating to a single topic. For instance, the air travel domain is a limited domain in which only a few notions such as departure_time or flight_number and a few functions such as flight_booking are recognized. However, open-domain systems capable of conducting conversations on a wider range of topics are more desirable. In open-domain applications, the number of notions and functions is practically unbounded. This is why the design of a Meaning Representation Language (MRL) for such systems poses a big challenge.

In essence, data-driven and fully statistical SLU components offer easier adaptation to new domains, more robust performance, and lower deployment cost. In this survey, we concentrate on statistical and data-driven methods.

3. SLU Sub-Tasks

SLU implementations generally include three main components: domain detection, intent determination and slot filling (Li et al., 2012).

3.1. Domain Detection

The Meaning Representation Language (MRL) in most semantic parsers is designed to represent notions limited to a specific domain. Therefore, given an utterance, the first task of an SLU component is to recognize the relevant domain. This process applies to multi-domain SLUs, while for single-domain SLUs it suffices to detect out-of-domain utterances and handle them appropriately.

3.2. Intent Determination

In each domain, there is a set of pre-defined intentions which are specific to that domain. For instance, in the air traffic domain, 'ticket reservation' is a common intention, and the intent determination component is expected to recognize this intention in the user's utterance.

Until recently, all MRLs used for semantic parsing in different … appropriate logical slots to form its logical representation (Zelle and Mooney, 1996). Due to the limited expressive power of current MRLs, these logical forms are limited to specific domains, and expanding logical forms to larger domains or, ideally, to open-domain applications has been attracting special interest recently. This requires a learning capacity which makes systems able to recognize an unbounded number of lexicons and grammars. Such broad coverage and learning capacity paves the way for semantic parsing in open-domain dialogue systems, open-domain question answering and open-domain information extraction.

4. Machine Learning Techniques

In the history of semantic parsing, different models, grammars, and techniques have been examined for the different SLU sub-tasks, including generative models (e.g., Hidden Markov Models (HMM) (Schwartz et al., 1996)), discriminative techniques (Wang and Acero, 2006), and probabilistic context-free grammars (Ward and Issar, 1994). All of these models and techniques have their own pros and cons; while generative models, for instance, are flexible in mapping individual words to their corresponding semantic tree nodes, they are limited in modeling long-range dependencies and correlated features. In contrast, latent-variable discriminative models are able to learn representations with long dependencies. To elaborate on such techniques, we briefly review some of the most practical machine learning methods for semantic parsing in the following sections.

4.1. CRF

State-of-the-art semantic parsing approaches use statistical machine learning techniques such as Conditional Random Fields (CRF) (Lafferty et al., 2001). CRFs are a class of discriminative sequence labeling models; they are essentially undirected graphical models. They are flexible in dealing with overlapping features, and for this reason CRFs usually outperform generative models such as HMMs (Yao et al., 2013).

In contrast to Maximum Entropy Markov Model (MEMM) (Ratnaparkhi, 1998) classifiers, which are left-to-right probabilistic models for sequence labeling, CRF-based models provide a global conditional distribution. Using this property of CRFs, Xu and Sarikaya (2013a, b) proposed a joint model for domain detection and …
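To make the sequence-labeling view concrete, the following is a minimal sketch, not drawn from the survey, of Viterbi decoding in a linear-chain CRF applied to slot filling. The labels, words and feature weights are hand-set purely for illustration; in a real system the weights are learned from data (e.g. as in Lafferty et al., 2001).

```python
# Toy linear-chain CRF decoding for slot filling. All names and weights
# here are illustrative assumptions, not taken from any cited system.

LABELS = ["O", "B-dest", "B-time"]

# Word-label feature weights (stand-ins for learned emission features).
EMIT = {
    ("boston", "B-dest"): 2.0,
    ("morning", "B-time"): 2.0,
}

# Label-transition weights. The CRF scores the whole label sequence
# globally, which is what distinguishes it from a left-to-right MEMM.
TRANS = {
    ("O", "O"): 0.5,
    ("O", "B-dest"): 0.2,
    ("O", "B-time"): 0.2,
}

def viterbi(words):
    """Return the highest-scoring label sequence under the weights above."""
    # best[i][label] = (score of best path ending in label, backpointer)
    best = [{lab: (EMIT.get((words[0], lab), 0.0), None) for lab in LABELS}]
    for i in range(1, len(words)):
        col = {}
        for lab in LABELS:
            emit = EMIT.get((words[i], lab), 0.0)
            score, prev = max(
                (best[i - 1][p][0] + TRANS.get((p, lab), 0.0) + emit, p)
                for p in LABELS
            )
            col[lab] = (score, prev)
        best.append(col)
    # Trace back from the best final label.
    lab = max(LABELS, key=lambda l: best[-1][l][0])
    path = [lab]
    for i in range(len(words) - 1, 0, -1):
        lab = best[i][lab][1]
        path.append(lab)
    return list(reversed(path))

print(viterbi(["flight", "to", "boston"]))  # ['O', 'O', 'B-dest']
```

With trained weights in place of the hand-set ones, the same dynamic program recovers the globally best slot assignment, avoiding the label-bias problem of per-step MEMM classifiers.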
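Stepping back to Section 3, the three SLU sub-tasks can be chained into a single pipeline that maps an utterance to a flat semantic representation. The sketch below is a toy illustration of that flow under our own assumptions: simple keyword rules stand in for trained classifiers, and only the slot and function names (departure_time, flight_booking) come from the survey's air-travel examples.

```python
# Toy SLU pipeline: domain detection -> intent determination -> slot filling.
# The rules and function names are illustrative assumptions, not a real system.

def detect_domain(utterance):
    # Domain detection: pick a domain, or reject out-of-domain input.
    return "air_travel" if "flight" in utterance else "out_of_domain"

def determine_intent(utterance):
    # Intent determination: map the utterance to a domain-specific intent.
    return "flight_booking" if "book" in utterance else "flight_info"

def fill_slots(utterance):
    # Slot filling: map words of the utterance to slots of the logical form.
    slots = {}
    words = utterance.split()
    for i, w in enumerate(words):
        if w == "to" and i + 1 < len(words):
            slots["destination"] = words[i + 1]
        if w in ("morning", "evening"):
            slots["departure_time"] = w
    return slots

def parse(utterance):
    """Convert an utterance to a fully specified semantic representation."""
    if detect_domain(utterance) == "out_of_domain":
        return None  # single-domain SLU: detect and reject out-of-domain input
    intent = determine_intent(utterance)
    args = ", ".join(f"{k}={v}" for k, v in sorted(fill_slots(utterance).items()))
    return f"{intent}({args})"

print(parse("book a morning flight to boston"))
# flight_booking(departure_time=morning, destination=boston)
```

The output plays the role of the unambiguous, context-independent logical form described in the introduction; a data-driven system learns each of the three stages from annotated utterances instead of hard-coding them.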
