GPT-too: A Language-Model-First Approach for AMR-to-Text Generation


Manuel Mager1* Ramon Fernandez Astudillo2 Tahira Naseem2 Md Arafat Sultan2 Young-Suk Lee2 Radu Florian2 Salim Roukos2
1 Institute for Natural Language Processing, University of Stuttgart, Germany
2 IBM Research AI, Yorktown Heights, NY 10598, USA
[email protected] framon.astudillo,[email protected] ftnaseem, [email protected]

Abstract

Abstract Meaning Representations (AMRs) are broad-coverage sentence-level semantic graphs. Existing approaches to generating text from AMR have focused on training sequence-to-sequence or graph-to-sequence models on AMR annotated data only. In this paper, we propose an alternative approach that combines a strong pre-trained language model with cycle consistency-based re-scoring. Despite the simplicity of the approach, our experimental results show these models outperform all previous techniques on the English LDC2017T10 dataset, including the recent use of transformer architectures. In addition to the standard evaluation metrics, we provide human evaluation experiments that further substantiate the strength of our approach.

1 Introduction

Abstract Meaning Representation (AMR) (Banarescu et al., 2013) is a rooted, directed, acyclic graph with labeled edges (relations) and nodes (concepts) expressing "who is doing what to whom". AMR-to-text generates sentences representing the semantics underlying an AMR graph.

Initial works in AMR-to-text used transducers (Flanigan et al., 2016), phrase-based machine translation (Pourdamghani et al., 2016) and neural sequence-to-sequence (seq2seq) models with linearized graphs (Konstas et al., 2017). Cao and Clark (2019) leverage constituency parsing for generation. Beck et al. (2018) improve upon prior RNN graph encoding (Song et al., 2018) with Levi Graph Transformations. Damonte and Cohen (2019) compare multiple representations and find graph encoders to be the best. Guo et al. (2019) use RNN graph encoders with dense graph convolutional encoding. Ribeiro et al. (2019) use RNN encoders with dual graph representations. Transformer-based seq2seq (Vaswani et al., 2017) was first applied to AMR-to-text in (Sinh and Le Minh, 2019). Zhu et al. (2019) greatly improve over the prior state of the art by modifying self-attention to account for AMR graph structure. Using transformers has also been recently explored by Wang et al. (2020), who propose a multi-head graph attention mechanism.

Pre-trained transformer representations (Radford et al., 2018; Devlin et al., 2019; Radford et al., 2019) use transfer learning to yield powerful language models that considerably outperform the prior art. They have also shown great success when fine-tuned to particular text generation tasks (See et al., 2019; Zhang et al., 2019; Keskar et al., 2019). Given their success, it would be desirable to apply pre-trained transformer models to a graph-to-text task like AMR-to-text, but the need for graph encoding precludes that option in principle. Feeding the network with some sequential representation of the graph, such as a topological sorting, loses some of the graph's representational power. Complex graph annotations, such as AMR, also contain many special symbols and special constructs that depart from natural language and may not be interpretable by a pre-trained language model.

In this paper we explore the possibility of directly fine-tuning a pre-trained transformer language model on a sequential representation of AMR graphs, despite the expected difficulties listed above. For this we re-purpose a GPT-2 language model (Radford et al., 2019) to yield an AMR-to-text system. We show that it is surprisingly easy to fine-tune GPT-2 to learn an AMR-graph-to-text mapping that outperforms the previous state of the art on automatic evaluation metrics. Since a single AMR graph corresponds to multiple sentences with the same meaning, we also provide human evaluation and semantic similarity metric results (Zhang et al., 2020), which are less dependent on reference text. Human evaluation and semantic similarity results highlight the positive impact of a strong language model strategy. Finally, we also introduce a simple re-scoring technique based on cycle consistency that further improves performance.

* This research was done during an internship at IBM Research AI.

2 Fine-tuning GPT-2 for conditional language generation

In order to fine-tune a generative model (GPT-2; Radford et al., 2019) for conditional text generation, prior works fine-tune the language model to predict target text starting from the additional source text given as context. In our experiments, we found it beneficial to fine-tune on the joint distribution of AMR and text instead, i.e. to also reconstruct the source. Given a tokenized sentence w_1 ... w_N and the sequential AMR representation a_1 ... a_M, we maximized the joint probability

p_{GPT-2}(w, a) = \prod_{j=1}^{N} p_{GPT-2}(w_j \mid w_{1:j-1}, a_{1:M}) \cdot \prod_{i=1}^{M} p_{GPT-2}(a_i \mid a_{1:i-1})

A special separator token is added to mark the end of the sequential AMR representation. Special AMR symbols that should not be interpreted literally are assigned tokens from the GPT-2 unused token list. In addition to this, we also observed that freezing the input embeddings when fine-tuning had a positive impact on performance.

At test time, we provide the AMR as context, as in conventional conditional text generation:

\hat{w}_j = \arg\max_{w_j} \{ p_{GPT-2}(w_j \mid w_{1:j-1}, a_{1:M}) \}
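As a concrete illustration of this objective, the following is a minimal sketch assuming the Hugging Face transformers GPT-2 implementation referenced in Section 4. The checkpoint name, the separator string, the small set of added relation tokens, and the optimizer settings are illustrative assumptions, not the authors' released code (see the repository linked in Section 4 for the actual implementation).

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")

# The paper adds arc labels and :root to the vocabulary and maps other special AMR
# symbols to unused GPT-2 tokens; add_tokens() is used here as a simplification.
tokenizer.add_tokens([":root", ":ARG0", ":ARG1", ":manner"])  # illustrative subset
model.resize_token_embeddings(len(tokenizer))
model.transformer.wte.weight.requires_grad = False  # freeze input embeddings

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
SEP = " <sep> "  # hypothetical separator marking the end of the AMR sequence

def training_step(linearized_amr: str, sentence: str) -> float:
    # Joint (reconstruction) objective: the language-modeling loss is computed over
    # the concatenated AMR + text sequence, so the model also reconstructs the source.
    ids = tokenizer.encode(linearized_amr + SEP + sentence, return_tensors="pt")
    loss = model(ids, labels=ids).loss  # recent transformers versions return .loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()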
3 Re-scoring via Cycle Consistency

The general idea of cycle consistency is to assess the quality of a system's output based on how well an external 'reverse' system can reconstruct the input from it. In previous works, cycle-consistency based losses have been used as part of the training objective in machine translation (He et al., 2016) and speech recognition (Hori et al., 2019). It has also been used for filtering synthetic training data for question answering (Alberti et al., 2019). Here we propose the use of a cycle consistency measure to re-score the system outputs.

In particular, we take the top k sentences generated by our system from each gold AMR graph and parse them using an off-the-shelf parser to obtain a second AMR graph. We then re-score each sentence using the standard AMR parsing metric Smatch (Cai and Knight, 2013) by comparing the gold and parsed AMRs.
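A minimal sketch of this re-scoring step is shown below. The parser and Smatch scorer are passed in as placeholder callables, since the exact interfaces of the off-the-shelf parser (Naseem et al., 2019) and of the Smatch implementation are assumptions here rather than the paper's released code.

from typing import Callable, List, Tuple

def rescore_by_cycle_consistency(
    gold_amr: str,
    candidates: List[str],
    parse_to_amr: Callable[[str], str],         # text -> AMR graph (PENMAN string)
    smatch_score: Callable[[str, str], float],  # (gold AMR, parsed AMR) -> Smatch F1
) -> List[Tuple[float, str]]:
    """Re-rank the top-k generated sentences for one gold AMR graph."""
    scored = []
    for sentence in candidates:
        parsed = parse_to_amr(sentence)  # reconstruct the input graph from the output
        scored.append((smatch_score(gold_amr, parsed), sentence))
    # Highest Smatch first: the sentence whose parse best matches the gold AMR wins.
    return sorted(scored, key=lambda pair: pair[0], reverse=True)

Under this scheme, the top-ranked candidate, scored[0][1], is kept as the final output for that graph.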
4 Experimental setup

Following previous works on AMR-to-text, we use the standard LDC2017T10 AMR corpus for evaluation of the proposed model. This corpus contains 36,521 training instances of AMR graphs in PENMAN notation and the corresponding texts. It also includes 1,368 and 1,371 development and test instances, respectively. We tokenize each input text using the JAMR toolkit (Flanigan et al., 2014). The concatenation of an AMR graph and the corresponding text is split into words, special symbols and sub-word units using the GPT-2 tokenizer. We add all arc labels seen in the training set and the root node :root to the vocabulary of the GPT-2 model, but we freeze the embedding layer for training. We use the Hugging Face implementation of Wolf et al. (2019) for GPT-2 small (GPT-2S), medium (GPT-2M) and large (GPT-2L). Fine-tuning converges after 6 epochs, which takes just a few hours on a V100 GPU [1]. For cycle-consistency re-scoring we use an implementation of Naseem et al. (2019) in PyTorch. For re-scoring experiments, we use a beam size of 15.

[1] Code for this paper is available at: https://github.com/IBM/GPT-too-AMR2text

AMR input representation. We test three variants of AMR representation. First, a depth-first search (DFS) through the graph following Konstas et al. (2017), where the input sequence is the path followed in the graph. Second, to see if GPT-2 is in fact learning from the graph structure, we remove all the edges from the DFS, keeping only the concept nodes. This has the effect of removing the relation information between concepts, such as subject/object relations. As a third option, we use the PENMAN representation without any modification. The three input representations are illustrated below:

Nodes    recommend advocate-01 it vigorous
DFS      recommend :ARG1 advocate-01 :ARG1 it :manner vigorous
Penman   (r / recommend-01 :ARG1 (a / advocate-01 :ARG1 (i / it) :manner (v / vigorous)))

Table 1: Results on the LDC2017T10 development set using GPT-2 S(mall) and M(edium) with Rec(onstruction) loss (see §2) for different AMR representations (see §4).

Model         Input                BLEU    chrF++
GPT-2S Rec.   Only nodes AMR        9.45    41.59
GPT-2S Rec.   Lin. AMR w/o edges   11.35    43.25
GPT-2S Rec.   Lin. AMR w/ edges    20.14    53.12
GPT-2S Rec.   Penman AMR           22.37    53.92
GPT-2M Rec.   Lin. AMR w/ edges    22.86    55.04
GPT-2M Rec.   Penman AMR           27.99    61.26

Decoding. For generation, we experiment with greedy decoding, beam search, and nucleus sampling (Holtzman et al., 2019). For beam search, we explore beam sizes of 5, 10 and 15. As the system, in some cases, produces repetitive output at the end of the text, we additionally perform a post-processing step to remove these occurrences.

Approach             Decoding   BLEU    chrF++
GPT-2M Conditional   Greedy     25.73   57.2
GPT-2M Rec.
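The decoding strategies described above could be exercised with the Hugging Face generation API along the lines of the following sketch; the checkpoint name, generation length, nucleus threshold, and the repetition-trimming helper are illustrative assumptions rather than the paper's released post-processing.

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")  # fine-tuned weights assumed
model.eval()

def remove_trailing_repetition(text: str) -> str:
    # Simple post-processing: drop sentences that exactly repeat the previous one
    # at the end of the output (an assumption about the cleanup step, not its code).
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    while len(sentences) > 1 and sentences[-1] == sentences[-2]:
        sentences.pop()
    return ". ".join(sentences) + "."

def generate_from_amr(amr_context: str, strategy: str = "beam", beam_size: int = 15) -> str:
    ids = tokenizer.encode(amr_context, return_tensors="pt")
    with torch.no_grad():
        if strategy == "greedy":
            out = model.generate(ids, max_length=ids.shape[1] + 100)
        elif strategy == "beam":
            out = model.generate(ids, max_length=ids.shape[1] + 100,
                                 num_beams=beam_size, early_stopping=True)
        else:  # nucleus sampling (Holtzman et al., 2019)
            out = model.generate(ids, max_length=ids.shape[1] + 100,
                                 do_sample=True, top_p=0.9)
    # Decode only the continuation, i.e. the text generated after the AMR context.
    text = tokenizer.decode(out[0][ids.shape[1]:], skip_special_tokens=True)
    return remove_trailing_repetition(text)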