Hello, It's GPT-2

Total Pages: 16

File Type: PDF, Size: 1020 KB

Hello, It's GPT-2 - How Can I Help You? Towards the Use of Pretrained Language Models for Task-Oriented Dialogue Systems

Paweł Budzianowski (1,2,3) and Ivan Vulić (2,3)
(1) Engineering Department, Cambridge University, UK; (2) Language Technology Lab, Cambridge University, UK; (3) PolyAI Limited, London, UK
[email protected], [email protected]

Proceedings of the 3rd Workshop on Neural Generation and Translation (WNGT 2019), pages 15–22, Hong Kong, China, November 4, 2019. © 2019 Association for Computational Linguistics. www.aclweb.org/anthology/D19-56

Abstract

Data scarcity is a long-standing and crucial challenge that hinders quick development of task-oriented dialogue systems across multiple domains: task-oriented dialogue models are expected to learn grammar, syntax, dialogue reasoning, decision making, and language generation from absurdly small amounts of task-specific data. In this paper, we demonstrate that recent progress in language modeling pre-training and transfer learning shows promise to overcome this problem. We propose a task-oriented dialogue model that operates solely on text input: it effectively bypasses explicit policy and language generation modules. Building on top of the TransferTransfo framework (Wolf et al., 2019) and generative model pre-training (Radford et al., 2019), we validate the approach on complex multi-domain task-oriented dialogues from the MultiWOZ dataset. Our automatic and human evaluations show that the proposed model is on par with a strong task-specific neural baseline. In the long run, our approach holds promise to mitigate the data scarcity problem, and to support the construction of more engaging and more eloquent task-oriented conversational agents.

1 Introduction

Statistical conversational systems can be roughly clustered into two main categories: 1) task-oriented modular systems and 2) open-domain chit-chat neural models. The former typically consist of independently trained constituent modules such as language understanding, dialogue management, and response generation. The main goal of such systems is to provide meaningful system responses which are invaluable in building conversational agents of practical value for restricted domains and tasks. However, data collection and annotation for such systems is complex, time-intensive, expensive, and not easily transferable (Young et al., 2013). On the other hand, open-domain conversational bots (Li et al., 2017; Serban et al., 2017) can leverage large amounts of freely available unannotated data (Ritter et al., 2010; Henderson et al., 2019a). Large corpora allow for training end-to-end neural models, which typically rely on sequence-to-sequence architectures (Sutskever et al., 2014). Although highly data-driven, such systems are prone to producing unreliable and meaningless responses, which impedes their deployment in the actual conversational applications (Li et al., 2017).

Due to the unresolved issues with the end-to-end architectures, the focus has been extended to retrieval-based models. Here, the massive datasets can be leveraged to aid task-specific applications (Kannan et al., 2016; Henderson et al., 2017, 2019b). The retrieval systems allow for the full control over system responses, but the behaviour of the system is often highly predictable. It also depends on the pre-existing set of responses, and the coverage is typically insufficient for a multitude of domains and tasks. However, recent progress in training high-capacity language models (e.g., GPT, GPT-2) (Radford et al., 2018, 2019) on large datasets reopens the question of whether such generative models can support task-oriented dialogue applications. Recently, Wolf et al. (2019) and Golovanov et al. (2019) showed that the GPT model, once fine-tuned, can be useful in the domain of personal conversations. In short, their approach led to substantial improvements on the Persona-Chat dataset (Zhang et al., 2018), showcasing the potential of exploiting large pretrained generative models in the conversational domain.[1]

In this paper, we demonstrate that large generative models pretrained on large general-domain corpora can support task-oriented dialogue applications. We first discuss how to combine a set of diverse components such as word tokenization, multi-task learning, and probabilistic sampling to support task-oriented applications. We then show how to adapt the task-oriented dialogue framework to operate entirely on text input, effectively bypassing an explicit dialogue management module and a domain-specific natural language generation module. The proposed model operates entirely in the sequence-to-sequence fashion, consuming only simple text as input. The entire dialogue context, which includes the belief state, the database state and previous turns, is provided to the decoder as raw text. The proposed model follows the recently proposed TransferTransfo framework (Wolf et al., 2019), and relies on pretrained models from the GPT family (Radford et al., 2018, 2019).

Our results in the standard Dialogue-Context-to-Text task (see Figure 1) on the multi-domain MultiWOZ dataset (Budzianowski et al., 2018b) suggest that our GPT-based task-oriented dialogue model learns to generate and understand domain-specific tokens, which in turn leads to a seamless adaptation to particular focused domains. While automatic evaluation indicates that our framework still falls slightly short of a strong task-specific neural baseline, it also hints at the main advantage of our framework: it is widely portable and easily adaptable to a large number of domains, bypassing the intricate modular design only at a small cost in performance. Furthermore, user-centered evaluations suggest that there is no significant difference between the two models.

[1] E.g., TransferTransfo (Wolf et al., 2019) yields gains in all crucial dialogue evaluation measures such as fluency, consistency and engagingness on the Persona-Chat dataset.

2 From Unsupervised Pretraining to Dialogue Modeling

Task-oriented dialogue modeling requires substantial amounts of domain-specific manually labeled data. A natural question to ask is: Can we leverage transfer learning through generative pretraining on large unlabelled corpora to enable task-oriented dialogue modeling. In this work, we rely on the standard language modeling (LM) pretraining, where the task is to predict the next word given the preceding word sequence (Bengio et al., 2003). The objective maximizes the likelihood over the word sequence $S = \{w_1, \ldots, w_{|S|}\}$:

$L_1(S) = \sum_{i=1}^{|S|} \log P(w_i \mid w_0, w_1, \ldots, w_{i-1})$    (1)

Figure 1: Dialogue-context-to-text task.

Transfer learning based on such LM pretraining combined with the Transformer decoder model (Vaswani et al., 2017) resulted in significant progress across many downstream tasks (Rei, 2017; Howard and Ruder, 2018; Radford et al., 2018, 2019).

2.1 TransferTransfo Framework

Golovanov et al. (2019) and Wolf et al. (2019) achieved a first successful transfer of a generative pretrained GPT model to an open-domain dialogue task. The pretrained GPT model is fine-tuned in a multi-task learning fashion following the original work (Radford et al., 2018). The LM objective from Eq. (1) is combined with the next utterance classification task:

$p(c, a) = \mathrm{softmax}(h_l W_h)$    (2)

c and a represent the context of the conversation (c) and a proposed answer (a), $h_l$ is the last hidden state of the transformer decoder, and $W_h$ is learnt during the fine-tuning phase. The model significantly improves upon previous baselines over all automatic dialogue evaluation metrics as well as in evaluation with human subjects when evaluated on the Persona-Chat dataset (Zhang et al., 2018).

The GPT input consists of token embeddings and positional embeddings. In order to move from a single-speaker setting to a setting with two interlocutors, Wolf et al. (2019) introduced dialogue-state embeddings. These embeddings inform the model whether the current token comes from an utterance of the first speaker or an utterance of the second speaker. The dialogue-state embeddings are learned during the fine-tuning phase.

3 Domain Transfer for (Task-Oriented) Dialogue Modeling

We now briefly discuss several advances in modeling of natural language that facilitate applicability of pretrained generative models in task-oriented dialogue modeling. To the best of our knowledge, this work is first to combine these existing components to enable task-oriented dialogue modeling.

Figure 2: The framework for modeling task-oriented conversations based on a pretrained GPT model which uses only unstructured simple text as input. The context, belief state, and database state are joined together without explicit standalone dialogue policy and generation modules. The token-level (i.e., dialogue-state) embeddings are learned following Wolf et al. (2019).

3.1 Domain Adaptation and Delexicalization

Dealing with out-of-vocabulary (OOV) words has been a long-standing challenge in dialogue modeling, e.g., it is crucial for task-oriented generation where the generated output is often delexicalized (Wen et al., 2015). Delexicalization replaces slot values by their corresponding (generic) slot tokens and it allows learning value-independent parameters. Recently, owing to subword-level tokenisation (Sennrich et al., 2016), language models are now able to deal with OOVs and domain-specific vocabularies more effectively (Radford et al., 2018).

[...] added to as another part of the text-only input provided in "natural language".

3.3 Transferring Language Generation Capabilities

Transformer architecture shows ability to learn new (i.e., domain-specific) token embeddings in the fine-tuning phase (Radford et al., 2018; Wolf et al., 2019). This means that the GPT models can adapt through special tokens to particular tasks. By providing the input representation as text with domain-specific tokens, we can use off-the-shelf architectures and adapt to the domain-specific input without the need of training new dialogue sub-modules. As mentioned in §2.1, the token level layer (Figure 2) informs the transformer decoder [...]
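To make the text-only recipe above concrete, here is a minimal sketch of the two ingredients the paper combines: serializing the dialogue context, belief state and database state into one string, and fine-tuning a pretrained decoder with the joint objective of Eq. (1) and Eq. (2). This is not the authors' code: the delimiter tokens are invented for the example, a tiny GRU stands in for the pretrained GPT-2 decoder, and the sizes and loss weight are arbitrary assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def serialize(history, belief, db):
    # Text-only input: dialogue history, belief state and database state
    # joined into a single string. The delimiter tokens are hypothetical.
    return ("<|context|> " + " ".join(history)
            + " <|belief|> " + belief
            + " <|db|> " + db
            + " <|response|>")

class TinyDecoder(nn.Module):
    """Toy stand-in for a pretrained GPT/GPT-2 decoder with two heads."""
    def __init__(self, vocab_size=1000, d_model=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.body = nn.GRU(d_model, d_model, batch_first=True)  # stand-in for the Transformer stack
        self.lm_head = nn.Linear(d_model, vocab_size)  # next-word prediction head, cf. Eq. (1)
        self.cls_head = nn.Linear(d_model, 1)          # next-utterance classification head, cf. Eq. (2)

    def forward(self, ids):
        h, _ = self.body(self.embed(ids))
        return self.lm_head(h), self.cls_head(h[:, -1])  # LM logits, candidate-reply score

def multitask_loss(model, ids, is_true_reply, alpha=1.0):
    lm_logits, cls_logit = model(ids)
    # LM loss: predict token i+1 from tokens up to i (teacher forcing).
    lm_loss = F.cross_entropy(lm_logits[:, :-1].transpose(1, 2), ids[:, 1:])
    # Classification loss: is the appended candidate the true next utterance?
    cls_loss = F.binary_cross_entropy_with_logits(cls_logit.squeeze(-1), is_true_reply)
    return lm_loss + alpha * cls_loss

if __name__ == "__main__":
    torch.manual_seed(0)
    print(serialize(["i need a cheap hotel in the north"], "hotel {price=cheap, area=north}", "3 matches"))
    model = TinyDecoder()
    ids = torch.randint(0, 1000, (2, 20))      # two toy (context + candidate) token sequences
    labels = torch.tensor([1.0, 0.0])          # first candidate is the true reply, second is a distractor
    print(multitask_loss(model, ids, labels).item())
```

In the full system the decoder is a pretrained GPT/GPT-2 model, and the domain-specific tokens are learned during fine-tuning as described in §3.3.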
Recommended publications
  • NVIDIA CEO Jensen Huang to Host AI Pioneers Yoshua Bengio, Geoffrey Hinton and Yann LeCun, and Others, at GTC21
    NVIDIA CEO Jensen Huang to Host AI Pioneers Yoshua Bengio, Geoffrey Hinton and Yann LeCun, and Others, at GTC21
    Online Conference to Feature Jensen Huang Keynote and 1,300 Talks from Leaders in Data Center, Networking, Graphics and Autonomous Vehicles
    NVIDIA today announced that its CEO and founder Jensen Huang will host renowned AI pioneers Yoshua Bengio, Geoffrey Hinton and Yann LeCun at the company’s upcoming technology conference, GTC21, running April 12-16. The event will kick off with a news-filled livestreamed keynote by Huang on April 12 at 8:30 am Pacific. Bengio, Hinton and LeCun won the 2018 ACM Turing Award, known as the Nobel Prize of computing, for breakthroughs that enabled the deep learning revolution. Their work underpins the proliferation of AI technologies now being adopted around the world, from natural language processing to autonomous machines. Bengio is a professor at the University of Montreal and head of Mila - Quebec Artificial Intelligence Institute; Hinton is a professor at the University of Toronto and a researcher at Google; and LeCun is a professor at New York University and chief AI scientist at Facebook. More than 100,000 developers, business leaders, creatives and others are expected to register for GTC, including CxOs and IT professionals focused on data center infrastructure. Registration is free and is not required to view the keynote. In addition to the three Turing winners, major speakers include: Girish Bablani, Corporate Vice President, Microsoft Azure; and John Bowman, Director of Data Science, Walmart.
  • On Recurrent and Deep Neural Networks
    On Recurrent and Deep Neural Networks. Razvan Pascanu. Advisor: Yoshua Bengio. PhD Defence, Université de Montréal, LISA lab, September 2014.
    Motivation: "A computer once beat me at chess, but it was no match for me at kick boxing" - Emo Phillips. Studying the mechanism behind learning provides a meta-solution for solving tasks.
    Supervised learning: a family of functions $f : \Theta \times D \to T$ with $f_\theta(x) = f(\theta, x)$; learning selects $f^{*} = \arg\min_{\theta \in \Theta} \mathbb{E}_{x,t \sim \pi}[d(f_\theta(x), t)]$. Optimization for learning proceeds by iterative parameter updates $\theta^{[k]} \to \theta^{[k+1]}$.
    [Slide figures: a feed-forward neural network (input layer, hidden layers with bias units, output neurons) and a recurrent neural network with a recurrent layer.]
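The slides' supervised-learning setup (pick the parameters minimizing the expected loss $\mathbb{E}[d(f_\theta(x), t)]$ by iterating $\theta^{[k]} \to \theta^{[k+1]}$) can be illustrated with a few lines of plain gradient descent on a toy least-squares problem. The data, step size and iteration count below are arbitrary choices for the illustration, not anything from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                        # toy inputs x
theta_true = np.array([1.5, -2.0, 0.5])
t = X @ theta_true + 0.1 * rng.normal(size=100)      # toy targets t

def empirical_risk(theta):
    # Empirical version of E_{x,t ~ pi}[ d(f_theta(x), t) ] with squared-error d
    # and the linear model f_theta(x) = theta . x
    return float(np.mean((X @ theta - t) ** 2))

theta = np.zeros(3)                                  # theta^[0]
lr = 0.1
for k in range(200):                                 # theta^[k+1] = theta^[k] - lr * gradient
    grad = 2.0 * X.T @ (X @ theta - t) / len(t)
    theta = theta - lr * grad

print(empirical_risk(theta), theta)
```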
  • Knowledge-Powered Deep Learning for Word Embedding
    Knowledge-Powered Deep Learning for Word Embedding. Jiang Bian, Bin Gao, and Tie-Yan Liu. Microsoft Research. {jibian,bingao,tyliu}@microsoft.com
    Abstract. The basis of applying deep learning to solve natural language processing tasks is to obtain high-quality distributed representations of words, i.e., word embeddings, from large amounts of text data. However, text itself usually contains incomplete and ambiguous information, which makes it necessary to leverage extra knowledge to understand it. Fortunately, text itself already contains well-defined morphological and syntactic knowledge; moreover, the large amount of texts on the Web enables the extraction of plenty of semantic knowledge. Therefore, it makes sense to design novel deep learning algorithms and systems in order to leverage the above knowledge to compute more effective word embeddings. In this paper, we conduct an empirical study on the capacity of leveraging morphological, syntactic, and semantic knowledge to achieve high-quality word embeddings. Our study explores these types of knowledge to define new basis for word representation, provide additional input information, and serve as auxiliary supervision in deep learning, respectively. Experiments on an analogical reasoning task, a word similarity task, and a word completion task have all demonstrated that knowledge-powered deep learning can enhance the effectiveness of word embedding.
    1 Introduction. With rapid development of deep learning techniques in recent years, it has drawn increasing attention to train complex and deep models on large amounts of data, in order to solve a wide range of text mining and natural language processing (NLP) tasks [4, 1, 8, 13, 19, 20].
  • Recurrent Neural Network Based Language Model
    INTERSPEECH 2010. Recurrent neural network based language model. Tomáš Mikolov (1,2), Martin Karafiát (1), Lukáš Burget (1), Jan "Honza" Černocký (1), Sanjeev Khudanpur (2). (1) Speech@FIT, Brno University of Technology, Czech Republic; (2) Department of Electrical and Computer Engineering, Johns Hopkins University, USA. fimikolov,karafiat,burget,[email protected], [email protected]
    Abstract. A new recurrent neural network based language model (RNN LM) with applications to speech recognition is presented. Results indicate that it is possible to obtain around 50% reduction of perplexity by using mixture of several RNN LMs, compared to a state of the art backoff language model. Speech recognition experiments show around 18% reduction of word error rate on the Wall Street Journal task when comparing models trained on the same amount of data, and around 5% on the much harder NIST RT05 task, even when the backoff model is trained on much more data than the RNN LM. We provide ample empirical evidence to suggest that connectionist language models are superior to standard n-gram techniques, except their high computational (training) complexity. Index Terms: language modeling, recurrent neural networks, speech recognition.
    [Figure 1: Simple recurrent neural network, with INPUT(t) and CONTEXT(t-1) feeding CONTEXT(t), which produces OUTPUT(t).]
    1. Introduction. Sequential data prediction is considered by many as a key problem in machine learning and artificial intelligence (see for example [1]). The goal of statistical language modeling is to predict the next word in textual data given context; thus we [...] complex and often work well only for systems based on very limited amounts of training data.
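As a rough illustration of the simple recurrent architecture described above (INPUT(t) and CONTEXT(t-1) feed CONTEXT(t), which produces the next-word distribution OUTPUT(t)), here is an untrained Elman-style step in NumPy. The vocabulary size, hidden size and random weights are placeholders, and no training is shown.

```python
import numpy as np

V, H = 50, 16                                   # toy vocabulary and context (hidden) sizes
rng = np.random.default_rng(1)
U = rng.normal(scale=0.1, size=(H, V))          # weights: input(t)     -> context(t)
W = rng.normal(scale=0.1, size=(H, H))          # weights: context(t-1) -> context(t)
Vo = rng.normal(scale=0.1, size=(V, H))         # weights: context(t)   -> output(t)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def rnn_lm_step(word_id, context_prev):
    x = np.zeros(V)
    x[word_id] = 1.0                            # 1-of-N encoding of input(t)
    context = 1.0 / (1.0 + np.exp(-(U @ x + W @ context_prev)))   # sigmoid hidden layer
    output = softmax(Vo @ context)              # distribution over the next word
    return output, context

context = np.zeros(H)                           # empty context at the start of a sentence
for w in [3, 17, 42]:                           # toy word-id sequence
    probs, context = rnn_lm_step(w, context)
print(int(probs.argmax()), float(probs.max()))
```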
  • Yoshua Bengio and Gary Marcus on the Best Way Forward for AI
    Yoshua Bengio and Gary Marcus on the Best Way Forward for AI Transcript of the 23 December 2019 AI Debate, hosted at Mila Moderated and transcribed by Vincent Boucher, Montreal AI AI DEBATE : Yoshua Bengio | Gary Marcus — Organized by MONTREAL.AI and hosted at Mila, on Monday, December 23, 2019, from 6:30 PM to 8:30 PM (EST) Slides, video, readings and more can be found on the MONTREAL.AI debate webpage. Transcript of the AI Debate Opening Address | Vincent Boucher Good Evening from Mila in Montreal Ladies & Gentlemen, Welcome to the “AI Debate”. I am Vincent Boucher, Founding Chairman of Montreal.AI. Our participants tonight are Professor GARY MARCUS and Professor YOSHUA BENGIO. Professor GARY MARCUS is a Scientist, Best-Selling Author, and Entrepreneur. Professor MARCUS has published extensively in neuroscience, genetics, linguistics, evolutionary psychology and artificial intelligence and is perhaps the youngest Professor Emeritus at NYU. He is Founder and CEO of Robust.AI and the author of five books, including The Algebraic Mind. His newest book, Rebooting AI: Building Machines We Can Trust, aims to shake up the field of artificial intelligence and has been praised by Noam Chomsky, Steven Pinker and Garry Kasparov. Professor YOSHUA BENGIO is a Deep Learning Pioneer. In 2018, Professor BENGIO was the computer scientist who collected the largest number of new citations worldwide. In 2019, he received, jointly with Geoffrey Hinton and Yann LeCun, the ACM A.M. Turing Award — “the Nobel Prize of Computing”. He is the Founder and Scientific Director of Mila — the largest university-based research group in deep learning in the world.
  • Unified Language Model Pre-Training for Natural
    Unified Language Model Pre-training for Natural Language Understanding and Generation. Li Dong∗, Nan Yang∗, Wenhui Wang∗, Furu Wei∗†, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, Hsiao-Wuen Hon. Microsoft Research. {lidong1,nanya,wenwan,fuwei}@microsoft.com {xiaodl,yuwan,jfgao,mingzhou,hon}@microsoft.com
    Abstract. This paper presents a new UNIfied pre-trained Language Model (UNILM) that can be fine-tuned for both natural language understanding and generation tasks. The model is pre-trained using three types of language modeling tasks: unidirectional, bidirectional, and sequence-to-sequence prediction. The unified modeling is achieved by employing a shared Transformer network and utilizing specific self-attention masks to control what context the prediction conditions on. UNILM compares favorably with BERT on the GLUE benchmark, and the SQuAD 2.0 and CoQA question answering tasks. Moreover, UNILM achieves new state-of-the-art results on five natural language generation datasets, including improving the CNN/DailyMail abstractive summarization ROUGE-L to 40.51 (2.04 absolute improvement), the Gigaword abstractive summarization ROUGE-L to 35.75 (0.86 absolute improvement), the CoQA generative question answering F1 score to 82.5 (37.1 absolute improvement), the SQuAD question generation BLEU-4 to 22.12 (3.75 absolute improvement), and the DSTC7 document-grounded dialog response generation NIST-4 to 2.67 (human performance is 2.65). The code and pre-trained models are available at https://github.com/microsoft/unilm.
    1 Introduction. Language model (LM) pre-training has substantially advanced the state of the art across a variety of natural language processing tasks [8, 29, 19, 31, 9, 1].
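The "specific self-attention masks" mentioned in the abstract can be made concrete with a small sketch: a boolean matrix whose entry (i, j) says whether position i may attend to position j. The three constructors below (unidirectional, bidirectional, and sequence-to-sequence) follow the description in the abstract; the exact mask layout used by UNILM may differ in details, so treat this as an assumption-laden illustration.

```python
import numpy as np

def unidirectional_mask(n):
    # Left-to-right LM: position i attends to positions j <= i.
    return np.tril(np.ones((n, n), dtype=bool))

def bidirectional_mask(n):
    # BERT-style: every position attends to every position.
    return np.ones((n, n), dtype=bool)

def seq2seq_mask(n_src, n_tgt):
    # Source tokens attend bidirectionally within the source segment;
    # target tokens attend to the whole source plus earlier target tokens.
    n = n_src + n_tgt
    mask = np.zeros((n, n), dtype=bool)
    mask[:n_src, :n_src] = True
    mask[n_src:, :n_src] = True
    mask[n_src:, n_src:] = np.tril(np.ones((n_tgt, n_tgt), dtype=bool))
    return mask

print(seq2seq_mask(3, 2).astype(int))
```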
  • Hierarchical Multiscale Recurrent Neural Networks
    Published as a conference paper at ICLR 2017 HIERARCHICAL MULTISCALE RECURRENT NEURAL NETWORKS Junyoung Chung, Sungjin Ahn & Yoshua Bengio ∗ Département d’informatique et de recherche opérationnelle Université de Montréal {junyoung.chung,sungjin.ahn,yoshua.bengio}@umontreal.ca ABSTRACT Learning both hierarchical and temporal representation has been among the long-standing challenges of recurrent neural networks. Multiscale recurrent neural networks have been considered as a promising approach to resolve this issue, yet there has been a lack of empirical evidence showing that this type of models can actually capture the temporal dependencies by discovering the latent hierarchical structure of the sequence. In this paper, we propose a novel multiscale approach, called the hierarchical multiscale recurrent neural network, that can capture the latent hierarchical structure in the sequence by encoding the temporal dependencies with different timescales using a novel update mechanism. We show some evidence that the proposed model can discover underlying hierarchical structure in the sequences without using explicit boundary information. We evaluate our proposed model on character-level language modelling and handwriting sequence generation. 1 INTRODUCTION One of the key principles of learning in deep neural networks as well as in the human brain is to obtain a hierarchical representation with increasing levels of abstraction (Bengio, 2009; LeCun et al., 2015; Schmidhuber, 2015). A stack of representation layers, learned from the data in a way to optimize the target task, make deep neural networks entertain advantages such as generalization to unseen examples (Hoffman et al., 2013), sharing learned knowledge among multiple tasks, and discovering disentangling factors of variation (Kingma & Welling, 2013).
  • N-Gram Language Modeling Tutorial Dustin Hillard and Sarah Petersen Lecture Notes Courtesy of Prof
    N-gram Language Modeling Tutorial. Dustin Hillard and Sarah Petersen. Lecture notes courtesy of Prof. Mari Ostendorf.
    Outline: • Statistical Language Model (LM) Basics • n-gram models • Class LMs • Cache LMs • Mixtures • Empirical observations (Goodman CSL 2001) • Factored LMs
    Part I: Statistical Language Model (LM) Basics • What is a statistical LM and why are they interesting? • Evaluating LMs • History equivalence classes
    What is a statistical language model? A stochastic process model for word sequences; a mechanism for computing the probability of $p(w_1, \ldots, w_T)$.
    Why are LMs interesting? • Important component of a speech recognition system (helps discriminate between similar sounding words, helps reduce search costs) • In statistical machine translation, a language model characterizes the target language, captures fluency • For selecting alternatives in summarization, generation • Text classification (style, reading level, language, topic, ...) • Language models can be used for more than just words: letter sequences (language identification), speech act sequence modeling, case and punctuation restoration.
    Evaluating LMs. Evaluating LMs in the context of an application can be expensive, so LMs are usually evaluated on their own in terms of perplexity: $PP = 2^{\tilde{H}_r}$, where $\tilde{H}_r = -\frac{1}{T}\log_2 p(w_1, \ldots, w_T)$ and $\{w_1, \ldots, w_T\}$ is held-out test data that provides the empirical distribution $q(\cdot)$ in the cross-entropy formula $\tilde{H} = -\sum_x q(x) \log p(x)$, with $p(\cdot)$ the LM estimated on a training set. Interpretations: • Entropy rate: lower entropy means that it is easier to predict the next symbol and hence easier to rule out alternatives when combined with other models (small $\tilde{H}_r$ → small $PP$). • Average branching factor: when a distribution is uniform for a vocabulary of size $V$, then entropy is $\log_2 V$, and perplexity is $V$.
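The perplexity definition above ($PP = 2^{\tilde{H}_r}$ with $\tilde{H}_r = -\frac{1}{T}\log_2 p(w_1, \ldots, w_T)$) is easy to compute once a model supplies per-word log-probabilities. The toy numbers below are invented purely to show the "average branching factor" reading: a model that assigns every word probability 1/8 has perplexity 8.

```python
import math

def perplexity(word_log2_probs):
    # PP = 2 ** H_r, where H_r = -(1/T) * log2 p(w_1, ..., w_T)
    # and the joint log-probability is the sum of per-word log2 probabilities.
    T = len(word_log2_probs)
    h_r = -sum(word_log2_probs) / T
    return 2.0 ** h_r

# Toy held-out sequence where the model gives every word probability 1/8:
log2_probs = [math.log2(1.0 / 8)] * 10
print(perplexity(log2_probs))   # 8.0, the "average branching factor"
```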
  • I2t2i: Learning Text to Image Synthesis with Textual Data Augmentation
    I2T2I: LEARNING TEXT TO IMAGE SYNTHESIS WITH TEXTUAL DATA AUGMENTATION. Hao Dong, Jingqing Zhang, Douglas McIlwraith, Yike Guo. Data Science Institute, Imperial College London.
    ABSTRACT. Translating information between text and image is a fundamental problem in artificial intelligence that connects natural language processing and computer vision. In the past few years, performance in image caption generation has seen significant improvement through the adoption of recurrent neural networks (RNN). Meanwhile, text-to-image generation begun to generate plausible images using datasets of specific categories like birds and flowers. We’ve even seen image generation from multi-category datasets such as the Microsoft Common Objects in Context (MSCOCO) through the use of generative adversarial networks (GANs). Synthesizing objects with a complex shape, however, is still challenging. For example, animals and humans have many degrees of freedom, which means that they can take on many complex shapes. We propose a new training method called Image-Text-Image (I2T2I) which integrates text-to-image and image-to-text (image captioning) synthesis to improve the performance of text-to-image synthesis. We demonstrate that I2T2I can generate better multi-categories images using MSCOCO than the state-of-the-art. [...] of text-to-image synthesis is still far from being solved. Recently, both fractionally-strided convolutional and con[...]
    [Fig. 1: Comparing text-to-image synthesis with and without textual data augmentation (GAN-CLS vs. I2T2I), on the captions "A yellow school bus parked in a parking lot." and "A man swinging a baseball bat over home plate." I2T2I synthesizes the human pose and bus color better.]
  • A General Language Model for Information Retrieval Fei Song W
    A General Language Model for Information Retrieval. Fei Song (Dept. of Computing and Info. Science, University of Guelph, Guelph, Ontario, Canada N1G 2W1) and W. Bruce Croft (Dept. of Computer Science, University of Massachusetts, Amherst, Massachusetts 01003). [email protected], [email protected]
    ABSTRACT. Statistical language modeling has been successfully used for speech recognition, part-of-speech tagging, and syntactic parsing. Recently, it has also been applied to information retrieval. According to this new paradigm, each document is viewed as a language sample, and a query as a generation process. The retrieved documents are ranked based on the probabilities of producing a query from the corresponding language models of these documents. In this paper, we will present a new language model for information retrieval, which is based on a range of data smoothing techniques, including the Good-Turing estimate, curve-fitting functions, and model combinations.
    However, estimating the probabilities of word sequences can be expensive, since sentences can be arbitrarily long and the size of a corpus needs to be very large. In practice, the statistical language model is often approximated by N-gram models. Unigram: $P(w_{1,n}) = P(w_1)P(w_2)\cdots P(w_n)$. Bigram: $P(w_{1,n}) = P(w_1)P(w_2 \mid w_1)\cdots P(w_n \mid w_{n-1})$. Trigram: $P(w_{1,n}) = P(w_1)P(w_2 \mid w_1)P(w_3 \mid w_{1,2})\cdots P(w_n \mid w_{n-2,n-1})$. The unigram model makes a strong assumption that each word occurs independently, and consequently, the probability of a [...]
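The ranking idea described above (score a document by the probability that its language model generates the query) can be sketched in a few lines. The paper's actual smoothing combines the Good-Turing estimate, curve fitting and model combination; the sketch below substitutes simple linear interpolation with a corpus model, and the two toy documents and mixing weight are invented for the example.

```python
import math
from collections import Counter

docs = {
    "d1": "the boat sailed on the calm water".split(),
    "d2": "the car needs gasoline and a new tire".split(),
}
corpus = [w for words in docs.values() for w in words]
corpus_counts, corpus_len = Counter(corpus), len(corpus)

def query_log_likelihood(query, doc_words, lam=0.7):
    # log P(query | document) under a unigram model of the document,
    # interpolated with a corpus-wide unigram model for smoothing.
    counts, dlen = Counter(doc_words), len(doc_words)
    score = 0.0
    for w in query:
        p = lam * (counts[w] / dlen) + (1 - lam) * (corpus_counts[w] / corpus_len)
        score += math.log(p) if p > 0 else float("-inf")
    return score

query = "boat on water".split()
ranking = sorted(docs, key=lambda d: query_log_likelihood(query, docs[d]), reverse=True)
print(ranking)   # "d1" should rank above "d2" for this query
```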
  • A Unified Multilingual Handwriting Recognition System Using
    A Unified Multilingual Handwriting Recognition System using multigrams sub-lexical units. Wassim Swaileh (a,*), Yann Soullard (a), Thierry Paquet (a). (a) LITIS Laboratory - EA 4108, Normandie University - University of Rouen, Rouen, France.
    ABSTRACT. We address the design of a unified multilingual system for handwriting recognition. Most multilingual systems rest on specialized models that are trained on a single language and one of them is selected at test time. While some recognition systems are based on a unified optical model, dealing with a unified language model remains a major issue, as traditional language models are generally trained on corpora composed of large word lexicons per language. Here, we bring a solution by considering language models based on sub-lexical units, called multigrams. Dealing with multigrams strongly reduces the lexicon size and thus decreases the language model complexity. This makes possible the design of an end-to-end unified multilingual recognition system where both a single optical model and a single language model are trained on all the languages. We discuss the impact of the language unification on each model and show that our system reaches state-of-the-art methods performance with a strong reduction of the complexity. © 2018 Elsevier Ltd. All rights reserved.
    1. Introduction. Offline handwriting recognition is a challenging task due to the high variability of data, as writing styles, the quality of the input image and the lack of contextual information. The recognition is commonly done in two steps. First, an optical character recognition (OCR) model is optimized for recognizing sequences of characters from input images of handwritten texts (Plötz and Fink (2009); Singh (2013)).
  • Word Embeddings: History & Behind the Scene
    Word Embeddings: History & Behind the Scene. Alfan F. Wicaksono, Information Retrieval Lab., Faculty of Computer Science, Universitas Indonesia.
    References: • Bengio, Y., Ducharme, R., Vincent, P., & Janvin, C. (2003). A Neural Probabilistic Language Model. The Journal of Machine Learning Research, 3, 1137–1155. • Mikolov, T., Corrado, G., Chen, K., & Dean, J. (2013). Efficient Estimation of Word Representations in Vector Space. Proceedings of the International Conference on Learning Representations (ICLR 2013), 1–12. • Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Distributed Representations of Words and Phrases and their Compositionality. NIPS, 1–9. • Morin, F., & Bengio, Y. (2005). Hierarchical Probabilistic Neural Network Language Model. AISTATS, 5.
    Good weblogs for high-level understanding: • http://sebastianruder.com/word-embeddings-1/ • Sebastian Ruder. On word embeddings - Part 2: Approximating the Softmax. http://sebastianruder.com/word-embeddings-softmax • https://www.tensorflow.org/tutorials/word2vec • https://www.gavagai.se/blog/2015/09/30/a-brief-history-of-word-embeddings/ Some slides were also borrowed from Dan Jurafsky's course slides: Word Meaning and Similarity, Stanford University.
    Terminology: • The term "Word Embedding" came from the deep learning community • The computational linguistics community prefers "Distributional Semantic Model" • Other terms: Distributed Representation, Semantic Vector Space, Word Space. (https://www.gavagai.se/blog/2015/09/30/a-brief-history-of-word-embeddings/)
    Before we learn Word Embeddings... Semantic Similarity: • Word Similarity: near-synonyms, e.g., "boat" and "ship", "car" and "bicycle" • Word Relatedness: can be related any way; similar: "boat" and "ship"; topical similarity: "boat" and "water", "car" and "gasoline". Why Word Similarity? • Document Classification • Document Clustering • Language Modeling • Information Retrieval • ...
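Word similarity in a vector space, as discussed in the slides above, typically reduces to cosine similarity between embedding vectors. The three-dimensional vectors below are made up purely for the illustration; real embeddings would come from a trained model such as word2vec.

```python
import numpy as np

# Made-up 3-dimensional vectors; real word embeddings come from a trained model.
vectors = {
    "boat":  np.array([0.9, 0.1, 0.0]),
    "ship":  np.array([0.8, 0.2, 0.1]),
    "water": np.array([0.4, 0.7, 0.1]),
    "car":   np.array([0.0, 0.2, 0.9]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(vectors["boat"], vectors["ship"]))    # near-synonyms: high similarity
print(cosine(vectors["boat"], vectors["water"]))   # topically related: moderate similarity
print(cosine(vectors["boat"], vectors["car"]))     # unrelated: low similarity
```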