Topic Compositional Neural Language Model

Total Pages: 16

File Type: PDF, Size: 1020 KB

Topic Compositional Neural Language Model

Wenlin Wang¹, Zhe Gan¹, Wenqi Wang³, Dinghan Shen¹, Jiaji Huang², Wei Ping², Sanjeev Satheesh², Lawrence Carin¹
¹Duke University  ²Baidu Silicon Valley AI Lab  ³Purdue University

Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS) 2018, Lanzarote, Spain. PMLR: Volume 84. Copyright 2018 by the author(s).

Abstract

We propose a Topic Compositional Neural Language Model (TCNLM), a novel method designed to simultaneously capture both the global semantic meaning and the local word-ordering structure in a document. The TCNLM learns the global semantic coherence of a document via a neural topic model, and the probability of each learned latent topic is further used to build a Mixture-of-Experts (MoE) language model, where each expert (corresponding to one topic) is a recurrent neural network (RNN) that accounts for learning the local structure of a word sequence. In order to train the MoE model efficiently, a matrix factorization method is applied, by extending each weight matrix of the RNN to be an ensemble of topic-dependent weight matrices. The degree to which each member of the ensemble is used is tied to the document-dependent probability of the corresponding topics. Experimental results on several corpora show that the proposed approach outperforms both a pure RNN-based model and other topic-guided language models. Further, our model yields sensible topics, and also has the capacity to generate meaningful sentences conditioned on given topics.

1 Introduction

A language model is a fundamental component to natural language processing (NLP). It plays a key role in many traditional NLP tasks, ranging from speech recognition (Mikolov et al., 2010; Arisoy et al., 2012; Sriram et al., 2017), machine translation (Schwenk et al., 2012; Vaswani et al., 2013) to image captioning (Mao et al., 2014; Devlin et al., 2015). Training a good language model often improves the underlying metrics of these applications, e.g., word error rates for speech recognition and BLEU scores (Papineni et al., 2002) for machine translation. Hence, learning a powerful language model has become a central task in NLP. Typically, the primary goal of a language model is to predict distributions over words, which has to encode both the semantic knowledge and grammatical structure in the documents. RNN-based neural language models have yielded state-of-the-art performance (Jozefowicz et al., 2016; Shazeer et al., 2017). However, they are typically applied only at the sentence level, without access to the broad document context. Such models may consequently fail to capture long-term dependencies of a document (Dieng et al., 2016).

Fortunately, such broader context information is of a semantic nature, and can be captured by a topic model. Topic models have been studied for decades and have become a powerful tool for extracting high-level semantic structure of document collections, by inferring latent topics. The classical Latent Dirichlet Allocation (LDA) method (Blei et al., 2003) and its variants, including recent work on neural topic models (Wan et al., 2012; Cao et al., 2015; Miao et al., 2017), have been useful for a plethora of applications in NLP.

Although language models that leverage topics have shown promise, they also have several limitations. For example, some of the existing methods use only pre-trained topic models (Mikolov and Zweig, 2012), without considering the word-sequence prediction task of interest.
Another key limitation of the existing methods lies in the integration of the learned topics into the language model; e.g., either through concatenating the topic vector as an additional feature of RNNs (Mikolov and Zweig, 2012; Lau et al., 2017), or re-scoring the predicted distribution over words using the topic vector (Dieng et al., 2016). The former requires a balance between the number of RNN hidden units and the number of topics, while the latter has to carefully design the vocabulary of the topic model.

Motivated by the aforementioned goals and limitations of existing approaches, we propose the Topic Compositional Neural Language Model (TCNLM), a new approach to simultaneously learn a neural topic model and a neural language model. As depicted in Figure 1, TCNLM learns the latent topics within a variational autoencoder (Kingma and Welling, 2013) framework, and the designed latent code t quantifies the probability of topic usage within a document. Latent code t is further used in a Mixture-of-Experts model (Hu et al., 1997), where each latent topic has a corresponding language model (expert). A combination of these "experts," weighted by the topic-usage probabilities, results in our prediction for the sentences. A matrix factorization approach is further utilized to reduce computational cost as well as prevent overfitting. The entire model is trained end-to-end by maximizing the variational lower bound. Through a comprehensive set of experiments, we demonstrate that the proposed model is able to significantly reduce the perplexity of a language model and effectively assemble the meaning of topics to generate meaningful sentences. Both quantitative and qualitative comparisons are provided to verify the superiority of our model.

Figure 1: The overall architecture of the proposed model. (Left, the neural topic model: an MLP maps the bag-of-words d to µ and log σ²; θ ∼ N(µ, σ²) is drawn and the topic proportion t = g(θ) is produced, e.g., Law 0.36, Politics 0.15, Education 0.11, Market 0.10; p(d|t) is the decoder and q(t|d) the inference network. Right, the neural language model: t modulates an LSTM that reads "<sos> The guilty ..." and predicts "The judge ... <eos>".)

2 Preliminaries

We briefly review RNN-based language models and traditional probabilistic topic models.

Language Model. A language model aims to learn a probability distribution over a sequence of words in a pre-defined vocabulary. We denote V as the vocabulary set and {y_1, ..., y_M} as a sequence of words, with each y_m ∈ V. A language model defines the likelihood of the sequence through a joint probability distribution

p(y_1, ..., y_M) = p(y_1) ∏_{m=2}^{M} p(y_m | y_{1:m−1}) .   (1)

RNN-based language models define the conditional probability of each word y_m given all the previous words y_{1:m−1} through the hidden state h_m:

p(y_m | y_{1:m−1}) = p(y_m | h_m)   (2)
h_m = f(h_{m−1}, x_m) .   (3)

The function f(·) is typically implemented as a basic RNN cell, a Long Short-Term Memory (LSTM) cell (Hochreiter and Schmidhuber, 1997), or a Gated Recurrent Unit (GRU) cell (Cho et al., 2014). The input and output words are related via the relation x_m = y_{m−1}.
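To make Eqs. (1)-(3) concrete, the following is a minimal sketch of such an RNN language model in PyTorch. It is illustrative only: the LSTM cell matches the architecture in Figure 1, but the layer sizes and all names are our assumptions, not the authors' implementation.

```python
# Minimal sketch of the RNN language model in Eqs. (1)-(3).
# Layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class RNNLanguageModel(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)              # x_m = y_{m-1}
        self.cell = nn.LSTM(embed_dim, hidden_dim, batch_first=True)  # h_m = f(h_{m-1}, x_m), Eq. (3)
        self.out = nn.Linear(hidden_dim, vocab_size)                  # logits for p(y_m | h_m), Eq. (2)

    def forward(self, prev_words):
        # prev_words: (batch, M) indices of <sos>, y_1, ..., y_{M-1}
        x = self.embed(prev_words)
        h, _ = self.cell(x)
        return torch.log_softmax(self.out(h), dim=-1)  # log p(y_m | y_{1:m-1})
```

Summing the per-step log-probabilities over m recovers the joint log-likelihood of Eq. (1).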
Topic Model. A topic model is a probabilistic graphical representation for uncovering the underlying semantic structure of a document collection. Latent Dirichlet Allocation (LDA) (Blei et al., 2003), for example, provides a robust and scalable approach for document modeling, by introducing latent variables for each token, indicating its topic assignment. Specifically, let t denote the topic proportion for document d, and z_n represent the topic assignment for word w_n. The Dirichlet distribution is employed as the prior of t. The generative process of LDA may be summarized as:

t ∼ Dir(α_0),  z_n ∼ Discrete(t),  w_n ∼ Discrete(β_{z_n}),

where β_{z_n} represents the distribution over words for topic z_n, α_0 is the hyper-parameter of the Dirichlet prior, n ∈ [1, N_d], and N_d is the number of words in document d. The marginal likelihood for document d can be expressed as

p(d | α_0, β) = ∫_t p(t | α_0) ∏_n ∑_{z_n} p(w_n | β_{z_n}) p(z_n | t) dt .

3 Topic Compositional Neural Language Model

We describe the proposed TCNLM, as illustrated in Figure 1. Our model consists of two key components: (i) a neural topic model (NTM), and (ii) a neural language model (NLM). The NTM aims to capture the long-range semantic meanings across the document, while the NLM is designed to learn the local semantic and syntactic relationships between words.

3.1 Neural Topic Model

Let d ∈ Z_+^D denote the bag-of-words representation of a document, with Z_+ denoting nonnegative integers. D is the vocabulary size, and each element of d reflects a count of the number of times the corresponding word occurs in the document. Distinct from LDA (Blei et al., 2003), we pass a Gaussian random vector through a softmax function to parameterize the multinomial document topic distributions (Miao et al., 2017). Specifically, the generative process of the NTM is

θ ∼ N(µ_0, σ_0²),  t = g(θ),

i.e., the topic proportion t is obtained by the softmax transformation g(·) applied to the Gaussian draw θ.

Diversity Regularizer. Redundancy in inferred topics is a common issue existing in general topic models. In order to address this issue, it is straightforward to regularize the row-wise distance between each pair of topics to diversify the topics. Following Xie et al. (2015); Miao et al. (2017), we apply a topic diversity regularization while carrying out the inference. Specifically, the distance between a pair of topics is measured by their cosine distance

a(β_i, β_j) = arccos( |β_i · β_j| / (‖β_i‖_2 ‖β_j‖_2) ) .

The mean angle of all pairs of T topics is φ = (1/T²) ∑_i ∑_j a(β_i, β_j), and the variance is ν = (1/T²) ∑_i ∑_j (a(β_i, β_j) − φ)². Finally, the topic diversity regularization is defined as R = φ − ν.

3.2 Neural Language Model

We propose a Mixture-of-Experts (MoE) language model
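For concreteness, the diversity regularizer of Section 3.1 reduces to a few lines of array code. The sketch below is our reading of the formulas (a hypothetical NumPy implementation, not the authors' release), taking the topic-word matrix β of shape T × D as input:

```python
# Sketch of the topic diversity regularizer R = phi - nu.
import numpy as np

def diversity_regularizer(beta):
    # beta: (T, D) matrix whose rows are the T topic-word distributions.
    norms = np.linalg.norm(beta, axis=1, keepdims=True)
    cos = np.abs(beta @ beta.T) / (norms * norms.T)  # |b_i . b_j| / (||b_i|| ||b_j||)
    angles = np.arccos(np.clip(cos, 0.0, 1.0))       # a(beta_i, beta_j)
    phi = angles.mean()                              # mean angle over all T^2 pairs
    nu = ((angles - phi) ** 2).mean()                # variance of the pairwise angles
    return phi - nu                                  # R = phi - nu
```

Adding R to the training objective rewards topic pairs whose word distributions are mutually near-orthogonal, with uniformly large pairwise angles.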
Recommended publications
  • Knowledge-Powered Deep Learning for Word Embedding
    Knowledge-Powered Deep Learning for Word Embedding. Jiang Bian, Bin Gao, and Tie-Yan Liu. Microsoft Research. {jibian,bingao,tyliu}@microsoft.com

    Abstract. The basis of applying deep learning to solve natural language processing tasks is to obtain high-quality distributed representations of words, i.e., word embeddings, from large amounts of text data. However, text itself usually contains incomplete and ambiguous information, which makes it necessary to leverage extra knowledge to understand it. Fortunately, text itself already contains well-defined morphological and syntactic knowledge; moreover, the large amount of texts on the Web enables the extraction of plenty of semantic knowledge. Therefore, it makes sense to design novel deep learning algorithms and systems in order to leverage the above knowledge to compute more effective word embeddings. In this paper, we conduct an empirical study on the capacity of leveraging morphological, syntactic, and semantic knowledge to achieve high-quality word embeddings. Our study explores these types of knowledge to define a new basis for word representation, provide additional input information, and serve as auxiliary supervision in deep learning, respectively. Experiments on an analogical reasoning task, a word similarity task, and a word completion task have all demonstrated that knowledge-powered deep learning can enhance the effectiveness of word embedding.

    1 Introduction. With the rapid development of deep learning techniques in recent years, it has drawn increasing attention to train complex and deep models on large amounts of data, in order to solve a wide range of text mining and natural language processing (NLP) tasks [4, 1, 8, 13, 19, 20].
  • Recurrent Neural Network Based Language Model
    INTERSPEECH 2010. Recurrent neural network based language model. Tomáš Mikolov¹,², Martin Karafiát¹, Lukáš Burget¹, Jan "Honza" Černocký¹, Sanjeev Khudanpur². ¹Speech@FIT, Brno University of Technology, Czech Republic; ²Department of Electrical and Computer Engineering, Johns Hopkins University, USA. {imikolov,karafiat,burget,cernocky}@fit.vutbr.cz, [email protected]

    Abstract. A new recurrent neural network based language model (RNN LM) with applications to speech recognition is presented. Results indicate that it is possible to obtain around 50% reduction of perplexity by using a mixture of several RNN LMs, compared to a state-of-the-art backoff language model. Speech recognition experiments show around 18% reduction of word error rate on the Wall Street Journal task when comparing models trained on the same amount of data, and around 5% on the much harder NIST RT05 task, even when the backoff model is trained on much more data than the RNN LM. We provide ample empirical evidence to suggest that connectionist language models are superior to standard n-gram techniques, except for their high computational (training) complexity. Index Terms: language modeling, recurrent neural networks, speech recognition.

    Figure 1: Simple recurrent neural network. (INPUT(t) and CONTEXT(t-1) feed the hidden CONTEXT(t) layer, from which OUTPUT(t) is predicted.)

    1. Introduction. Sequential data prediction is considered by many as a key problem in machine learning and artificial intelligence (see for example [1]). The goal of statistical language modeling is to predict the next word in textual data given context; thus we […] complex and often work well only for systems based on very limited amounts of training data.
  • Unified Language Model Pre-Training for Natural Language Understanding and Generation
    Unified Language Model Pre-training for Natural Language Understanding and Generation. Li Dong*, Nan Yang*, Wenhui Wang*, Furu Wei*†, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, Hsiao-Wuen Hon. Microsoft Research. {lidong1,nanya,wenwan,fuwei}@microsoft.com {xiaodl,yuwan,jfgao,mingzhou,hon}@microsoft.com

    Abstract. This paper presents a new UNIfied pre-trained Language Model (UNILM) that can be fine-tuned for both natural language understanding and generation tasks. The model is pre-trained using three types of language modeling tasks: unidirectional, bidirectional, and sequence-to-sequence prediction. The unified modeling is achieved by employing a shared Transformer network and utilizing specific self-attention masks to control what context the prediction conditions on. UNILM compares favorably with BERT on the GLUE benchmark, and the SQuAD 2.0 and CoQA question answering tasks. Moreover, UNILM achieves new state-of-the-art results on five natural language generation datasets, including improving the CNN/DailyMail abstractive summarization ROUGE-L to 40.51 (2.04 absolute improvement), the Gigaword abstractive summarization ROUGE-L to 35.75 (0.86 absolute improvement), the CoQA generative question answering F1 score to 82.5 (37.1 absolute improvement), the SQuAD question generation BLEU-4 to 22.12 (3.75 absolute improvement), and the DSTC7 document-grounded dialog response generation NIST-4 to 2.67 (human performance is 2.65). The code and pre-trained models are available at https://github.com/microsoft/unilm.

    1 Introduction. Language model (LM) pre-training has substantially advanced the state of the art across a variety of natural language processing tasks [8, 29, 19, 31, 9, 1].
  • Hello, It's GPT-2
    Hello, It's GPT-2 - How Can I Help You? Towards the Use of Pretrained Language Models for Task-Oriented Dialogue Systems. Paweł Budzianowski¹,²,³ and Ivan Vulić²,³. ¹Engineering Department, Cambridge University, UK; ²Language Technology Lab, Cambridge University, UK; ³PolyAI Limited, London, UK. [email protected], [email protected]

    Abstract. Data scarcity is a long-standing and crucial challenge that hinders quick development of task-oriented dialogue systems across multiple domains: task-oriented dialogue models are expected to learn grammar, syntax, dialogue reasoning, decision making, and language generation from absurdly small amounts of task-specific data. In this paper, we demonstrate that recent progress in language modeling pre-training and transfer learning shows promise to overcome this problem. We propose a task-oriented dialogue model that operates solely on text input: it effectively bypasses explicit policy and language generation modules. Building on top of the TransferTransfo framework […]

    […] (Young et al., 2013). On the other hand, open-domain conversational bots (Li et al., 2017; Serban et al., 2017) can leverage large amounts of freely available unannotated data (Ritter et al., 2010; Henderson et al., 2019a). Large corpora allow for training end-to-end neural models, which typically rely on sequence-to-sequence architectures (Sutskever et al., 2014). Although highly data-driven, such systems are prone to producing unreliable and meaningless responses, which impedes their deployment in the actual conversational applications (Li et al., 2017). Due to the unresolved issues with the end-to-end architectures, the focus has been extended to retrieval-based models.
  • N-Gram Language Modeling Tutorial, Dustin Hillard and Sarah Petersen, Lecture Notes Courtesy of Prof. Mari Ostendorf
    N-gram Language Modeling Tutorial. Dustin Hillard and Sarah Petersen. Lecture notes courtesy of Prof. Mari Ostendorf.

    Outline: • Statistical Language Model (LM) Basics • n-gram models • Class LMs • Cache LMs • Mixtures • Empirical observations (Goodman CSL 2001) • Factored LMs

    Part I: Statistical Language Model (LM) Basics • What is a statistical LM and why are they interesting? • Evaluating LMs • History equivalence classes

    What is a statistical language model? A stochastic process model for word sequences. A mechanism for computing the probability of p(w_1, ..., w_T).

    Why are LMs interesting? • Important component of a speech recognition system – Helps discriminate between similar-sounding words – Helps reduce search costs • In statistical machine translation, a language model characterizes the target language, captures fluency • For selecting alternatives in summarization, generation • Text classification (style, reading level, language, topic, ...) • Language models can be used for more than just words – letter sequences (language identification) – speech act sequence modeling – case and punctuation restoration

    Evaluating LMs. Evaluating LMs in the context of an application can be expensive, so LMs are usually evaluated on their own in terms of perplexity:

    PP = 2^{H̃_r}  where  H̃_r = −(1/T) log_2 p(w_1, ..., w_T),

    where {w_1, ..., w_T} is held-out test data that provides the empirical distribution q(·) in the cross-entropy formula H̃ = −∑_x q(x) log p(x), and p(·) is the LM estimated on a training set. Interpretations: • Entropy rate: lower entropy means that it is easier to predict the next symbol and hence easier to rule out alternatives when combined with other models (small H̃_r → small PP) • Average branching factor: when a distribution is uniform for a vocabulary of size V, then entropy is log_2 V, and perplexity is V.
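    As a worked illustration of the definition above, perplexity follows directly from the per-word log-probabilities on held-out text. The snippet below is a sketch under the notes' definitions, not code from the tutorial:

```python
# Perplexity PP = 2^{H_r}, with H_r the empirical cross-entropy rate in bits.
import math

def perplexity(log2_probs):
    """log2_probs: log_2 p(w_t | history) for each of the T held-out words."""
    T = len(log2_probs)
    h_r = -sum(log2_probs) / T
    return 2 ** h_r

# A uniform model over a vocabulary of size V = 10000 assigns each word
# probability 1/V, so PP = V: the average-branching-factor interpretation.
print(perplexity([math.log2(1 / 10000)] * 50))  # ~10000.0
```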
  • A General Language Model for Information Retrieval, Fei Song and W. Bruce Croft
    A General Language Model for Information Retrieval. Fei Song, Dept. of Computing and Info. Science, University of Guelph, Guelph, Ontario, Canada N1G 2W1, [email protected]; W. Bruce Croft, Dept. of Computer Science, University of Massachusetts, Amherst, Massachusetts 01003, [email protected]

    ABSTRACT. Statistical language modeling has been successfully used for speech recognition, part-of-speech tagging, and syntactic parsing. Recently, it has also been applied to information retrieval. According to this new paradigm, each document is viewed as a language sample, and a query as a generation process. The retrieved documents are ranked based on the probabilities of producing a query from the corresponding language models of these documents. In this paper, we will present a new language model for information retrieval, which is based on a range of data smoothing techniques, including the Good-Turing estimate, curve-fitting functions, and model combinations.

    However, estimating the probabilities of word sequences can be expensive, since sentences can be arbitrarily long and the size of a corpus needs to be very large. In practice, the statistical language model is often approximated by N-gram models.

    Unigram: P(w_{1,n}) = P(w_1) P(w_2) ⋯ P(w_n)
    Bigram: P(w_{1,n}) = P(w_1) P(w_2|w_1) ⋯ P(w_n|w_{n−1})
    Trigram: P(w_{1,n}) = P(w_1) P(w_2|w_1) P(w_3|w_{1,2}) ⋯ P(w_n|w_{n−2,n−1})

    The unigram model makes a strong assumption that each word occurs independently, and consequently, the probability of a
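    A toy illustration of these factorizations (hypothetical counts on a tiny corpus, not the paper's implementation; a real IR model would smooth the estimates with techniques such as the Good-Turing estimate discussed above):

```python
# Maximum-likelihood unigram and bigram estimates on a toy corpus.
from collections import Counter

corpus = "the boat is on the water the ship is on the sea".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
N = len(corpus)

def p_unigram(w):
    return unigrams[w] / N

def p_bigram(w, prev):
    return bigrams[(prev, w)] / unigrams[prev]

# Bigram factorization of P(the boat) = P(the) * P(boat | the)
print(p_unigram("the") * p_bigram("boat", "the"))  # (4/12) * (1/4) = 1/12
```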
  • A Unified Multilingual Handwriting Recognition System Using Multigrams Sub-Lexical Units
    A Unified Multilingual Handwriting Recognition System using multigrams sub-lexical units. Wassim Swaileh*, Yann Soullard, Thierry Paquet. LITIS Laboratory, EA 4108, Normandie University, University of Rouen, Rouen, France.

    ABSTRACT. We address the design of a unified multilingual system for handwriting recognition. Most multilingual systems rest on specialized models that are trained on a single language and one of them is selected at test time. While some recognition systems are based on a unified optical model, dealing with a unified language model remains a major issue, as traditional language models are generally trained on corpora composed of large word lexicons per language. Here, we bring a solution by considering language models based on sub-lexical units, called multigrams. Dealing with multigrams strongly reduces the lexicon size and thus decreases the language model complexity. This makes possible the design of an end-to-end unified multilingual recognition system where both a single optical model and a single language model are trained on all the languages. We discuss the impact of the language unification on each model and show that our system reaches the performance of state-of-the-art methods with a strong reduction of the complexity. © 2018 Elsevier Ltd. All rights reserved.

    1. Introduction. Offline handwriting recognition is a challenging task due to the high variability of data, such as writing styles, the quality of the input image and the lack of contextual information. The recognition is commonly done in two steps. First, an optical character recognition (OCR) model is optimized for recognizing sequences of characters from input images of handwritten texts (Plötz and Fink (2009); Singh (2013)).
  • Word Embeddings: History & Behind the Scene
    Word Embeddings: History & Behind the Scene. Alfan F. Wicaksono, Information Retrieval Lab., Faculty of Computer Science, Universitas Indonesia.

    References: • Bengio, Y., Ducharme, R., Vincent, P., & Janvin, C. (2003). A Neural Probabilistic Language Model. The Journal of Machine Learning Research, 3, 1137–1155. • Mikolov, T., Corrado, G., Chen, K., & Dean, J. (2013). Efficient Estimation of Word Representations in Vector Space. Proceedings of the International Conference on Learning Representations (ICLR 2013), 1–12. • Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Distributed Representations of Words and Phrases and their Compositionality. NIPS, 1–9. • Morin, F., & Bengio, Y. (2005). Hierarchical Probabilistic Neural Network Language Model. AISTATS, 5.

    Good weblogs for high-level understanding: • http://sebastianruder.com/word-embeddings-1/ • Sebastian Ruder. On word embeddings - Part 2: Approximating the Softmax. http://sebastianruder.com/word-embeddings-softmax • https://www.tensorflow.org/tutorials/word2vec • https://www.gavagai.se/blog/2015/09/30/a-brief-history-of-word-embeddings/

    Some slides were also borrowed from Dan Jurafsky's course slides: Word Meaning and Similarity, Stanford University.

    Terminology: • The term "Word Embedding" came from the deep learning community • The computational linguistics community prefers "Distributional Semantic Model" • Other terms: – Distributed Representation – Semantic Vector Space – Word Space (https://www.gavagai.se/blog/2015/09/30/a-brief-history-of-word-embeddings/)

    Before we learn Word Embeddings... Semantic Similarity: • Word Similarity – Near-synonyms – "boat" and "ship", "car" and "bicycle" • Word Relatedness – Can be related in any way – Similar: "boat" and "ship" – Topical Similarity: "boat" and "water"; "car" and "gasoline"

    Why Word Similarity? • Document Classification • Document Clustering • Language Modeling • Information Retrieval • ...
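    The similarity/relatedness distinction above is usually operationalized as the cosine between embedding vectors. A toy sketch with invented 2-D vectors (purely illustrative; real embeddings have hundreds of dimensions):

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical 2-D embeddings for illustration only.
vec = {
    "boat":  np.array([0.9, 0.1]),
    "ship":  np.array([0.8, 0.2]),
    "water": np.array([0.3, 0.7]),
}
print(cosine(vec["boat"], vec["ship"]))   # high: near-synonyms
print(cosine(vec["boat"], vec["water"]))  # lower: only topically related
```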
  • Language Modelling for Speech Recognition
    Language Modelling for Speech Recognition. • Introduction • n-gram language models • Probability estimation • Evaluation • Beyond n-grams

    Language Modelling for Speech Recognition: • Speech recognizers seek the word sequence Ŵ which is most likely to be produced from acoustic evidence A:

    P(Ŵ|A) = max_W P(W|A) ∝ max_W P(A|W) P(W)

    • Speech recognition involves acoustic processing, acoustic modelling, language modelling, and search • Language models (LMs) assign a probability estimate P(W) to word sequences W = {w_1, ..., w_n} subject to ∑_W P(W) = 1 • Language models help guide and constrain the search among alternative word hypotheses during recognition

    Language Model Requirements (diagram): Constraint · Coverage · NLP · Understanding

    Finite-State Networks (FSN). (Example word network over phrases such as "show me all the flights", "give restaurants", "display".) • Language space defined by a word network or graph • Describable by a regular phrase structure grammar, e.g., A ⇒ aB | a • Finite coverage can present difficulties for ASR • Graph arcs or rules can be augmented with probabilities

    Context-Free Grammars (CFGs). (Example parse: VP → V NP, NP → D N, for "display the flights".) • Language space defined by context-free rewrite rules, e.g., A ⇒ BC | a • More powerful representation than FSNs • Stochastic CFG rules have associated probabilities which can be learned automatically from a corpus • Finite coverage can present difficulties
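    The decision rule P(Ŵ|A) ∝ max_W P(A|W) P(W) can be made concrete with a toy re-scoring example (all probabilities invented for illustration):

```python
# Noisy-channel decoding: pick the hypothesis maximizing P(A|W) * P(W).
hypotheses = {
    # W: (acoustic likelihood P(A|W), language-model prior P(W))
    "show me all the flights": (1e-4, 3e-3),
    "show me all the fights":  (2e-4, 1e-4),
}
best = max(hypotheses, key=lambda w: hypotheses[w][0] * hypotheses[w][1])
print(best)  # the LM prior resolves the acoustically confusable pair
```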
  • A Pretrained Language Model for Scientific Text
    SCIBERT: A Pretrained Language Model for Scientific Text. Iz Beltagy, Kyle Lo, Arman Cohan. Allen Institute for Artificial Intelligence, Seattle, WA, USA. {beltagy,kylel,armanc}@allenai.org

    Abstract. Obtaining large-scale annotated data for NLP tasks in the scientific domain is challenging and expensive. We release SCIBERT, a pretrained language model based on BERT (Devlin et al., 2019) to address the lack of high-quality, large-scale labeled scientific data. SCIBERT leverages unsupervised pretraining on a large multi-domain corpus of scientific publications to improve performance on downstream scientific NLP tasks. We evaluate on a suite of tasks including sequence tagging, sentence classification and dependency parsing, with datasets from a variety of scientific domains. We demonstrate statistically significant improvements over BERT and achieve new state-of-the-art results on several of these tasks. The code and pretrained models are available at https://github.

    […] neural architectures. Leveraging the success of unsupervised pretraining has become especially important when task-specific annotations are difficult to obtain, like in scientific NLP. Yet while both BERT and ELMo have released pretrained models, they are still trained on general domain corpora such as news articles and Wikipedia. In this work, we make the following contributions: (i) We release SCIBERT, a new resource demonstrated to improve performance on a range of NLP tasks in the scientific domain. SCIBERT is a pretrained language model based on BERT but trained on a large corpus of scientific text. (ii) We perform extensive experimentation to investigate the performance of finetuning versus task-specific architectures atop frozen embeddings.
  • Improving Text Recognition Using Optical and Language Model Writer Adaptation
    Improving text recognition using optical and language model writer adaptation. Yann Soullard, Wassim Swaileh, Pierrick Tranouez, Thierry Paquet (LITIS lab, Normandie University, Université de Rouen Normandie, 76800 Saint Etienne du Rouvray, France; Email: ffi[email protected]); Clément Chatelain (LITIS lab, INSA Rouen Normandie, 76800 Saint Etienne du Rouvray, France; Email: [email protected])

    Abstract. State-of-the-art methods for handwriting text recognition are based on deep learning approaches and language modeling that require large data sets during training. In practice, there are some applications where the system processes mono-writer documents, and would thus benefit from being trained on examples from that writer. However, it is not common to have numerous examples coming from just one writer. In this paper, we propose an approach to adapt both the optical model and the language model to a particular writer, from a generic system trained on large data sets with a variety of examples. We show the benefits of the optical and language model writer adaptation. Our approach reaches competitive results on the READ 2018 data set, which is dedicated to model […]

    […] shown to be very efficient on various recognition tasks such as handwriting recognition, signature identification or medical imaging [4], [5]. In addition, the language itself, but also lexicons as well as word frequencies may not match the statistical distributions learned during language model training. Statistical language model adaptation algorithms [6] have also been designed in this respect. This paper presents an approach to adapt a generic text recognition system to a particular writer. In this respect, both the optical model and the language model of the generic system are considered during the adaptation phase.
  • Exploring the Limits of Language Modeling
    Exploring the Limits of Language Modeling. Rafal Jozefowicz [email protected], Oriol Vinyals [email protected], Mike Schuster [email protected], Noam Shazeer [email protected], Yonghui Wu [email protected]. Google Brain.

    Abstract. In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity […]

    […] language models improves the underlying metrics of the downstream task (such as word error rate for speech recognition, or BLEU score for translation), which makes the task of training better LMs valuable by itself. Further, when trained on vast amounts of data, language models compactly extract knowledge encoded in the training data. For example, when trained on movie subtitles (Serban et al., 2015; Vinyals & Le, 2015), these language models are able to generate basic answers to questions about object colors, facts about people, etc. Lastly, recently proposed sequence-to-sequence models employ conditional language models (Mikolov & Zweig, 2012) as their key component to solve diverse tasks like machine translation (Sutskever et al., 2014; Cho et al., 2014; Kalchbrenner et al., 2014) or video generation (Srivastava et al., 2015a).