Efficiently Fusing Pretrained Acoustic and Linguistic Encoders for Low-resource Speech Recognition


Cheng Yi, Shiyu Zhou, and Bo Xu, Member, IEEE

arXiv:2101.06699v2 [cs.CL] 24 Jan 2021

Cheng Yi is with the Institute of Automation, Chinese Academy of Sciences, China, and the University of Chinese Academy of Sciences, China (e-mail: [email protected]). Shiyu Zhou is with the Institute of Automation, Chinese Academy of Sciences, China (e-mail: [email protected]). Bo Xu is with the Institute of Automation, Chinese Academy of Sciences, China (e-mail: [email protected]).

Abstract—End-to-end models have achieved impressive results on the task of automatic speech recognition (ASR). For low-resource ASR tasks, however, labeled data can hardly satisfy the demand of end-to-end models. Self-supervised acoustic pre-training has already shown impressive ASR performance, while the transcription is still inadequate for language modeling in end-to-end models. In this work, we fuse a pre-trained acoustic encoder (wav2vec2.0) and a pre-trained linguistic encoder (BERT) into an end-to-end ASR model. The fused model only needs to learn the transfer from speech to language during fine-tuning on limited labeled data. The length of the two modalities is matched by a monotonic attention mechanism without additional parameters. Besides, a fully connected layer is introduced for the hidden mapping between modalities. We further propose a scheduled fine-tuning strategy to preserve and utilize the text context modeling ability of the pre-trained linguistic encoder. Experiments show the effective utilization of the pre-trained modules. Our model achieves better recognition performance on the CALLHOME corpus (15 hours) than other end-to-end models.

Index Terms—end-to-end modeling, low-resource ASR, pre-training, wav2vec, BERT

I. INTRODUCTION

Pipeline methods decompose the task of automatic speech recognition (ASR) into three components to model: acoustics, pronunciation, and language [1]. This can dramatically decrease the difficulty of ASR tasks, requiring much less labeled data to converge. With a self-supervised pre-trained acoustic model, the pipeline method can achieve impressive recognition accuracy with as few as 10 hours of transcribed speech [2]–[5]. However, it is criticized that the three components are combined by two fixed weights (pronunciation and language), which is inflexible [6]. On the contrary, end-to-end models integrate the three components into one and directly transform the input speech features to the output text. Among end-to-end models, the sequence-to-sequence (S2S) model, composed of an encoder and a decoder, is the dominant structure [7]–[10]. End-to-end modeling achieves better results than pipeline methods on most public datasets [9], [11]. Nevertheless, it requires at least hundreds of hours of transcribed speech for training: a deep neural network has an enormous parameter space to explore, which is hard to train with limited labeled data.

Pre-training can help the end-to-end model work well on the target ASR task in the low-resource condition [12]. Supervised pre-training, also known as supervised transfer learning [13], uses the knowledge learned from other tasks and applies it to the target one [8]. However, this solution requires sufficient and domain-similar labeled data, which is hard to satisfy. Another solution is to partly pre-train the end-to-end model with unlabeled data. For example, [14] pre-trains the acoustic encoder of the Transformer by masked predictive coding (MPC), and the model gets further improvement over a strong ASR baseline. Unlike the encoder, the decoder of the S2S model cannot be separately pre-trained, since it is conditioned on the acoustic representation. In other words, it is difficult to guarantee consistency between pre-training and fine-tuning for the decoder.

In this work, we do not attempt linguistic pre-training of the S2S decoder. Instead, we fuse a pre-trained acoustic encoder (wav2vec2.0) and a pre-trained linguistic encoder (BERT) into a single end-to-end ASR model. The fused model has been separately exposed to adequate speech and text data, so it only needs to learn the transfer from speech to language during fine-tuning with limited labeled data. To bridge the length gap between the speech and language modalities, a monotonic attention mechanism without additional parameters is applied. Besides, a fully connected layer is introduced for the mapping between the hidden states of the two modalities. Our model works in a non-autoregressive (NAR) way [15] due to the absence of a well-defined decoder structure. NAR models have a speed advantage [16] and can achieve results comparable to autoregressive ones [15]. Unlike in its text-only pre-training, the linguistic encoder is fed acoustic representations during fine-tuning. This inconsistency can severely impair the representation ability of the linguistic encoder. We help this module get along with the acoustic encoder by a scheduled fine-tuning strategy.
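To make the fused structure concrete, the following is a minimal PyTorch sketch of the forward pass described above, not the authors' released code. It assumes HuggingFace-style `w2v_encoder` and `bert` modules (a BERT that accepts `inputs_embeds`) and a batched `cif` length-matching function (a single-utterance sketch of CIF follows Section III-B); the label mixing and scheduled fine-tuning used during training are omitted.

```python
import torch
import torch.nn as nn

class W2vCifBert(nn.Module):
    """Sketch of the fusion: wav2vec2.0 -> CIF -> FC -> BERT -> vocabulary logits."""

    def __init__(self, w2v_encoder, bert, vocab_size, ac_dim=768, lm_dim=768):
        super().__init__()
        self.w2v_encoder = w2v_encoder    # pre-trained acoustic encoder
        self.bert = bert                  # pre-trained linguistic encoder
        # the only module without any pre-training: maps acoustic hidden
        # states into BERT's input embedding space (767 -> 768 in Fig. 1,
        # since CIF consumes the last acoustic channel as an attention scalar)
        self.fc = nn.Linear(ac_dim - 1, lm_dim)
        # a single output head is shown for brevity; Fig. 1 shows several
        # to_vocab projections
        self.to_vocab = nn.Linear(lm_dim, vocab_size)

    def forward(self, speech, n_tokens):
        h_ac = self.w2v_encoder(speech)   # (B, T, ac_dim)
        l = cif(h_ac, n_tokens)           # (B, U, ac_dim - 1), U = n_tokens; assumed batched
        h_lm = self.fc(l)                 # (B, U, lm_dim)
        h = self.bert(inputs_embeds=h_lm).last_hidden_state
        return self.to_vocab(h)           # (B, U, vocab_size) NAR logits
```

Because the output length U is fixed by CIF before BERT runs, all token positions are predicted in parallel, which is what makes the model non-autoregressive.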
II. RELATED WORK

Many works propose methods to leverage text data for the end-to-end model. Deep fusion [17] and cold fusion [18], [19] integrate a pre-trained auto-regressive language model (LM) into an S2S model. In this setting, the S2S model is randomly initialized and still needs a lot of labeled data for training. Knowledge distillation [20] requires a seed end-to-end ASR model, and it is not as convenient as the pretrain-and-finetune paradigm.

[10] builds an S2S model with a pre-trained acoustic encoder and a multilingual linguistic decoder. The decoder is part of an S2S model (mBART [21]) pre-trained on text data. Although this model achieves great results on the task of speech-to-text translation, it is not verified on ASR tasks. Besides, this work does not deal with the inconsistency we mentioned above.

[16] takes advantage of an NAR decoder to revise the greedy CTC outputs of the encoder, in which low-confidence tokens are masked. The decoder is required to predict the tokens corresponding to those masked positions by taking both the unmasked context and the acoustic representation into account. [22] iteratively predicts masked tokens based on partial results. As mentioned above, however, these NAR decoders cannot be separately pre-trained since they rely on the acoustic representation.

III. METHODOLOGY

We propose an innovative end-to-end model called w2v-cif-bert, which consists of wav2vec2.0 (pre-trained on a speech corpus), BERT (pre-trained on a text corpus), and a CIF mechanism connecting the two modules. The detailed realization of our model is demonstrated in Fig. 1. It is worth noting that only the fully connected layer in the middle (marked red) does not participate in any pre-training.

[Figure omitted.] Fig. 1. The structure of w2v-cif-bert. On the left part, a variant CIF mechanism converts h^AC to l without any additional module; on the right part, h^LM is not directly fed into BERT but mixed with embedded labels in advance during training. Modules connected by dotted lines are ignored during inference. Numbers in parentheses indicate hidden sizes in the model.

A. Acoustic Encoder

We choose wav2vec2.0 as the acoustic encoder since it has been well verified [4], [10], [23]. Wav2vec2.0 is a pre-trained encoder that converts raw speech signals into acoustic representations [4]. During pre-training, it masks the speech input in the latent space and solves a contrastive task. Wav2vec2.0 can outperform previous semi-supervised methods simply through fine-tuning on transcribed speech with the CTC criterion [24]. Wav2vec2.0 is composed of a feature encoder and a context network. The feature encoder outputs frames with a stride of about 20 ms and a receptive field of 25 ms of audio. The context network is a stack of self-attention blocks for context modeling [25]. In this work, wav2vec2.0 is used to convert speech x to the acoustic representation h^AC (the "w2v encoder" in our model, colored blue in Fig. 1).
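As an illustration of this step, the sketch below extracts an h^AC-like representation from a public wav2vec2.0 checkpoint through the HuggingFace transformers API; the checkpoint name is a stand-in, not necessarily the model used in the paper.

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
w2v = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")

waveform = torch.randn(16000)  # placeholder: 1 s of 16 kHz audio
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    h_ac = w2v(inputs.input_values).last_hidden_state

# ~20 ms stride -> roughly 49 frames for 1 s of audio, 768-dim each
print(h_ac.shape)  # torch.Size([1, 49, 768])
```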
B. Modality Adaptation

It is common to apply global attention to connect the acoustic and linguistic representations [25]. However, this mechanism is to blame for poor generalization to text length [11], which gets worse under sample scarcity. Instead, we use the continuous integrate-and-fire (CIF) mechanism [7] to bridge the discrepant sequence lengths. CIF constrains a monotonic alignment between the acoustic and linguistic representations, and this reasonable assumption drastically decreases the difficulty of learning the alignment.

In the original work [7], CIF uses a local convolution module to assign an attention value to each input frame. To avoid introducing additional parameters, we instead regard the last dimension of h^AC as the raw scalar attention value (before the sigmoid operation), as demonstrated in the left part of Fig. 1. The normalized attention values α_t are accumulated along the time dimension T, and a linguistic representation l_u is emitted whenever the accumulated α_t surpasses 1.0. During training, the sum of the attention values for one input sample is resized to the number of output tokens y (n^* = \|y\|). The formalized operations in CIF are:

\alpha_t = \mathrm{sigmoid}\left(h_t^{AC}[-1]\right) \quad (1)

\alpha_t' = \mathrm{resize}\left(\alpha_t \mid \alpha_{1:T};\, n^*\right) \quad (2)

l_u = \sum_{t} \alpha_t' \cdot h_t^{AC}[:-1] \quad (3)

where h^AC denotes the acoustic vectors of length T, and l the resulting linguistic representation.
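The following is a minimal single-utterance sketch of Eqs. (1)–(3). The resize of Eq. (2) is implemented here as a plain rescaling so that the weights sum to n^*, and the loop assumes no single frame carries more than one full token of weight; both are simplifications relative to the original CIF [7].

```python
import torch

def cif(h_ac: torch.Tensor, n_star: int, threshold: float = 1.0) -> torch.Tensor:
    """Parameter-free CIF for one utterance: h_ac is (T, D) -> (n_star, D-1)."""
    alpha = torch.sigmoid(h_ac[:, -1])      # Eq. (1): last channel -> scalar weights (T,)
    alpha = alpha * n_star / alpha.sum()    # Eq. (2): rescale so the weights sum to n*
    values = h_ac[:, :-1]                   # remaining D-1 channels carry the content

    outputs, acc = [], 0.0
    frame = torch.zeros(values.size(1))
    for a_t, v_t in zip(alpha, values):     # Eq. (3): integrate weighted frames, then fire
        if acc + a_t < threshold:
            acc = acc + a_t
            frame = frame + a_t * v_t
        else:
            rest = threshold - acc          # the part of a_t that completes this token
            outputs.append(frame + rest * v_t)
            acc = a_t - rest                # leftover weight opens the next token
            frame = acc * v_t
    if len(outputs) < n_star:               # flush the floating-point tail
        outputs.append(frame)
    return torch.stack(outputs)             # (n*, D-1) linguistic representations
```

At inference, the resize step is skipped, and the number of emitted tokens is determined by the accumulated weights alone.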