
Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence

Improving Domain-Independent Cloud-Based Speech Recognition with Domain-Dependent Phonetic Post-Processing

Johannes Twiefel, Timo Baumann, Stefan Heinrich, and Stefan Wermter
University of Hamburg, Department of Informatics
Vogt-Kölln-Straße 30, D-22527 Hamburg, Germany

Abstract

Automatic speech recognition (ASR) technology has been developed to such a level that off-the-shelf distributed speech recognition services are available (free of cost), which allow researchers to integrate speech into their applications with little development effort or expert knowledge, leading to better results compared with previously used open-source tools. Often, however, such services do not accept language models or grammars but process free speech from any domain. While results are very good given the enormous size of the search space, results frequently contain out-of-domain words or constructs that cannot be understood by subsequent domain-dependent natural language understanding (NLU) components. We present a versatile post-processing technique based on phonetic distance that integrates domain knowledge with open-domain ASR results, leading to improved ASR performance. Notably, our technique is able to make use of domain restrictions with various degrees of domain knowledge, ranging from pure vocabulary restrictions via grammars or N-grams to restrictions of the acceptable utterances. We present results for a variety of corpora (mainly from human-robot interaction) where our combined approach significantly outperforms Google ASR as well as a plain open-source ASR solution.

1 Introduction

Google, Apple, Bing, and similar services offer very good and easily accessible cloud-based automatic speech recognition (ASR) for many languages free of charge, which can also take advantage of constant improvements on the server side (Schalkwyk et al. 2010).

However, results received from such a service cannot be limited to a domain (e.g. by restricting the recognizer to a fixed vocabulary, grammar, or statistical language model), which may result in poor application performance (Morbini et al. 2013). For example, out-of-vocabulary errors for custom-made natural language understanding components may be higher with such generic services than with simpler and less reliable open-source speech recognition solutions, e.g. based on Sphinx (especially for languages where good open-source acoustic models are unavailable).

To address this gap, we propose a simple post-processing technique based on phonetic similarity that aligns the output from Google ASR (or any other similar service with ‘unrestrictable’ recognition output) to a specified sub-language that is more suitable for natural language understanding (NLU) in a given domain. The result string from the speech recognizer is transformed back into a sequence of phonemes, which is then rescored against a language model based on domain knowledge. We consider phonemes to be the appropriate level of detail that (a) can still be recovered from ASR result strings, and (b) remains relatively stable under the ASR errors caused by inadequate language modelling. Our method is based on Sphinx-4 (Walker et al. 2004) and hence works with various kinds of language specifications, such as grammars, statistical language models (SLMs), or blends of both.

In the remainder of the paper, we discuss related previous work in the following section, detail our approach to post-processing in Section 3, and present considerations on the implementation in Section 4. We describe an experiment in which we use our post-processing technique in Section 5 and discuss the results in Section 6.
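As a minimal illustration of this idea (a sketch only, not the Sphinx-4-based implementation described later), the following Python fragment maps an unrestricted ASR result back to phonemes via a small CMUdict-style lexicon and selects the closest sentence from a fixed list of in-domain sentences by phoneme-level Levenshtein distance; the lexicon entries and helper names are illustrative assumptions.

    # Sketch of sentence-list rescoring; lexicon and helpers are illustrative only.
    LEXICON = {                       # tiny CMUdict-style lexicon (ARPAbet, no stress marks)
        "i":      ["AY"],
        "want":   ["W", "AA", "N", "T"],
        "to":     ["T", "UW"],
        "learn":  ["L", "ER", "N"],
        "lauren": ["L", "AO", "R", "AH", "N"],
        "dance":  ["D", "AE", "N", "S"],
    }

    def to_phonemes(sentence):
        """Transform a word string back into a flat phoneme sequence via lexicon lookup."""
        phones = []
        for word in sentence.lower().split():
            # Out-of-lexicon words would need a grapheme-to-phoneme fallback in practice.
            phones.extend(LEXICON.get(word, list(word)))
        return phones

    def levenshtein(a, b):
        """Standard edit distance between two sequences (unit insert/delete/substitute costs)."""
        prev = list(range(len(b) + 1))
        for i, x in enumerate(a, 1):
            curr = [i]
            for j, y in enumerate(b, 1):
                curr.append(min(prev[j] + 1,               # deletion
                                curr[j - 1] + 1,           # insertion
                                prev[j - 1] + (x != y)))   # substitution (0 if equal)
            prev = curr
        return prev[-1]

    def rescore(asr_result, domain_sentences):
        """Map an unrestricted ASR result onto the phonetically closest in-domain sentence."""
        hyp = to_phonemes(asr_result)
        return min(domain_sentences, key=lambda s: levenshtein(hyp, to_phonemes(s)))

    print(rescore("i want to lauren", ["i want to learn", "i want to dance"]))
    # -> "i want to learn"

Sentence lists are only the most restrictive of the language specifications mentioned above; the same phoneme-level comparison is embedded into the Sphinx-4 decoder to support grammars and SLMs as well.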
2 Related Work

Search by Voice (Schalkwyk et al. 2010), developed by Google Inc., is a distributed speech recognition system. While voice activity detection (VAD) and feature extraction may be performed on the client (depending on the client’s capabilities), the computationally expensive decoding step of speech recognition is performed on Google’s servers, which then return a list of hypotheses to the client. The system employs acoustic models derived from GOOG-411 (Van Heerden, Schalkwyk, and Strope 2009), a telephony service operated by Google. GOOG-411 enabled its users to search for telephone numbers of businesses using speech recognition and web search, and it collected large amounts of acoustic training data (about 5,000 hours until 2010). Given this origin, a disadvantage of Search by Voice is its language model, which is optimized for web searches. In addition, there is no public interface for replacing Google’s ASR language model with a custom domain-dependent one. Therefore, the benefit of the good acoustic models cannot be exploited well in domain-specific projects.

2.1 Distributed vs. Local Speech Recognition

In the analysis of Morbini et al. (2013), several speech recognition systems (including cloud-based recognizers) are compared in terms of their applicability for developing domain-restricted dialogue systems. The results show that distributed speech recognition systems can offer better recognition accuracy than local customizable systems even if they are not adjusted to the domain; however, there is no clear ‘best’ system in their evaluation. It appears plausible that a combination of cloud-based recognizers with a domain-restricted local system leads to an overall better system, which is what we aim for in this paper. In addition, Morbini et al. show that better ASR performance correlates with better NLU performance.

2.2 Sphinx-4

Sphinx-4 is an open-source speech recognition system developed at CMU (Walker et al. 2004) that is modular, easily embeddable and extensible. The frontend includes voice activity detection, and the decoder is based on time-synchronous Viterbi search using the token-pass algorithm (Young, Russell, and Thornton 1989). A main shortcoming is the set of weak open-source acoustic models which, for English, are often based on the HUB-4 data set (Garofolo, Fiscus, and Fisher 1997) and use MFCC features (Mermelstein 1976). However, Sphinx-4’s versatility can be exploited to completely replace the feature representation (and its meaning) as well as the search space representation (called the ‘linguist’ in Sphinx terminology), which is what we do, as explained in Section 4.3 below.

2.3 Post-processing Google’s Search by Voice

Milette and Stroud (2012, ch. 17) present an approach to post-processing results from Google ASR on Android. The list of available commands and the results from Google’s speech recognition are transformed using a phonetic indexing algorithm such as Soundex or Metaphone. However, phonetic indexing algorithms are meant to de-duplicate different spellings of the same names (Jurafsky and Martin 2009, p. 115). Thus, they (a) are optimized for names but not necessarily for other words of a language, and (b) may result in multiple target commands being reduced to the same representation. In addition, their matching is limited to individual words, so that mis-segmentation in the ASR results cannot be recovered from. We extend the approach of Milette and Stroud (2012) by using a more flexible rescoring method based on the Levenshtein distance between the recognition result and the accepted language, and by enabling rescoring across word boundaries.
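To make drawback (b) concrete, consider the following small illustration (our own hypothetical command pair, using a textbook Soundex implementation): two distinct domain commands can collapse to the same index code, whereas an edit distance over their phoneme sequences still separates them.

    def soundex(word):
        """Textbook Soundex: first letter plus three digits coding the remaining consonants."""
        codes = {}
        for chars, digit in [("bfpv", "1"), ("cgjkqsxz", "2"), ("dt", "3"),
                             ("l", "4"), ("mn", "5"), ("r", "6")]:
            for c in chars:
                codes[c] = digit
        word = word.lower()
        result = word[0].upper()
        prev = codes.get(word[0], "")
        for ch in word[1:]:
            code = codes.get(ch, "")
            if code and code != prev:
                result += code
            if ch not in "hw":          # 'h' and 'w' do not separate identical codes
                prev = code
        return (result + "000")[:4]

    # Two distinct (hypothetical) domain commands receive the same index code,
    # so an index-based matcher cannot tell them apart:
    print(soundex("stop"), soundex("step"))   # S310 S310
    # Their phoneme sequences (S T AA P vs. S T EH P) differ in only one symbol,
    # so a Levenshtein-based comparison over phonemes keeps them distinguishable.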
Zgank and Kacic (2012) propose to estimate the acoustic confusability of words. Actual and recognized word tokens are transformed into their phoneme representations, and a confusability score is computed based on normalized Levenshtein distances between the phoneme representations. Word confusabilities are used to estimate the expected ASR performance given the currently available user commands. In contrast, we aim to improve ASR results by comparing a word’s phonemes using the Levenshtein distance, either for individual words, for whole sentences, or for more expressive language models, by including Levenshtein computations in the Viterbi decoder.

2.5 Reranking the N-best List

Morbini et al. (2012) compare the performance of speech recognition systems, including Google Search by Voice, while taking N-best results into account. As had been found previously (Rayner et al. 1994), they state that the best matching hypothesis is often not at the top of the N-best list of results, which is why we make use of N-best information below.
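One simple way to fold N-best information into phonetic post-processing is sketched below (illustrative Python only; difflib’s similarity ratio stands in for a normalized Levenshtein score, and the hand-written phoneme sequences stand in for the lexicon lookup sketched earlier). Every N-best hypothesis is scored against every in-domain entry and the overall best pair wins, so a lower-ranked hypothesis can override a misleading top hypothesis.

    from difflib import SequenceMatcher

    def similarity(a, b):
        """Similarity of two phoneme sequences; difflib's ratio is used here as a
        stand-in for a normalized Levenshtein score."""
        return SequenceMatcher(None, a, b).ratio()

    def best_domain_match(nbest_phonemes, domain_phonemes):
        """Score every (hypothesis, domain entry) pair and return the domain entry
        of the overall best pair, not just the best match of the top hypothesis."""
        best_entry, best_score = None, -1.0
        for hyp in nbest_phonemes:                       # all N-best entries, not only rank 1
            for entry, phones in domain_phonemes.items():
                score = similarity(hyp, phones)
                if score > best_score:
                    best_entry, best_score = entry, score
        return best_entry, best_score

    # Hypothetical N-best output, already converted to ARPAbet phonemes:
    nbest = [
        ["L", "AO", "R", "AH", "N"],                     # "Lauren" (rank 1)
        ["L", "ER", "N", "Z"],                           # "learns" (rank 2)
    ]
    domain = {"learn": ["L", "ER", "N"], "turn": ["T", "ER", "N"]}
    print(best_domain_match(nbest, domain))              # -> ('learn', ...) via the rank-2 entry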
3 General Approach

As described in Section 2, distributed speech recognition systems like Google Search by Voice can provide better performance than local speech recognition systems because they often provide better trained acoustic models and possibly more advanced frontend processing, e.g. based on deep belief networks (Jaitly et al. 2012). Their disadvantage is that they often cannot be adjusted to specific domains by way of a task-specific language model, as domain knowledge cannot be provided externally.

Providing domain knowledge to the speech recognition service is impossible under this black-box paradigm, and hence we instead resort to post-processing the results of the system using the available domain knowledge. It turns out that there is hidden knowledge contained in the ‘raw’ results delivered by Google that our post-processing is able to recover: in preliminary test runs, we compared the word error rate (WER) and the phoneme error rate (PER) of Google’s speech recognition for the same hypotheses. The PER was computed by transforming both the reference text and the recognition hypothesis into phonemes and aligning them with the standard Levenshtein distance. These preliminary tests indicated that the PER is much lower than the WER for the hypotheses returned by Google, which supports our initial assumption that there is more information contained in the results received from Google than is visible at the word level: e.g., in our tests, the word “learn” was often mis-recognized as “Lauren”, which, despite being quite different graphemically, …
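A minimal sketch of the WER/PER comparison described above follows (with a toy lexicon and an invented homophone example of our own; the numbers are purely illustrative, not the paper’s results). Both error rates are computed as the Levenshtein distance divided by the reference length, once over words and once over phonemes.

    def levenshtein(a, b):
        """Edit distance between two token sequences (unit costs)."""
        prev = list(range(len(b) + 1))
        for i, x in enumerate(a, 1):
            curr = [i]
            for j, y in enumerate(b, 1):
                curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (x != y)))
            prev = curr
        return prev[-1]

    def error_rate(reference, hypothesis):
        """Levenshtein distance normalized by the reference length (WER or PER)."""
        return levenshtein(hypothesis, reference) / len(reference)

    # Toy lexicon (ARPAbet, no stress marks); "to" and "too" are homophones.
    LEXICON = {"turn": ["T", "ER", "N"], "to": ["T", "UW"], "too": ["T", "UW"],
               "the": ["DH", "AH"], "left": ["L", "EH", "F", "T"]}

    def phonemes(words):
        return [p for w in words for p in LEXICON[w.lower()]]

    ref = "turn to the left".split()
    hyp = "turn too the left".split()
    print("WER:", error_rate(ref, hyp))                       # 0.25 (one word substituted)
    print("PER:", error_rate(phonemes(ref), phonemes(hyp)))   # 0.0  (identical phoneme strings)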