A Comparison-Based Approach to Mispronunciation Detection


Ann Lee, James Glass
MIT Computer Science and Artificial Intelligence Laboratory, Cambridge, Massachusetts 02139, USA
{annlee, [email protected]
978-1-4673-5126-3/12/$31.00 ©2012 IEEE. SLT 2012.

ABSTRACT

The task of mispronunciation detection for language learning is typically accomplished via automatic speech recognition (ASR). Unfortunately, less than 2% of the world's languages have an ASR capability, and the conventional process of creating an ASR system requires large quantities of expensive, annotated data. In this paper we report on our efforts to develop a comparison-based framework for detecting word-level mispronunciations in nonnative speech. Dynamic time warping (DTW) is carried out between a student's (nonnative speaker) utterance and a teacher's (native speaker) utterance, and we focus on extracting word-level and phone-level features that describe the degree of mis-alignment in the warping path and the distance matrix. Experimental results on a Chinese University of Hong Kong (CUHK) nonnative corpus show that the proposed framework improves the relative performance on a mispronounced word detection task by nearly 50% compared to an approach that only considers DTW alignment scores.

Index Terms— language learning, mispronunciation detection, dynamic time warping

1. INTRODUCTION

Computer-Aided Language Learning (CALL) systems have gained popularity due to the flexibility they provide to empower students to practice their language skills at their own pace. A more specific CALL sub-area called Computer-Aided Pronunciation Training (CAPT) focuses on topics such as detecting mispronunciation in nonnative speech.

Automatic speech recognition (ASR) technology is a natural component of both CALL and CAPT systems, and there has been considerable ASR research in both of these areas. However, conventional ASR technology is language specific, and the process of training a recognizer for a new language typically requires extensive (and expensive) human effort to record and annotate the necessary training data. While ASR technology can be used for students learning English or Mandarin, such practices become much more problematic for students trying to learn a rare language. To put this issue in a more global context, there are an estimated 7,000 languages in the world [1], of which around 330 have more than a million speakers, while language-specific ASR technology is available for approximately 80 languages [2]. Given these estimates, it is reasonable to say that over 98% of the world's languages do not have ASR capability. While popular languages receive much of the attention and financial resources, we seek to explore how speech technology can help in situations where less financial support is available for developing conventional ASR capability.

In this paper, a comparison-based mispronunciation detection framework is proposed and evaluated. The approach is inspired by previous success in applying posteriorgram-based features to the task of unsupervised keyword spotting [3, 4], which is essentially a comparison task. In our framework, a student's utterance is directly compared with a teacher's through dynamic time warping (DTW). The assumption is that the student reads the given scripts, that for every script in the teaching material there is at least one recording from a native speaker of the target language, and that we have word-level timing information for the native speaker's recording. Although this is a relatively narrow CALL application, it is quite reasonable for students to practice their initial speaking skills this way, and it would not be difficult to obtain spoken examples of read speech from native speakers. With these components, we seek to detect word-level mispronunciations by locating poorly matching alignment regions based on features extracted from either conventional spectral or posteriorgram representations.

The remainder of the paper is organized as follows. After introducing background and related work in the next section, we discuss the two main components in detail: word segmentation and mispronunciation detection. Following this, we present experimental results and suggest future work based on our findings.

2. BACKGROUND AND RELATED WORK

This section reviews previous work on individual pronunciation error detection and on pattern matching techniques, which motivates the core design of our framework.

2.1. Pinpoint Pronunciation Error Detection

ASR technology can be applied to CAPT in many different ways. Kewley-Port et al. [5] used an isolated-word, template-based recognizer which coded the spectrum of an input utterance into a series of 16-bit binary vectors, and compared them to the stored templates by computing the percentage of matching bits, which was then used as the score for articulation.

HMM-based log-likelihood scores and log-posterior probability scores have been extensively used for mispronunciation detection. Witt and Young [6] proposed a goodness of pronunciation (GOP) score based on log-likelihood scores normalized by the duration of each phone segment. In this model, phone-dependent thresholds are set to judge whether each phone from the forced alignment is mispronounced or not. Franco et al. [7] trained three recognizers using data of different levels of nativeness and considered the ratio of the log-posterior probability-based scores.

Other work has focused on extracting useful information from the speech signal. Strik et al. [8] explored the use of acoustic phonetic features, such as log root-mean-square energy and zero-crossing rate, for distinguishing velar fricatives from velar plosives. Minematsu et al. [9] proposed an acoustic universal structure in speech which excludes non-linguistic information.

Some approaches take knowledge of the students' native language into consideration. Meng et al. [10] incorporated possible phonetic confusions, predicted by systematically comparing the phonology of English and Cantonese, into a lexicon for speech recognition. Harrison et al. [11] considered context-sensitive phonological rules rather than context-insensitive rules. Most recently, Wang and Lee [12] further integrated GOP scores with error pattern detectors to improve the performance on detecting mispronunciation within a group of students from 36 different countries learning Mandarin Chinese.

2.2. Posteriorgram-based Pattern Matching

Recently, posterior features with dynamic time warping (DTW) alignment have been successfully applied to the facilitation of unsupervised spoken keyword detection [3, 4]. A posteriorgram is a vector of posterior probabilities over some predefined classes. It can be viewed as a compact representation of speech, and can be trained in either a supervised or an unsupervised manner. For the unsupervised case, given an utterance U = (u_1, u_2, ..., u_n), where n is the number of frames, the Gaussian posteriorgram (GP) for the i-th frame is defined as

    gp_{u_i} = [P(C_1 | u_i), P(C_2 | u_i), ..., P(C_D | u_i)],    (1)

where C_j is a component of a D-component Gaussian mixture model (GMM) which can be trained from a set of unlabeled speech. Zhang et al. [4] explored the use of GPs for unsupervised keyword detection by sorting the alignment scores. Their subsequent work [13] showed that posteriorgrams decoded from Deep Boltzmann Machines can further improve system performance. Besides the alignment scores, Muscariello et al. [14] also investigated image processing techniques to compare the self-similarity matrices (SSMs) of two words. By combining the DTW-based scores with the SSM-based scores, the performance on spoken term detection can be improved.

3. WORD SEGMENTATION

Fig. 1 shows the flowchart of our system.

[Fig. 1: System overview (with single reference speaker)]

Our system detects mispronunciation at the word level, so the first stage is to locate word boundaries in the student's utterance. A common property of nonnative speech is that there can sometimes be a long pause between words. Here we propose incorporating a silence model when running DTW. In this way, we can align the two utterances while also detecting and removing silence in the student's utterance.

Given a teacher frame sequence T = (f_{t_1}, f_{t_2}, ..., f_{t_n}) and a student frame sequence S = (f_{s_1}, f_{s_2}, ..., f_{s_m}), an n × m distance matrix, Φ_ts, can be built, where

    Φ_ts(i, j) = D(f_{t_i}, f_{s_j}),    (2)

and D(f_{t_i}, f_{s_j}) denotes any possible distance metric between the speech representations f_{t_i} and f_{s_j}. Here n is the total number of frames of the teacher's utterance and m that of the student's. If we use Mel-frequency cepstral coefficients (MFCCs) to represent the f_{t_i}'s and f_{s_j}'s, D(f_{t_i}, f_{s_j}) can be the Euclidean distance between them. If we choose a Gaussian posteriorgram (GP) as the representation, D(f_{t_i}, f_{s_j}) can be defined as −log(f_{t_i} · f_{s_j}) [3, 4]. Given a distance matrix, DTW searches for the path starting from (1, 1) and ending at (n, m) along which the accumulated distance is minimized.

We further define a 1 × m silence vector, φ_sil, which records the average distance between each frame in S and the r silence frames at the beginning of T.

[Fig. 2: An example of a spectrogram and the corresponding silence vectors: (a) spectrogram, (b) MFCC-based silence vector, (c) GP-based silence vector]

[Fig. 3: (a) and (b) are the self-similarity matrices of two students saying "aches" (/ey ch ix s/ and /ey k s/), and (c) is a teacher saying "aches" (/ey k s/); (d) shows the alignment between (a) and the teacher, and (e) shows the alignment between (b) and the teacher. The dotted lines in (c) are the boundaries detected by the unsupervised phoneme-unit segmentor, and those in (d) and (e) are the segmentation based on the detected boundaries and the aligned path.]

4. MISPRONUNCIATION DETECTION
