Sentence Boundary Detection Based on Parallel Lexical and Acoustic Models

INTERSPEECH 2016, September 8-12, 2016, San Francisco, USA. http://dx.doi.org/10.21437/Interspeech.2016-257

Xiaoyin Che, Sheng Luo, Haojin Yang, Christoph Meinel
Hasso Plattner Institute, University of Potsdam
Prof.-Dr.-Helmert-Str. 2-3, 14482 Potsdam, Germany
xiaoyin.che, sheng.luo, haojin.yang, [email protected]

Abstract

In this paper we propose a solution that detects sentence boundaries in speech transcripts. First we train a purely lexical model with a deep neural network, which takes word vectors as its only input feature. Then a simple acoustic model is also prepared. Because the models work independently, they can be trained with different data. In the next step, the posterior probabilities of both the lexical and the acoustic model are fed into a heuristic 2-stage joint decision scheme to classify the sentence boundary positions. This approach ensures that the models can be updated or switched freely in actual use. Evaluation on TED Talks shows that the proposed lexical model can achieve good results: 75.5% accuracy on error-involved ASR transcripts and 82.4% on error-free manual references. The joint decision scheme can further improve the accuracy by 3~10% when acoustic data is available.

Index Terms: Sentence Boundary Detection, Parallel Models, Deep Neural Network, Word Vector

1. Introduction

Sentence boundary detection, also known as punctuation restoration, is an important task in ASR (Automatic Speech Recognition) post-processing. With proper segmentation, the readability of a speech transcript is largely improved, and downstream NLP (Natural Language Processing) tasks such as machine translation can also benefit from it [1, 2, 3]. In actual use, subtitling for example, the quality of automatically generated subtitles can increase when the detected sentence boundaries are more accurate [4]. Perhaps the result is still not good enough from the user's point of view, but a semi-finished subtitle of better quality can definitely help the human subtitle producer.

Many efforts have already been made in sentence boundary detection. Generally, researchers focus on two types of resources: lexical features in the textual data and acoustic features in the audio track. Most lexical approaches take LM (Language Model) scores, tokens or POS (Part-of-Speech) tags of several consecutive words as the features to train the lexical model [5, 6, 7, 8]. Frequently used features in acoustic approaches include pause, pitch, energy, speaker switch and so on [9, 10, 11]. However, multi-modal approaches using both lexical and acoustic features are more popular.

In this paper, we first analyze the structure of existing multi-modal solutions and propose our own sentence boundary detection framework. Then we introduce the independent lexical and acoustic models used in our framework and explain the 2-stage joint decision scheme in detail. These contents can be found in Sections 2~4 respectively. In the evaluation phase that follows, we evaluate the performance of the proposed lexical models on both ASR and manual transcripts, test the acoustic models on ASR-available TED Talks, and run experiments on our 2-stage joint decision scheme with different combinations of models. In the end comes the conclusion.

2. Multi-Modal Structure Analysis

The structures of multi-modal solutions can differ and have been discussed before [12]. Some researchers propose a single hybrid model, which takes all available features, lexical or acoustic, together as the model input [13, 14, 15, 16], as shown in Figure 1-a. Others apply a structure of sequential models, in which the output of model A is fed into model B, as shown in Figure 1-b: model A accepts either lexical or acoustic features only, while model B takes the other type of features together with model A's output [17, 18, 19, 20, 21].

[Figure 1: Three types of multi-modal structures: (a) single hybrid model; (b) sequential models; (c) parallel models]

But all approaches with these two structures share a limitation: the training data must consist of word-level synchronized transcripts and audio, which largely limits the range of data collection. Many reports have claimed that classification performance is heavily influenced by the scale of the training data. Furthermore, if ASR transcripts are used as training data, the inevitable ASR errors, some of which are acoustically understandable but lexically ridiculous, such as misrecognizing "how can we find information in the web" as "how can we fight formation in the web", will definitely degrade the resulting lexical model.

However, a structure of parallel models, as shown in Figure 1-c, can overcome this limitation. The models can be trained separately with different data. This means the lexical model can take any kind of textual material as training data, which is almost endless, extremely easy to prepare and can be grammatically error-free. For the acoustic model, all training data available to the previous two structures remain usable. The next step is then to fuse the posterior probabilities of the two models.

Gotoh et al. and Liu et al. trained models on different feature sources separately and interpolated their posterior probabilities for the final prediction [22, 23]. Lee and Glass applied a log-linear model to combine the outputs of different models [24], as did Cho et al. [25]. Pappu et al., on the other hand, used a logistic regression model for the fusion [26]. All of these approaches make their predictions in one step and need an activation dataset to adjust the fusion model parameters.

However, we would like to apply a 2-stage scheme. Different from some earlier multi-pass attempts [10, 27], which first predict punctuation positions and then distinguish the punctuation mark types, the two stages in our decision scheme act like "segmenting" and "sub-segmenting". Therefore the lexical or acoustic model can be updated or switched freely in actual use. Details will be introduced in Section 4.
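To make the one-step fusion used by these baselines concrete, the following minimal Python sketch linearly interpolates the posterior probabilities of two independently trained models, in the spirit of the interpolation approaches of [22, 23]. The function name, weight and numbers are a hypothetical illustration added here, not code from any of the cited systems, and our own 2-stage scheme works differently:

```python
import numpy as np

def interpolate_posteriors(p_lex, p_ac, w=0.5):
    """One-step fusion baseline: linearly interpolate the posterior
    probabilities of independently trained lexical and acoustic models.
    `w` is the lexical weight, tuned on a held-out activation dataset."""
    return w * np.asarray(p_lex, dtype=float) + (1.0 - w) * np.asarray(p_ac, dtype=float)

# Toy posteriors P(boundary) after each of four words, from each model:
p_lex = [0.10, 0.80, 0.30, 0.95]   # lexical model output (hypothetical)
p_ac  = [0.05, 0.40, 0.60, 0.90]   # acoustic model output (hypothetical)
fused = interpolate_posteriors(p_lex, p_ac, w=0.7)
print([p > 0.5 for p in fused])    # boundary decisions after fusion
```

The single interpolation weight is exactly the fusion-model parameter that such approaches must adjust on an activation dataset.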
3. Proposed Models

3.1. Lexical Model

Word vectors are learned through neural language models [28]. A word can then be represented by a real-valued vector whose dimensionality is much lower than that of the traditional one-hot representation. It has been suggested that the semantic distance between words can be measured by the mathematical distance between the corresponding word vectors [29, 30]. Cho et al. included word vectors in a punctuation prediction task together with many other lexical features [25]. However, using word vectors alone has already proven effective in various NLP applications [31, 32, 33] and makes data preparation much easier. Therefore, the word vector is the only feature of our lexical model.

The training data are extracted from punctuated textual files, which are first transformed into a long word sequence with a parallel sequence of punctuation marks. An m-word sliding window then traverses the word sequence to create samples. The classification question is whether there is a sentence boundary after the k-th word of a sample. Originally we introduced three categories to represent boundaries: Comma, Period and Question. All other punctuation marks are mapped onto one of them or simply ignored, depending on their function. These categories can, however, be combined freely when needed. "O" is used for "not a boundary" samples.

Each word in the sample is then represented by an n-dimensional word vector stored in a pre-trained dictionary. A default vector substitutes for any word outside the vocabulary. As a result, we obtain an m x n feature matrix as the lexical model input for the sample. During the training process, the values of all word vectors are kept static. The whole process is illustrated in Figure 2.

[Figure 2: The process of data generation for lexical training]

Table 1: Two configurations of our lexical training ("Voc.", "n", "m" and "k" denote the vocabulary size, vector dimension, sliding window size and supposed boundary position, respectively)

  Config.  |  Word Vector Source  |  Voc.  |  n    |  m  |  k
  LMC-1    |  GloVe               |  400k  |  50   |  5  |  3
  LMC-2    |  Word2Vec            |  3M    |  300  |  8  |  4
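The pipeline above can be illustrated with a short, self-contained Python sketch using the LMC-1 settings from Table 1 (m = 5, k = 3, n = 50). The helper name make_samples and the random stand-in vectors are hypothetical; the actual configurations use pre-trained GloVe or Word2Vec dictionaries:

```python
import re
import numpy as np

M, K, N = 5, 3, 50                       # window size m, boundary position k,
                                         # vector dimension n (LMC-1, Table 1)
MARKS = {".": "Period", ",": "Comma", "?": "Question"}

def make_samples(text, embeddings, default_vec):
    """Slide an M-word window over punctuated text and yield
    (M x N feature matrix, label) pairs, where the label tells which
    mark, if any, follows the K-th word of the window ("O" = none)."""
    words, labels = [], []
    for tok in re.findall(r"[\w']+|[.,?]", text.lower()):
        if tok in MARKS:
            if labels:
                labels[-1] = MARKS[tok]  # a mark attaches to the preceding word
        else:
            words.append(tok)
            labels.append("O")
    for i in range(len(words) - M + 1):
        window = words[i:i + M]
        feat = np.stack([embeddings.get(w, default_vec) for w in window])
        yield feat, labels[i + K - 1]    # boundary question: after the K-th word

# Toy usage: random vectors stand in for a pre-trained GloVe dictionary.
rng = np.random.default_rng(0)
text = "how can we find information in the web? we can use a search engine."
emb = {w: rng.normal(size=N) for w in re.findall(r"[\w']+", text.lower())}
default = np.zeros(N)   # stand-in; the paper substitutes the vector of "this"
for feat, label in make_samples(text, emb, default):
    print(feat.shape, label)            # (5, 50) with label O or Question
```

Note that with m = 5 and k = 3 each window carries two words of right-hand context past the candidate boundary, which is what lets the classifier look ahead.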
We choose a typical deep neural network for the lexical training. Its structure is comparatively simple, with three sequential fully-connected hidden layers of 2048, 4096 and 2048 neurons respectively, so the computational cost of training stays moderate. We prepared two lexical model configurations (LMC-1 and LMC-2) with different sources of pre-trained word vector data. The detailed settings can be found in Table 1. We use the word "this" as the default substitute for out-of-vocabulary words, since most of them are proper nouns that occur only in special contexts, such as "Word2Vec" mentioned above, and "I use this in..." is grammatically similar to "I use Word2Vec in..." for the purpose of sentence boundary detection. The neural network is built with the CAFFE framework [36].

3.2. Acoustic Model

In this approach we also apply two models to handle the acoustic information. Unlike the 4-class lexical model, the acoustic models output only 2 classes: boundary or not boundary.

Our first acoustic model is the simplest one. For a word W_i in the ASR transcript, we simply calculate the pause duration p between W_i and W_{i+1} and use a variant of the sigmoid function

    P_a = \frac{1 - e^{-4p}}{1 + e^{-4p}}, \qquad p \in [0, +\infty)    (1)

to project p into P_a, with P_a ∈ [0, 1). Approximately, a pause longer than 0.28 seconds is acknowledged as a sentence boundary by this simplest model, which we refer to as "Pause".

The second acoustic model takes more features into consideration.
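Before turning to the second model, the "Pause" model of Eq. (1) can be made concrete in a few lines of Python. Since Eq. (1) is algebraically equal to tanh(2p), a decision level of 0.5 is crossed at p = atanh(0.5)/2, about 0.275 s, which is where the 0.28-second figure above comes from; the explicit 0.5 threshold is our reading, as the text states only the resulting behavior:

```python
import math

def pause_posterior(p):
    """Eq. (1): project a pause duration p (seconds, p >= 0) into [0, 1).
    Algebraically this equals tanh(2p)."""
    return (1.0 - math.exp(-4.0 * p)) / (1.0 + math.exp(-4.0 * p))

def pause_model(p, threshold=0.5):
    """The 'Pause' model: declare a sentence boundary once the projected
    value crosses the decision level."""
    return pause_posterior(p) >= threshold

# The 0.5 level is crossed near p = atanh(0.5) / 2 ~ 0.275 s, consistent
# with the roughly 0.28-second figure quoted above.
for p in (0.10, 0.28, 1.00):
    print(f"p = {p:.2f}s -> Pa = {pause_posterior(p):.3f}, boundary = {pause_model(p)}")
```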
