Utterance Segmentation Using Combined Approach Based on Bi-Directional N-Gram and Maximum Entropy

Ding Liu and Chengqing Zong
National Laboratory of Pattern Recognition, Institute of Automation,
Chinese Academy of Sciences, Beijing 100080, China
[email protected] [email protected]

Abstract

This paper proposes a new approach to the segmentation of utterances into sentences, using a new linguistic model based upon Maximum-entropy-weighted Bi-directional N-grams. The usual N-gram algorithm searches for sentence boundaries in a text from left to right only, so a candidate sentence boundary is evaluated mainly with respect to its left context, without fully considering its right context. As a result, utterances are often divided into incomplete sentences or fragments. In order to make use of both the left and right contexts of candidate sentence boundaries, we propose a new linguistic modeling approach based on Maximum-entropy-weighted Bi-directional N-grams. Experimental results indicate that the new approach significantly outperforms the usual N-gram algorithm for segmenting both Chinese and English utterances.

1 Introduction

Due to the improvement of speech recognition technology, spoken language user interfaces, spoken dialogue systems, and speech translation systems are no longer only laboratory dreams. Roughly speaking, such systems have the structure shown in Figure 1.

[Figure 1. System with speech input: input speech → speech recognition → language analysis → generation → output (text or speech).]

In these systems, the language analysis module takes the output of speech recognition as its input, representing the current utterance exactly as pronounced, without any punctuation symbols marking the boundaries of sentences. Here is an example: 这边请您坐电梯到9楼服务生将在那里等您并将您带到913号房间 (this way please please take this elevator to the ninth floor the floor attendant will meet you at your elevator entrance there and show you to room 913). As the example shows, it will be difficult for a text analysis module to parse the input if the utterance is not segmented. Further, the output of the speech recognizer usually contains wrongly recognized words or noise words. It is therefore crucial to segment the utterance before further language processing, and we believe that accurate segmentation can greatly improve the performance of language analysis modules.

Stevenson and Gaizauskas (2000) demonstrated the difficulty of text segmentation through an experiment in which six people, educated to at least the Bachelor's degree level, were required to segment into sentences broadcast transcripts from which all punctuation symbols had been removed. Their results show that humans do not always agree on the insertion of punctuation symbols, and that human segmentation performance is not very good. It is thus a great challenge for computers to perform the task automatically. Many methods have been proposed to solve this problem, and they can be roughly classified into two categories. One approach is based on simple acoustic criteria, such as non-speech intervals (e.g. pauses), pitch, and energy; we can call this approach acoustic segmentation. The other approach, which can be called linguistic segmentation, is based on linguistic clues, including lexical knowledge, syntactic structure, semantic information, etc. Acoustic segmentation cannot always work well, because utterance boundaries do not always correspond to acoustic criteria. For example: 您好<pause>请问<pause>明天的单人间还有吗<pause>或者<pause>标准间也行 (hello <pause> excuse me <pause> is there still a single room for tomorrow <pause> or <pause> a standard room would also do). Since simple acoustic criteria are inadequate, linguistic clues play an indispensable role in utterance segmentation, and many methods relying on them have been proposed.

This paper proposes a new approach to linguistic segmentation using a Maximum-entropy-weighted Bi-directional N-gram-based algorithm (MEBN). To evaluate the performance of MEBN, we conducted experiments in both Chinese and English, and all the results show that MEBN outperforms the normal N-gram algorithm. The remainder of this paper focuses on the description of our new approach to linguistic segmentation. In Section 2, related work on utterance segmentation is briefly reviewed and our motivations are described. Section 3 describes MEBN in detail. The experimental results are presented in Section 4. Finally, Section 5 gives our conclusion.

2 Related Work and Our Motivations

2.1 Related Work

Stolcke et al. (1998, 1996) proposed an approach to the detection of sentence boundaries and disfluency locations in speech transcribed by an automatic recognizer, based on a combination of prosodic cues modeled by decision trees and N-gram language models. Their N-gram language model is mainly based on part of speech, and retains some words which are particularly relevant to segmentation. Of course, most part-of-speech taggers require sentence boundaries to be pre-determined, so requiring part-of-speech information in utterance segmentation would risk circularity. Cettolo et al.'s (1998) approach to sentence boundary detection is somewhat similar to Stolcke et al.'s. They applied word-based N-gram language models to utterance segmentation and then combined them with prosodic models. Compared with the N-gram language models alone, their combined models achieved improvements of 0.5% in precision and 2.3% in recall.

Beeferman et al. (1998) used the CYBERPUNC system to add intra-sentence punctuation (especially commas) to the output of an automatic speech recognition (ASR) system. They claim that, since commas are the most frequently used punctuation symbols, their correct insertion is by far the most helpful addition for making texts legible. CYBERPUNC augmented a standard trigram speech recognition model with lexical information concerning commas, and achieved a precision of 75.6% and a recall of 65.6% when tested on 2,317 sentences from the Wall Street Journal.
Gotoh et al. (1998) applied a simple non-speech interval model to detect sentence boundaries in English broadcast speech transcripts. They compared their results with those of N-gram language models and found theirs far superior. However, broadcast speech transcripts are not really spoken language, but something more like spoken written language; moreover, radio broadcasters speak formally, so their reading pauses match sentence boundaries quite well. It is thus understandable that the simple non-speech interval model outperforms the N-gram language model under these conditions, but the segmentation of natural utterances is quite different.

Zong et al. (2003) proposed an approach to utterance segmentation aimed at improving the performance of spoken language translation (SLT) systems. Their method is based on rules oriented toward keyword detection, template matching, and syntactic analysis. Since this approach is intended to facilitate translation in Chinese-to-English SLT systems, it rewrites long sentences as several simple units; once again, the results cannot be regarded as general-purpose utterance segmentation. Furuse et al. (1998) similarly propose an input-splitting method for translating spoken language that includes many long or ill-formed expressions. The method splits an input into well-balanced translation units, using a semantic dictionary.

Ramaswamy et al. (1998) applied a maximum entropy approach to the detection of command boundaries in a conversational natural language user interface. As features, they considered words and their distances to potential boundaries. They posited 400 feature functions and trained their weights using 3,000 commands; the system then achieved a precision of 98.2% on a test set of 1,900 commands. However, command sentences for conversational natural language user interfaces have much smaller vocabularies and simpler structures than the sentences of natural spoken language. In any case, this method has been very helpful to us in designing our own approach to utterance segmentation.
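Ramaswamy et al.'s system instantiates the standard conditional maximum entropy framework, which is worth recalling here since our MEBN model also relies on maximum entropy weighting. In the general form (the example feature below is illustrative, in the spirit of their word-and-distance features, not their exact specification), the probability of a boundary decision b in context c is

\[
P(b \mid c) = \frac{1}{Z(c)} \exp\Big(\sum_{i=1}^{n} \lambda_i f_i(c, b)\Big),
\qquad
Z(c) = \sum_{b'} \exp\Big(\sum_{i=1}^{n} \lambda_i f_i(c, b')\Big),
\]

where each f_i is a binary feature function, e.g. an indicator that a particular word occurs within a given distance of the candidate boundary, and each weight λ_i is estimated from training data.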
There are several additional approaches which are not designed for utterance segmentation but which can nevertheless provide useful ideas. For example, Reynar et al. (1997) proposed an approach to the disambiguation of punctuation marks. They considered only the first word to the left and right of any potential sentence boundary, and claimed that examining wider context was not beneficial. The features they considered included the candidate's prefix and suffix; the presence of particular characters in the prefix or suffix; whether the candidate was an honorific (e.g. Mr., Dr.); and whether the candidate was a corporate designator (e.g. Corp.). The system was tested on the Brown Corpus and achieved a precision of 98.8%.

2.2 Our Motivations

The normal N-gram method evaluates a candidate sentence boundary mainly with respect to its left context, but it cannot take into account the distant right context of the candidate. This is the reason that N-gram methods often wrongly divide long sentences into halves or multiple segments. For example: 小王病了一个星期 (Xiao Wang was ill for a week). The N-gram method is likely to insert a boundary mark between "了" and "一", which corresponds to our everyday impression that, reading from the left without considering several more words to the right of the current word, we would probably take "小王病了" (Xiao Wang was ill) to be a whole sentence. However, we find that if we search for sentence boundaries from right to left, such errors can be effectively avoided: in the present example, we would not consider "一个星期" (for a week) a whole sentence, and the search would continue until the word "小" is encountered. Accordingly, in order to avoid the segmentation errors made by the normal N-gram method, we propose a reverse N-gram segmentation method (RN), which seeks sentence boundaries from right to left. Further, we integrate the two N-gram methods into a bi-directional N-gram method (BN), which takes into account both the left and the right context of a candidate segmentation site, as illustrated in the sketch below. Since the relative usefulness or significance of the two search directions may vary from site to site, we further weight them using the maximum entropy method, which yields the Maximum-entropy-weighted Bi-directional N-gram algorithm (MEBN) presented in Section 3.
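To make the bi-directional idea concrete ahead of the formal development in Section 3, here is a minimal sketch in Python under our own simplifying assumptions: hypothetical add-one-smoothed trigram models stand in for the N and RN language models, a <b> token marks sentence boundaries, and the forward and reverse scores are simply summed rather than weighted by the maximum entropy model described later. It illustrates the BN idea only; it is not the exact MEBN formulation.

import math
from collections import defaultdict

B = "<b>"  # hypothetical token standing in for a sentence boundary

class Trigram:
    """Tiny add-one-smoothed trigram model over token lists."""
    def __init__(self):
        self.tri = defaultdict(int)  # trigram counts
        self.bi = defaultdict(int)   # bigram (context) counts
        self.vocab = set()

    def train(self, sentences):
        # Join the training sentences with B so the model learns the
        # contexts in which sentence boundaries tend to occur.
        tokens = [B]
        for s in sentences:
            tokens += s + [B]
        self.vocab.update(tokens)
        for a, b, c in zip(tokens, tokens[1:], tokens[2:]):
            self.tri[(a, b, c)] += 1
            self.bi[(a, b)] += 1

    def logprob(self, seq):
        """Add-one-smoothed log-probability of a token sequence."""
        padded = [B, B] + seq
        v = len(self.vocab)
        return sum(math.log((self.tri[(a, b, c)] + 1) /
                            (self.bi[(a, b)] + v))
                   for a, b, c in zip(padded, padded[1:], padded[2:]))

def bn_gain(fwd, rev, words, i):
    """Gain in combined forward+reverse log-score from inserting a
    boundary after words[i]; a positive gain supports segmenting there.
    The reverse model scores the mirrored utterance, so it judges the
    candidate by its right context, as the RN method intends."""
    cand = words[:i + 1] + [B] + words[i + 1:]
    with_b = fwd.logprob(cand) + rev.logprob(list(reversed(cand)))
    without = fwd.logprob(words) + rev.logprob(list(reversed(words)))
    return with_b - without

# Usage: train the forward model on segmented sentences and the reverse
# model on their mirror images, then score each candidate site.
# corpus = [["小王", "病", "了"], ...]   # word-segmented training data
# fwd, rev = Trigram(), Trigram()
# fwd.train(corpus)
# rev.train([list(reversed(s)) for s in corpus])
# words = ["小王", "病", "了", "一个", "星期"]
# gains = [bn_gain(fwd, rev, words, i) for i in range(len(words) - 1)]

In the example above, a reverse model trained on mirrored sentences assigns low probability to a sentence opening with 一个星期, so the combined score discourages the boundary between "了" and "一" that the forward model alone tends to insert.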
