Using Natural Language Processing and Discourse Features to Identify Understanding Errors in a Spoken Dialogue System

Marilyn Walker [email protected]
Jerry Wright [email protected]
AT&T Labs—Research, 180 Park Avenue, Florham Park, NJ 07932-0971, USA

Irene Langkilde [email protected]
USC Information Sciences Institute, 4676 Admiralty Way, Marina del Rey, CA 90292, USA

Abstract

While it has recently become possible to build spoken dialogue systems that interact with users in real-time in a range of domains, systems that support conversational natural language are still subject to a large number of spoken language understanding (SLU) errors. Endowing such systems with the ability to reliably distinguish SLU errors from correctly understood utterances might allow them to correct some errors automatically or to interact with users to repair them, thereby improving the system's overall performance. We report experiments on learning to automatically distinguish SLU errors in 11,787 spoken utterances collected in a field trial of AT&T's How May I Help You system interacting with live customer traffic. We apply the automatic rule learner RIPPER (Cohen, 1996) to train an SLU classifier using features that are automatically obtainable in real-time. The classifier achieves 86% accuracy on this task, an improvement of 23% over the majority class baseline. We show that the most important features are those that the natural language understanding module can compute, suggesting that integrating the trained classifier into the NLU module of the How May I Help You system should be straightforward.

Table 1. Sample Successful Dialogue

S1: AT&T How may I help you?
U1: I need to [uh] put a call on my calling card please
S2: May I have your card number, please?
U2: 7 6 5 4 3 2 1 0 9 8 7 6 5 4
S3: What number would you like to call?
U3: 8 1 4 7 7 7 6 6 6 6 (misunderstood)
S4: May I have that number again?
U4: 8 1 4 7 7 7 6 6 6 6
S5: Thank you.

1. Introduction

Spoken dialogue systems promise efficient and natural access to a large variety of information sources and services from any phone. Several hundred systems that support system-initiative dialogues are currently being field-tested and deployed. However, systems that support mixed-initiative, conversational, natural language interaction are still subject to a large number of spoken language understanding (SLU) errors, which have a large impact on the system's performance (Walker et al., 2000b; Walker, 2000). If it were possible to automatically recognize these errors, we could improve the system manually, or even endow the system with the ability to correct some errors automatically or to interact with users to repair them.

The How May I Help You (HMIHY) conversational dialogue system developed at AT&T is designed to automatically handle a number of customer care tasks, such as person-to-person dialing, or receiving credit for a misdialed number. As an example of the benefit of SLU error detection, contrast the dialogues in Tables 1 and 2 between human callers and the HMIHY system.¹ HMIHY completed the dialogue in Table 1 successfully. Note that after detecting the SLU error in utterance U3, HMIHY reprompts the user to repeat the calling card number in utterance S4. However, the system frequently does not detect that an SLU error has occurred. In Table 2, the system misunderstands the caller's utterance in U2 as a request to make a third-number call. The system continues the dialogue with utterance S3 by asking which number the caller would like to bill the call to. The dialogue ends without completing the user's task.

¹ The phone numbers, card numbers, and pin numbers in the sample dialogues are artificial.
The goal of the work described in this paper is to improve HMIHY's ability to detect such SLU errors. We distinguish three classes of SLU outcomes: RCORRECT, a correctly understood utterance; RPARTIAL-MATCH, a partially understood utterance; and RMISMATCH, a misunderstood utterance. We describe experiments on learning to automatically distinguish these three classes of SLU outcomes in 11,787 spoken utterances collected in a field trial of AT&T's How May I Help You system interacting with live customer traffic. Using ten-fold cross-validation, we show that the resulting SLU error classifier can correctly identify whether an utterance is an SLU error 86% of the time, an improvement of 23% over the majority class baseline. In addition, we show that the most important features are those that the natural language understanding (NLU) module can compute, suggesting that it will be straightforward to integrate our SLU error classifier into the NLU module of the HMIHY system.

2. The How May I Help You Corpus

The How May I Help You (HMIHY) system has been under development at AT&T Labs for several years (Gorin et al., 1997; Boyce & Gorin, 1996). Its design is based on the notion of call routing (Gorin et al., 1997; Chu-Carroll & Carpenter, 1999). In the HMIHY call routing system, services that the user can access are classified into 14 categories, plus a category called other for calls that cannot be automated and must be transferred to a human customer care agent (Gorin et al., 1997). Each of the 14 categories describes a different task, such as person-to-person dialing, or receiving credit for a misdialed number. The system determines which task the caller is requesting on the basis of its understanding of the caller's response to the open-ended system greeting AT&T, How May I Help You?. Once the task has been determined, the information needed for completing the caller's request is obtained using dialogue submodules that are specific for each task (Abella & Gorin, 1999).

Table 2. HMIHY fails to complete the user's task.

S1: AT&T How may I help you?
U1: (silence)
S2: Sorry. Please briefly tell me how I may help you?
U2: I wanta find out how much my bill's worth.
S3: What phone number would you like the call billed to?
U3: Um the access number.
S4: Sorry. Could I have that phone number again?
U4: 1 8 hundred 2 2 5 5 2 8 8.
S5: What number would you like to call?
U5: 800 225 5288 (touchtone input)
S6: Thank you.

The corpus of dialogues illustrated in Tables 1 and 2 was collected in several experimental trials of HMIHY on live customer traffic (Riccardi & Gorin, 2000; E. Ammicht & Alonso, 1999). These trials were carried out at an AT&T customer care site, with the HMIHY system constantly monitored by a human customer care agent who could take over the call if s/he perceived that the system was having trouble completing the caller's task. The system was also given the ability to automatically transfer the call to a human agent if it believed that it was having trouble completing the caller's task.

The HMIHY system consists of an automatic speech recognizer, a natural language understanding module, a dialogue manager, and a computer telephony platform. During the trial, the behaviors of all the system modules were automatically recorded in a log file, and later the dialogues were transcribed by humans and labelled with one or more of the 15 task categories, representing the task that the caller was asking HMIHY to perform, on a per utterance basis. We will refer to this label as the HUMAN LABEL. The natural language understanding module (NLU) also logged what it believed to be the correct task category. We will call this label the NLU LABEL. This paper focuses on the problem of improving HMIHY's ability to automatically detect when the NLU LABEL is wrong. As mentioned above, this ability would allow HMIHY to make better decisions about when to transfer to a human customer care agent, but it might also support repairing such misunderstandings, either automatically or by interacting with a human caller.
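To make the relationship between these labels concrete, the short Python sketch below assigns one of the three SLU outcome classes by comparing an utterance's HUMAN LABEL set with the single task category logged as the NLU LABEL. The specific matching rule, and the task names used in the example, are illustrative assumptions rather than the paper's exact labelling procedure.

```python
# Illustrative sketch (not the paper's labelling procedure): derive an SLU
# outcome class from the transcribers' HUMAN LABEL(s) and the logged NLU LABEL.
# The matching rule and the task names below are assumptions for illustration.

def slu_outcome(human_labels: set, nlu_label: str) -> str:
    """Map one utterance's labels to an SLU outcome class (illustrative rule)."""
    if human_labels == {nlu_label}:
        return "RCORRECT"        # NLU label accounts for everything the caller asked
    if nlu_label in human_labels:
        return "RPARTIAL-MATCH"  # NLU label covers only part of a multi-label utterance
    return "RMISMATCH"           # NLU label matches none of the human labels

# Hypothetical task names, for illustration only.
print(slu_outcome({"calling-card-call"}, "third-number-call"))           # RMISMATCH
print(slu_outcome({"billing-credit", "rate-info"}, "rate-info"))         # RPARTIAL-MATCH
print(slu_outcome({"person-to-person-dial"}, "person-to-person-dial"))   # RCORRECT
```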
3. Experimental System, Data and Method

The experiments reported here primarily utilize the rule learning program RIPPER (Cohen, 1996) to automatically induce an SLU error classification model from the 11,787 utterances in the HMIHY corpus. Although we had several learners available to us, we had a number of reasons for choosing RIPPER. First, we believe that the if-then rules that are used to express the learned classification model are easy for people to understand and would affect the ease with which we could integrate the learned rules back into our spoken dialogue system. Second, RIPPER supports continuous, symbolic and textual bag features, while other learners do not support textual bag features. Finally, our application utilized RIPPER's facility for changing the loss ratio, since some classes of errors are more catastrophic than others.

However, since the HMIHY corpus is not publicly available, we also ran several other learners on our data in order to provide baseline comparisons on classification accuracy with several feature sets, including a feature set whose performance is statistically indistinguishable from our complete feature set. We report experiments from running these other learners below.

The features, summarized in Table 3, are either automatically logged by one of the system modules or derived from these logged features. The system modules that we extracted features from were the acoustic processor/automatic speech recognizer (Riccardi & Gorin, 2000), the natural language understanding module (Gorin et al., 1997), and the dialogue manager, along with a representation of the discourse history (Abella & Gorin, 1999). Because we want to examine the contribution of different modules to the problem of predicting SLU error, we trained a classifier that had access to all the features and compared its performance to classifiers that only had access to ASR features, to NLU features, and to discourse contextual features.

Table 3. Features for Spoken Utterances in the HMIHY Corpus

ASR Features
– recog, recog-numwords, asr-duration, dtmf-flag, rg-modality, rg-grammar, tempo

NLU Features
– a confidence measure for all of the possible tasks that the user could be trying to do
– salience-coverage, inconsistency, context-shift, top-task, nexttop-task, top-confidence, diff-confidence, confpertime, salpertime
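As a rough illustration of this evaluation design (not the paper's actual code), the following Python sketch trains a classifier on all features and on ASR-only, NLU-only, and discourse-only subsets, scores each with ten-fold cross-validation, and reports a majority-class baseline. Because the HMIHY corpus is not publicly available and RIPPER is not a standard Python library, the sketch uses randomly generated placeholder features, loosely named after a few Table 3 features, and scikit-learn's decision-tree learner as a stand-in rule learner.

```python
# Illustrative sketch only: placeholder data and a decision tree standing in
# for RIPPER, mirroring the evaluation design described above (ten-fold
# cross-validation, majority-class baseline, per-module feature subsets).
import numpy as np
import pandas as pd
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 500  # placeholder size; the real corpus has 11,787 utterances

# Hypothetical numeric stand-ins for a few of the Table 3 features.
data = pd.DataFrame({
    "asr_duration": rng.uniform(0.5, 10.0, n),
    "asr_numwords": rng.integers(1, 20, n),
    "nlu_top_confidence": rng.uniform(0.0, 1.0, n),
    "nlu_salience_coverage": rng.uniform(0.0, 1.0, n),
    "disc_turn_number": rng.integers(1, 8, n),
})
# Majority class of roughly 63% follows from the reported 86% accuracy being a
# 23% improvement over the baseline; the split of the remainder is assumed.
labels = rng.choice(["RCORRECT", "RPARTIAL-MATCH", "RMISMATCH"],
                    size=n, p=[0.63, 0.09, 0.28])

feature_sets = {
    "ALL": list(data.columns),
    "ASR only": [c for c in data.columns if c.startswith("asr_")],
    "NLU only": [c for c in data.columns if c.startswith("nlu_")],
    "Discourse only": [c for c in data.columns if c.startswith("disc_")],
}

# Majority-class baseline, analogous to the paper's reported baseline.
baseline = cross_val_score(DummyClassifier(strategy="most_frequent"),
                           data, labels, cv=10).mean()
print(f"majority baseline: {baseline:.2%}")

for name, cols in feature_sets.items():
    clf = DecisionTreeClassifier(max_depth=5, random_state=0)
    acc = cross_val_score(clf, data[cols], labels, cv=10).mean()
    print(f"{name:15s} accuracy: {acc:.2%}")
```

RIPPER's loss-ratio facility has no direct equivalent in this sketch; the closest scikit-learn analogue would be the class_weight parameter of the tree learner, which similarly penalizes some classes of misclassification more heavily than others.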
