Converting the Point of View of Messages Spoken to Virtual Assistants

Isabelle G. Lee, Vera Zu, Sai Srujana Buddi, Dennis Liang, Purva Kulkarni, Jack G.M. FitzGerald
Amazon Alexa
{isabelll, xinzu, buddisa, liangxin, kulkpurv, jgmf}@amazon.com

Abstract

Virtual Assistants can be quite literal at times. If a user says tell Bob I love him, most virtual assistants will extract the message I love him and send it to the user's contact named Bob, rather than properly converting the message to I love you. We designed a system that takes a voice message from one user, converts the point of view of the message, and then delivers the result to its target user. We developed a rule-based model, which integrates a linear text classification model, part-of-speech tagging, and constituency parsing with rule-based transformation methods. We also investigated Neural Machine Translation (NMT) approaches, including traditional recurrent networks, CopyNet, and T5. We explored 5 metrics to gauge both naturalness and faithfulness automatically, and we chose to use BLEU plus METEOR for faithfulness, as well as relative perplexity using a separately trained language model (GPT) for naturalness. Transformer-CopyNet and T5 performed similarly on faithfulness metrics, with T5 scoring 63.8 for BLEU and 83.0 for METEOR. CopyNet was the most natural, with a relative perplexity of 1.59. CopyNet also has 37 times fewer parameters than T5. We have publicly released our dataset, which is composed of 46,565 crowd-sourced samples.

Introduction

Virtual Assistants (VAs), such as Amazon Alexa or Google Assistant, are cloud-based software systems that ingest spoken utterances from individual users, detect the users' intents from the utterances, and perform the desired tasks (Kongthon et al., 2009; Tunstall-Pedoe, 2014; Memeti and Pllana, 2018). Common VA tasks include playing music, setting timers, answering encyclopedic questions, and controlling smart home devices (López et al., 2017). In addition, VAs are used for communication between two users. With the help of VAs, users can make and receive calls, as well as send a voice message or text message to their contacts. This paper focuses on voice messaging.

Messaging can be implemented in such a way that the messages are snipped from the utterance in their spoken form (Mohit, 2014). For example, a user may say tell Bob that I'm running late. The Named Entity Recognition (NER) model could extract I'm running late as the message content and pass that to the recipient directly. Such an approach works in many cases, but it does not perform well if the user says something like ask Bob if he's coming for dinner, for which the recipient would receive (if) he's coming for dinner using a simple NER snipping approach. In this way, direct messages are distinguished from indirect messages (Li, 1986). Indirect messages require further natural language processing (NLP) to convert the point of view (POV). Crucially, the model needs to identify the co-reference relation between Bob, the recipient, and the anaphor he.

Additional difficulty arises when we use the VA as an intermediary between the source and the recipient of the message, not simply a voice machine. Instead of reciting the message I'm running late verbatim to Bob, to achieve natural, believable human-robot interaction, the VA should say something like Joe says he's running late or simply Joe is running late. That is, for the VA to become a true intermediary in the conversation, the POV conversion must apply to direct messages as well. The VA-assisted conversation is exemplified in Table 1. It is a two-step process: messages are relayed from a source to the VA, and then from the VA to a recipient. The message and its context (e.g., who dictates the message, when and where) are interpreted by the VA, undergo POV conversion, and then are reproduced for the recipient.

Table 1: Examples of Human-VA Interaction

Example 1
  Joe → VA:  Tell Bob I'm running late
  VA → Bob:  Joe says he's running late
Example 2
  Joe → VA:  Ask Bob if he's coming for dinner
  VA → Bob:  Joe asks if you are coming for dinner

The most direct approach to solving POV conversion is to author a suite of rules for grammatical conversion. These rules could be used in conjunction with the named entity recognition that is already being performed by the VA. A rule-based approach is deterministic and easily traceable. If operationalized, it would be trivial to debug the model's behavior.
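To give a flavor of such rules, below is a minimal Python sketch of the reporting-verb and source-name portion of a rule-based converter. It is an illustration under simplified assumptions rather than the authors' implementation: the keyword test stands in for the linear message-type classifier, and pronoun and verb-agreement rewriting (which the paper handles with part-of-speech tagging and constituency parsing) is left out.

```python
# Minimal sketch of a rule-based skeleton (illustrative, not the paper's system):
# classify the message type, pick a reporting verb, and prepend the source name.
# Assumes NER has already extracted the source name and the (POV-shifted) message body.

INQUIRY_STARTERS = {"if", "whether", "who", "what", "when", "where", "why", "how",
                    "are", "is", "do", "does", "did", "can", "will"}

def message_type(message: str) -> str:
    """Crude stand-in for the linear classifier: label a message as inquiry or statement."""
    first_token = message.strip().lower().split()[0]
    return "inquiry" if first_token in INQUIRY_STARTERS else "statement"

def report(source: str, message: str) -> str:
    """Frame an already POV-shifted message as reported speech from the VA."""
    verb = "asks" if message_type(message) == "inquiry" else "says"
    return f"{source} {verb} {message}"

print(report("Joe", "he's running late"))             # -> Joe says he's running late
print(report("Joe", "if you are coming for dinner"))  # -> Joe asks if you are coming for dinner
```

A full rule suite would also have to resolve the co-reference between the recipient name and pronouns such as he, which is precisely the part that simple message snipping gets wrong.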
There are a few disadvantages of a rule-based approach, however. First, someone must hand-author the rules, which is particularly burdensome as the number of languages scales. Second, depending on the types of rules implemented, a rule-based approach may output converted utterances that sound robotic or unnatural. For these reasons, we also considered encoder-decoder approaches, which learn all necessary transformations directly from the data and which can perform natural language generation with some amount of variety in phrasing. POV conversion bears similarities to the machine translation (Edunov et al., 2018), paraphrasing (Witteveen and Andrews, 2019), text generation (Gatt and Krahmer, 2018), abstractive summarization (Gupta and Gupta, 2019), and question answering (Zhang et al., 2020) tasks, all of which have performed well using architectures that have either an encoder, a decoder, or both.

As a basic benchmark for an encoder-decoder sequence-to-sequence (Seq2Seq) model, we first consider a classic "LSTM-LSTM" model with dot-product attention (Sutskever et al., 2014; Bahdanau et al., 2014). From there, we tried a variety of encoders, decoders, and attention mechanisms. A quick modification of this classic structure is a transformer block as the encoder (Vaswani et al., 2017). A decoder structure that seemed particularly well suited to our task was CopyNet, which recognizes certain tokens to be copied directly from the input and injected into the output (Gu et al., 2016).
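For reference, the snippet below computes the dot-product attention used in the LSTM baseline: each decoder step scores the encoder outputs by a dot product and takes their weighted sum as a context vector. The tensor shapes are illustrative assumptions, and the snippet is a sketch rather than the authors' code.

```python
# Dot-product attention over encoder states: a minimal PyTorch sketch
# (illustrative shapes; not the authors' implementation).
import torch

batch, src_len, hidden = 2, 7, 256                     # assumed sizes
encoder_outputs = torch.randn(batch, src_len, hidden)  # one vector per source token
decoder_state = torch.randn(batch, hidden)             # current decoder hidden state

# Score each source position by its dot product with the decoder state.
scores = torch.bmm(encoder_outputs, decoder_state.unsqueeze(2)).squeeze(2)  # (batch, src_len)
weights = torch.softmax(scores, dim=-1)                                      # attention weights

# Context vector: attention-weighted sum of the encoder outputs.
context = torch.bmm(weights.unsqueeze(1), encoder_outputs).squeeze(1)        # (batch, hidden)
```

A copy mechanism such as CopyNet reuses weights of this kind to place source tokens, for example contact names or message spans, directly into the output sequence.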
As of the time of writing, high-performing systems for many NLP tasks are based on transformer architectures (Vaswani et al., 2017; Devlin et al., 2018; Radford et al., 2019; Lample and Conneau, 2019) that are first pretrained on large corpora of unlabeled data using masked language modeling and its variants, next-sentence prediction and its variants, and related tasks. The models are then fine-tuned on the desired downstream task. The Text-to-Text Transfer Transformer (T5) model (Raffel et al., 2019) is our choice of encoder-decoder transformer, which achieved state-of-the-art performance on SQuAD, SuperGLUE, and other benchmarks in 2019.

We are not aware of prior work specifically targeted at messaging point of view conversion for virtual assistants. This initial investigation into perspective switching begins to formulate what people frequently do in natural conversations. By extending this formulation to VAs, we provide mechanisms to parse out messy natural speech and maximize informational gain. [...] the writing (Lin et al., 2020; Yu et al., 2020).
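To make the fine-tuning setup concrete, the sketch below shows how a pretrained T5 checkpoint could be applied to POV conversion with the Hugging Face transformers library. The checkpoint name, task prefix, and input serialization are assumptions chosen for illustration, not the configuration reported in this paper.

```python
# Hypothetical application of a pretrained T5 checkpoint to POV conversion.
# The "convert pov:" prefix and the field layout below are assumed for illustration;
# without fine-tuning on POV data, the generated text will not be a useful conversion.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

source = "convert pov: source: Joe | recipient: Bob | message: I'm running late"
input_ids = tokenizer(source, return_tensors="pt").input_ids

# Fine-tuning would pair inputs like the one above with targets such as
# "Joe says he's running late" from the released dataset.
output_ids = model.generate(input_ids, max_length=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```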

Problem Statement

Given our assumption that the user name, contact name, and message content are known, our objective is to convert the POV of the voice message. Whether each step is performed explicitly, as with the rule-based model, or learned by the model, as with the Seq2Seq models, the POV conversion in our design subsumes the following sub-tasks.

When the VA relays a message from Joe to Jill, the source contact name, Joe, is a crucial piece of information for Jill. Yet it is often missing from the input. The first step of our POV conversion model, therefore, is to add the name of the source contact to the output utterance. On the other hand, the contact name, Jill, is a redundant piece of information for Jill herself and can optionally be removed. Note that with our dataset we do include the contact name in the converted output, because we assume that the VA is a communal device with multiple users (thereby making the contact name relevant even in the converted output).

Voice messages, like all sentences, come in different varieties and perform different functions. They can be used to give a statement, make a request, or ask a question (Austin, 1975). In our rule-based model, we conducted a classification task to categorize messages into different types. Accordingly, we needed to pair appropriate reporting verbs, i.e., verbs used to introduce quoted or paraphrased utterances, with distinct message types. Messages used to inquire (are you coming for dinner) work better with the verb ask (Joe asks if you are coming for dinner or Joe is asking if you are coming for dinner), whereas messages used to make an announcement (dinner's ready) work better with the verb say (Joe says dinner's ready).

A POV change often leads to changes in pronouns. This is illustrated in Table 2. Among other things, our model needs to be able to convert a first person (I) to a third person (he), and a third person (he) to a second person (you). Accompanying the change of pronoun is a change in verb form (i.e., am to is, and is to are), since the grammar requires that the verb and its subject agree in person.

Table 2: Examples of Pronominal Change

Example 1
  Input:   (Tell Bob) I am running late
  Output:  (Joe says) He is running late
Example 2
  Input:   (Ask Bob) if he is coming for dinner
  Output:  (Joe asks) if you are coming for dinner
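The pronominal and agreement changes in Table 2 amount to a small person-shifting map applied jointly to the pronoun and its verb. The dictionary below is a simplified assumption for illustration; it ignores gender resolution and the co-reference step needed to decide that he refers to the recipient.

```python
# Simplified person/agreement shifts behind the Table 2 examples (illustrative assumption).
# Source-side first person becomes third person about the source; pronouns that co-refer
# with the recipient become second person. The verb form shifts to keep agreement.
POV_SHIFTS = {
    ("I", "am"): ("he", "is"),     # "I am running late"          -> "he is running late"
    ("he", "is"): ("you", "are"),  # "if he is coming for dinner" -> "if you are coming for dinner"
}

def shift(pronoun: str, verb: str) -> tuple[str, str]:
    """Return the POV-shifted pronoun and the agreeing verb form, if a rule applies."""
    return POV_SHIFTS.get((pronoun, verb), (pronoun, verb))

print(shift("I", "am"))   # ('he', 'is')
print(shift("he", "is"))  # ('you', 'are')
```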
