Socially-Aware Dialogue System

Total Pages: 16

File Type: PDF, Size: 1020 KB

Socially-Aware Dialogue System

Ran Zhao
CMU-LTI-19-008

Language Technologies Institute
School of Computer Science
Carnegie Mellon University
5000 Forbes Ave., Pittsburgh, PA 15213
www.lti.cs.cmu.edu

Thesis Committee:
Alexander I. Rudnicky, CMU (Chair)
Alan W. Black, CMU (Co-Chair)
Louis-Philippe Morency, CMU
Amanda Stent, Bloomberg

Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies.

© 2019, Ran Zhao

Keywords: socially-aware framework, multimodal machine learning, temporal sequence learning, cognitive architecture, neural dialogue model, rapport, conversational strategy, spoken dialogue system, discourse analysis, social reasoning, natural language generation, negotiation

Abstract

In the past two decades, spoken dialogue systems, such as those commonly found in cellphones and other interactive devices, have emerged as a key factor in human-computer interaction. For instance, Apple's Siri, Microsoft's Cortana, and Amazon's Alexa help human users complete tasks more efficiently. However, research in this area has yet to produce dialogue systems that build interpersonal closeness over the course of a conversation while also carrying out the task. This thesis attempts to address that shortcoming. Specifically, research in computational linguistics (Bickmore and Cassell, 1999) has shown that people pursue multiple conversational goals in dialogue, which include those that fulfill propositional functions to contribute information to the dialogue; those that fulfill interactional functions to manage conversational turn-taking; and those that fulfill interpersonal functions to manage the relationships between interlocutors. Although spoken dialogue systems have greatly advanced in modeling the propositional and, to a lesser extent, interactional functions of human communication, these systems fall short in replicating the interpersonal functions of conversation. We propose that this interpersonal deficiency is due to insufficient models of interpersonal goals and strategies in human communication. As dialogue systems become more common and are used more frequently, propositional content and interactional content will not suffice.

In this thesis, therefore, we address these challenges by proposing a socially-aware intelligent framework that provides a path to systematically generate dialogues that fulfill interpersonal functions. In Zhao et al. (2014), we clarify that a socially-aware intelligent framework can explain how humans in dyadic interactions build, maintain, and tear down social bonds through specific conversational strategies that fulfill specific social goals and that are instantiated in particular verbal and nonverbal behaviors. In order to operationalize this framework, we argue that four capabilities are needed to achieve a socially-aware intelligent system. The system must (1) automatically infer human users' social intentions by recognizing their social conversational strategies, (2) accurately estimate social dynamics by observing dyadic interactions, (3) reason through appropriate conversational strategies while accounting for both the task goal and the social goal, and (4) realize surface-level utterances that blend task and social conversation. Our socially-aware dialogue system focuses on blended conversations that mix a goal-oriented task with social chat.
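A minimal sketch of how these four capabilities might be composed into one turn-taking loop is shown below; it is an illustrative toy, not code from the thesis, and every class, function, and heuristic in it is hypothetical.

```python
# Hypothetical sketch (not from the thesis): one way the four capabilities of a
# socially-aware dialogue system could be composed into a single turn-taking loop.
from dataclasses import dataclass, field

@dataclass
class DialogueState:
    rapport: float = 0.0                      # estimated social bond (capability 2)
    history: list = field(default_factory=list)

def recognize_strategy(utterance: str) -> str:
    """Capability 1: infer the user's conversational strategy (placeholder rules)."""
    if "remember when" in utterance.lower():
        return "reference_shared_experience"
    if "you" in utterance.lower() and "?" in utterance:
        return "self_disclosure_elicitation"
    return "task_talk"

def estimate_rapport(state: DialogueState, strategy: str) -> float:
    """Capability 2: update the rapport estimate from the observed strategy."""
    boost = {"reference_shared_experience": 0.2, "self_disclosure_elicitation": 0.1}
    return min(1.0, state.rapport + boost.get(strategy, 0.0))

def plan_strategy(state: DialogueState, task_goal: str) -> str:
    """Capability 3: pick a system strategy that balances task and social goals."""
    return "self_disclosure" if state.rapport < 0.5 else task_goal

def realize(strategy: str) -> str:
    """Capability 4: surface realization that blends task and social content."""
    templates = {
        "self_disclosure": "I love the afternoon sessions myself. What topics excite you?",
        "recommend_session": "Given what you told me, the 2pm dialogue session looks like a great fit.",
    }
    return templates.get(strategy, "Tell me more.")

def take_turn(state: DialogueState, user_utterance: str, task_goal: str) -> str:
    strategy = recognize_strategy(user_utterance)
    state.rapport = estimate_rapport(state, strategy)
    state.history.append((user_utterance, strategy))
    return realize(plan_strategy(state, task_goal))

if __name__ == "__main__":
    state = DialogueState()
    print(take_turn(state, "Do you ever go to the poster sessions?", "recommend_session"))
```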
As a proof of concept, we develop a knowledge-inspired socially-aware personal assistant that aids conference attendees by eliciting their preferences through building rapport, and then making informed, personalized recommendations about sessions to attend and people to meet. Finally, we leverage the power of neural networks to model negotiation dialogue within our socially-aware intelligent framework. We present a novel learning method and a two-phase computational model to blend negotiation utterances and social conversation. Our method requires less human effort than traditional knowledge-inspired approaches. We conduct comprehensive experiments to show that the system can facilitate negotiation while building a social bond with a human user.

Acknowledgments

I would like to sincerely express my gratitude, first and foremost, to my wonderfully supportive and helpful committee: Prof. Alexander I. Rudnicky, Prof. Alan Black, Prof. Louis-Philippe Morency and Dr. Amanda Stent. My PhD advisor, Prof. Rudnicky, allowed me freedom to explore the field of human-computer interaction, guided my future research directions, and appreciated and supported my ideas. Our relationship is deeper than the typical advisor-advisee relationship. Prof. Rudnicky generously shared with me his insights on the fundamental philosophy and theory of human communication, which sparked my curiosity about dialogue systems and inspired me to learn state-of-the-art models. Our meetings left me imagining new research directions that I had previously not thought possible, and to ensure rigor, he guided me to craft evaluation metrics for each study. Without his help, I could not have produced the quality work that follows. Prof. Black offered helpful and creative advice when I strayed off path. He taught me how to ground computational theories of speech and language with methods of practical implementation. He made me believe that one day a computer could speak like a human. I am privileged to have had the opportunity to learn about multimodal machine learning from Prof. Morency. His excellent teaching and thoughtful questions equipped me with both high-level methodology design and low-level model development in the research of human communication dynamics. Dr. Stent at Bloomberg provided me with an invaluable internship experience that engaged my great passion for applying deep learning to finance. She directed me to inspiring resources that broadened my horizons, and she taught me how to shape research ideas and design experiments. Even more, she helped me present our work at an international conference. I am extremely grateful for having four world-renowned experts on my committee. Thank you all for supporting my thesis work.

During my PhD journey, I was fortunate to work on different projects and collaborate with several faculty and talented researchers, including Prof. Justine Cassell, Prof. Mark Dredze, Dr. Arun Verma, Dr. David Rosenberg, Tanmay Sinha, Oscar J. Romero, Tiancheng Zhao, Yuntian Deng, Yoichi Matsuyama, Alexandros Papangelis, Arjun Bhardwaj, Sushma Akoju, Zhao Meng and Orson Xu. Thank you all for offering me precious research experience and lifelong friendship. As a student at CMU, I have learned so much about machine learning and natural language processing, and honed skills to implement that knowledge.
Here, I want to thank the professors who taught me at CMU: Tom Mitchell, Manuela Veloso, Ruslan Salakhutdinov, Graham Neubig, Eduard Hovy, Eric Nyberg, Reid Simmons, Chris Dyer, Noah Smith and Carolyn Penstein Rose. In the past six years, I have attended many conferences, which gave me opportunities to discuss and interact with many smart people, such as Prof. Jonathan Gratch, Prof. David Traum, Dr. Jason Weston, Prof. Stefan Scherer, Prof. Stefan Kopp, Prof. Timothy Bickmore, Prof. Yukiko Nakano, Prof. Hannes Vilhjalmsson, Natasha Jaques, Dr. Bing Liu, Elena Corina Grigore, Dr. Arno Hartholt, and many more than I can list here. I am inspired by their passion and grateful for their contributions to our field.

In addition, I want to thank my sponsors, Yahoo! and Amazon, for their generous funding; Dr. Doug Phillips for his professional English revision; and my undergraduate research assistants for their annotation work. I am grateful for your assistance, which sped up my research and made my life much easier.

Finally, to my dearest family: Thank you. To my parents, for your unconditional love and support. Thank you, Jingjing, my wonderful wife and soulmate, for facing with me the challenges of the last six years and giving me the courage to overcome the difficulties. Thank you, Winston and Harry, my two cute sons, for making my life meaningful and enjoyable. This dissertation is as much yours as it is mine.

Contents

1 Introduction  1
  1.1 Thesis Statement  4
  1.2 Thesis Structure  4
2 Theoretical Framework  7
  2.1 Introduction  7
  2.2 Related Work  8
  2.3 Theoretical Framework for Rapport Management  9
  2.4 Computational Model of Rapport Management  13
  2.5 Examples from Corpus Data  15
  2.6 Conclusions  15
3 A Knowledge-inspired Socially-aware Personal Assistant (SAPA)  17
  3.1 Overview of Socially-aware Personal Assistant  17
    3.1.1 Introduction  17
    3.1.2 Computational Architecture  18
    3.1.3 Sample Dialogues  19
  3.2 Predictive Model for Conversational Strategies Recognition  20
    3.2.1 Introduction  20
    3.2.2 Related Work  20
    3.2.3 Ground Truth  21
    3.2.4 Understanding Conversational Strategies  21
    3.2.5 Machine Learning Modeling  25
    3.2.6 Results and Discussion  26
    3.2.7 Post-experiment  27
    3.2.8 Conclusions  29
  3.3 Predictive Model for Rapport Assessment  30
    3.3.1 Introduction and Motivation  30
    3.3.2 Related Work  31
    3.3.3 Study Context  33
    3.3.4 Method  34
    3.3.5 Experimental Results  35
    3.3.6 Validation and Discussion  36
    3.3.7 Conclusions and Future Work  40
  3.4 Conversational Strategy Planning for Social Dialogue  40
    3.4.1 Introduction and Motivation  40
    3.4.2 Related Work  41
    3.4.3 System Architecture  42
    3.4.4 Computational Model  42
    3.4.5 Design of the Decision-Making Module  45
    3.4.6 Experimentation and Results  46
    3.4.7 Conclusions  49
4 A Neural Social Intelligent Negotiation Dialogue System (SOGO)  50
  4.1 Introduction and Motivation  50
  4.2 Study Context  52
  4.3 Related Work  52
    4.3.1 Negotiation Agent  52
    4.3.2 Social Intelligent Agent  54
  4.4 SOGO 1.0: A Semi-automated Socially-aware Negotiation System  54
    4.4.1 Two-phase Computational Model  54
    4.4.2 System Architecture  61
    4.4.3 Pilot Study with SOGO 1.0 System  62
    4.4.4 Evaluation  63
    4.4.5 Discussion  66
  4.5 SOGO 2.0: A Fully-automated Socially-aware Negotiation System  67
    4.5.1 Proposed Framework  69
    4.5.2 Rapport Estimator  69
    4.5.3 Socially-aware Dialogue Model
Recommended publications
  • Deep Learning for Dialogue Systems
    Deep Learning for Dialogue Systems
    Yun-Nung Chen (National Taiwan University, Taipei, Taiwan), Asli Celikyilmaz (Microsoft Research, Redmond, WA), Dilek Hakkani-Tür (Google Research, Mountain View, CA). [email protected], [email protected], [email protected]

    Abstract: Goal-oriented spoken dialogue systems have been the most prominent component in today's virtual personal assistants, which allow users to speak naturally in order to finish tasks more efficiently. The advancement of deep learning technologies has recently given rise to applications of neural models to dialogue modeling. However, applying deep learning technologies to building robust and scalable dialogue systems is still a challenging task and an open research area, as it requires a deeper understanding of the classic pipelines as well as detailed knowledge of prior work and the recent state of the art. Therefore, this tutorial is designed to focus on an overview of dialogue system development while describing the most recent research for building dialogue systems and summarizing the challenges, in order to allow researchers to study potential improvements of state-of-the-art dialogue systems. The tutorial material is available at http://deepdialogue.miulab.tw.

    1 Tutorial Overview: With the rising trend of artificial intelligence, more and more devices have incorporated goal-oriented spoken dialogue systems. Among popular virtual personal assistants, Microsoft's Cortana, Apple's Siri, Amazon Alexa, and Google Assistant have incorporated dialogue system modules in various devices, which allow users to speak naturally in order to finish tasks more efficiently. Traditional conversational systems have rather complex and/or modular pipelines.
  • A Survey on Dialogue Systems: Recent Advances and New Frontiers
    A Survey on Dialogue Systems: Recent Advances and New Frontiers
    Hongshen Chen (Data Science Lab, JD.com), Xiaorui Liu (Data Science and Engineering Lab, Michigan State University), Dawei Yin (Data Science Lab, JD.com), and Jiliang Tang (Data Science and Engineering Lab, Michigan State University). [email protected], [email protected], [email protected]

    Abstract: Dialogue systems have attracted more and more attention. Recent advances in dialogue systems are overwhelmingly contributed by deep learning techniques, which have been employed to enhance a wide range of big data applications such as computer vision, natural language processing, and recommender systems. For dialogue systems, deep learning can leverage a massive amount of data to learn meaningful feature representations and response generation strategies, while requiring a minimum amount of hand-crafting. In this article, we give an overview of these recent advances in dialogue systems from various perspectives and discuss some possible research directions. In particular, we generally divide existing dialogue systems into task-oriented and non-task-oriented models, then detail how deep learning techniques help them with representative algorithms and finally discuss some appealing research directions that can bring dialogue system research into a new frontier.

    [Figure 1: Traditional Pipeline for Task-oriented Systems — NLU, Dialogue State Tracking, Policy Learning, and NLG; example exchange: "May I know your name?" / "I am Robot."]

    ... certain tasks (e.g. finding products, and booking accommodations and restaurants). The widely applied approach to task-oriented systems is to treat the dialogue response as a pipeline, as shown in Figure 1. The systems first understand the message given by the human, represent it as an internal state, then take some actions according to the policy with respect ...
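As a toy illustration of the traditional pipeline in Figure 1 (not code from the survey), the sketch below wires four placeholder stages together; all function names, rules, and templates are invented.

```python
# Illustrative sketch (not from the survey) of the classic task-oriented pipeline:
# NLU -> dialogue state tracking -> policy -> NLG.

def nlu(utterance: str) -> dict:
    """Toy natural language understanding: map text to an intent and slots."""
    if "name" in utterance.lower():
        return {"intent": "request_name", "slots": {}}
    return {"intent": "inform", "slots": {"raw": utterance}}

def track_state(state: dict, frame: dict) -> dict:
    """Dialogue state tracking: fold the new semantic frame into the state."""
    state = dict(state)
    state["last_intent"] = frame["intent"]
    state.setdefault("slots", {}).update(frame["slots"])
    return state

def policy(state: dict) -> str:
    """Dialogue policy: choose a system action given the tracked state."""
    return "introduce_self" if state.get("last_intent") == "request_name" else "ask_more"

def nlg(action: str) -> str:
    """Natural language generation: realize the chosen action as text."""
    return {"introduce_self": "I am Robot.", "ask_more": "Could you tell me more?"}[action]

state: dict = {}
frame = nlu("May I know your name?")
state = track_state(state, frame)
print(nlg(policy(state)))   # -> "I am Robot."
```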
  • A Robust Dialogue System with Spontaneous Speech
    A Robust Dialogue System with Spontaneous Speech Understanding and Cooperative Response
    Toshihiko ITOH, Akihiro DENDA, Satoru KOGURE and Seiichi NAKAGAWA — Department of Information and Computer Sciences, Toyohashi University of Technology, Tenpaku-cho, Toyohashi-shi, Aichi-ken, 441, Japan. E-mail: {itoh, akihiro, kogure, nakagawa}@slp.tutics.tut.ac.jp

    1 Introduction: A spoken dialogue system that can understand spontaneous speech needs to handle an extensive range of speech in comparison with the read speech that has been studied so far. The spoken language has a looser grammar than the written language and has ambiguous phenomena such as interjections, ellipses, inversions, repairs, unknown words and so on. It must be noted that the recognition rate of the speech recognizer is limited by the trade-off between the looseness of linguistic constraints and recognition precision, and that the recognizer may output a sentence as a recognition result which a human would never say. Therefore, the interpreter that receives recognized sentences must cope not only with ...

    2 A Multi-Modal Dialogue System: The domain of our dialogue system is "Mt. Fuji sightseeing guidance" (the vocabulary size is 292 words for the recognizer and 948 words for the interpreter, and the test-set word perplexity is 103). The dialogue system is composed of four parts: input by speech recognizer and touch screen, graphical user interface, interpreter, and response generator. The latter two parts are described in Sections 3 and 4.
    2.1 Spontaneous Speech Recognizer: The speech recognizer uses a frame-synchronous one-pass Viterbi algorithm and an Earley-like parser for context-free grammar, while using HMMs as syllable units.
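The recognizer described above combines a frame-synchronous one-pass Viterbi search over syllable HMMs with an Earley-like CFG parser. The sketch below shows only the Viterbi-over-HMM part on a toy model; the states, observations, and probabilities are invented, and the grammar layer is omitted.

```python
# Minimal Viterbi decoding over a toy HMM, illustrating (in much simplified form)
# the frame-synchronous search such a recognizer performs. All parameters are made up.
import numpy as np

states = ["sil", "fu", "ji"]                     # toy "syllable" units
obs = [0, 1, 2, 2]                               # toy acoustic frame labels

start = np.log(np.array([0.8, 0.1, 0.1]))
trans = np.log(np.array([[0.6, 0.3, 0.1],
                         [0.1, 0.6, 0.3],
                         [0.1, 0.2, 0.7]]))
emit = np.log(np.array([[0.7, 0.2, 0.1],         # P(frame label | state)
                        [0.2, 0.6, 0.2],
                        [0.1, 0.2, 0.7]]))

def viterbi(obs):
    T, N = len(obs), len(states)
    delta = np.full((T, N), -np.inf)             # best log-probability per (frame, state)
    backptr = np.zeros((T, N), dtype=int)
    delta[0] = start + emit[:, obs[0]]
    for t in range(1, T):                        # frame-synchronous: one frame at a time
        for j in range(N):
            scores = delta[t - 1] + trans[:, j]
            backptr[t, j] = int(np.argmax(scores))
            delta[t, j] = scores[backptr[t, j]] + emit[j, obs[t]]
    path = [int(np.argmax(delta[-1]))]           # backtrace the best state sequence
    for t in range(T - 1, 0, -1):
        path.append(backptr[t, path[-1]])
    return [states[i] for i in reversed(path)]

print(viterbi(obs))                              # -> ['sil', 'fu', 'ji', 'ji']
```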
  • Dialog Systems and Chatbots
    Speech and Language Processing. Daniel Jurafsky & James H. Martin. Copyright © 2018. All rights reserved. Draft of August 12, 2018.

    CHAPTER 25: Dialog Systems and Chatbots

    "Les lois de la conversation sont en général de ne s'y appesantir sur aucun objet, mais de passer légèrement, sans effort et sans affectation, d'un sujet à un autre ; de savoir y parler de choses frivoles comme de choses sérieuses."
    (The rules of conversation are, in general, not to dwell on any one subject, but to pass lightly from one to another without effort and without affectation; to know how to speak about trivial topics as well as serious ones.) — The 18th C. Encyclopedia of Diderot, start of the entry on conversation

    The literature of the fantastic abounds in inanimate objects magically endowed with sentience and the gift of speech. From Ovid's statue of Pygmalion to Mary Shelley's Frankenstein, there is something deeply moving about creating something and then having a chat with it. Legend has it that after finishing his sculpture Moses, Michelangelo thought it so lifelike that he tapped it on the knee and commanded it to speak. Perhaps this shouldn't be surprising. Language is the mark of humanity and sentience, and conversation or dialog is the most fundamental and specially privileged arena of language. It is the first kind of language we learn as children, and for most of us, it is the kind of language we most commonly indulge in, whether we are ordering curry for lunch or buying spinach, participating in business meetings or talking with our families, booking airline flights or complaining about the weather.
  • Survey on Chatbot Design Techniques in Speech Conversation Systems
    (IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 6, No. 7, 2015
    Survey on Chatbot Design Techniques in Speech Conversation Systems
    Sameera A. Abdul-Kader (School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK; Diyala University, Diyala, Iraq), Dr. John Woods (School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK)

    Abstract: Human-computer speech is gaining momentum as a technique of computer interaction. There has been a recent upsurge in speech-based search engines and assistants such as Siri, Google Chrome and Cortana. Natural Language Processing (NLP) techniques such as NLTK for Python can be applied to analyse speech, and intelligent responses can be found by designing an engine to provide appropriate human-like responses. This type of programme is called a Chatbot, which is the focus of this study. This paper presents a survey on the techniques used to design Chatbots, and a comparison is made between different design techniques from nine carefully selected papers according to the main methods adopted. These papers are representative of the significant improvements in Chatbots in the last decade. The paper discusses the similarities and differences ... selected works are compared with those used in Loebner-Prize Chatbots. The findings are discussed and conclusions are drawn at the end.

    II. Background — A. Human-Computer Speech Interaction: Speech recognition is one of the most natural and sought-after techniques in computer and networked device interaction and has only recently become possible (in the last two decades) with the advent of fast computing. Speech is a sophisticated signal and happens at different levels: "semantic, linguistic, articulatory, and acoustic" [3].
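Since the survey highlights NLTK for Python, here is a minimal pattern-matching chatbot using NLTK's built-in chat utility; the patterns and replies are our own illustration, not taken from any of the surveyed systems.

```python
# Small pattern-matching chatbot in the spirit the survey describes, using NLTK's
# chat utility. The regex patterns and canned responses below are illustrative only.
from nltk.chat.util import Chat, reflections

pairs = [
    (r"my name is (.*)", ["Hello %1, how can I help you today?"]),
    (r"what is your name\??", ["I'm a simple demo chatbot."]),
    (r"(.*) weather (.*)", ["I can't check live weather, but I hope it's sunny!"]),
    (r"quit", ["Goodbye!"]),
    (r"(.*)", ["Could you rephrase that?"]),
]

bot = Chat(pairs, reflections)          # reflections maps e.g. "I" -> "you" in echoes
print(bot.respond("My name is Sam"))    # greets the user using the captured name
```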
  • User Concepts for In-Car Speech Dialogue Systems and Their Integration Into a Multimodal Human-Machine Interface
    User Concepts for In-Car Speech Dialogue Systems and their Integration into a Multimodal Human-Machine Interface
    Dissertation accepted by the Faculty of Philosophy and History of the University of Stuttgart for the degree of Doctor of Philosophy (Dr. phil.), submitted by Sandra Mann from Aalen. Main examiner: Prof. Dr. Grzegorz Dogil; co-examiner: Apl. Prof. Dr. Bernd Möbius. Date of the oral examination: 02.02.2010. Institut für Maschinelle Sprachverarbeitung (Institute for Natural Language Processing), Universität Stuttgart, 2010.

    Acknowledgements: This dissertation developed during my work at Daimler AG Group Research and Advanced Engineering in Ulm, formerly DaimlerChrysler AG. Graduation was accomplished at the University of Stuttgart, Institute for Natural Language Processing (IMS), at the chair of Experimental Phonetics. I would like to thank Prof. Dr. Grzegorz Dogil from the IMS of the University of Stuttgart for supervising this thesis and supporting me in scientific matters. At this point I would also like to thank Apl. Prof. Dr. Bernd Möbius for being secondary supervisor. I also wish to particularly thank my mentors at Daimler AG, Dr. Ute Ehrlich and Dr. Susanne Kronenberg, for valuable advice on speech dialogue systems. They were always available to discuss matters concerning my research and gave constructive comments. Besides, I would like to thank Paul Heisterkamp, whose long-time experience in the field of human-machine interaction was very valuable to me. Special thanks go to all colleagues from the speech dialogue team, the recognition team, and the acoustics team. I very much acknowledge the good atmosphere as well as the fruitful discussions, advice, and criticism contributing to this thesis. I would especially like to mention Dr.
  • Influencing an Artificial Conversational Entity by Information Fusion
    University of West Bohemia, Department of Computer Science and Engineering, Univerzitní 8, 306 14 Plzeň, Czech Republic
    Influencing an Artificial Conversational Entity by Information Fusion — Technical report
    Ing. Jaromír Salamon. Technical report No. DCSE/TR-2020-04, June 2020. Distribution: public.

    Abstract: The dissertation thesis proposes a new method of using an artificial conversational entity (later also dialogue system or chatbot) influenced by information or information fusion. This new method could potentially serve various purposes, such as fitness and well-being support, mental health support, mental illness treatment, and the like. To propose such a new method, one has to investigate two main topics: whether it is possible to influence the dialogue system by information (fusion), and what kind of data needs to be collected and prepared for such influence. The dialogue system uses textual data from the conversation to determine the context of the human interaction and decide on the next response. It behaves correctly without external influence from data, but with the influence, the dialogue system needs to react adequately, without hiccups in the conversation. The data used to influence the dialogue system can be a combination of qualitative measures (sentiment extracted from text, tone determined from voice, emotion revealed from the face) and measured quantitative values (heartbeat measured by a wearable, correctly performed exercise detected on camera, focus on an activity found from EEG). All the research on the relation between those two topics is described in the following 200-plus pages.
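A minimal sketch of the kind of fusion the report proposes, assuming a simple late-fusion weighted sum; the signal names, scales, weights, and thresholds below are all invented for illustration.

```python
# Hypothetical sketch of influencing a dialogue system by information fusion:
# qualitative signals (text sentiment, voice tone, facial emotion) and quantitative
# signals (wearable heart rate) are combined into one score that biases the response.
from dataclasses import dataclass

@dataclass
class Signals:
    text_sentiment: float   # -1 (negative) .. +1 (positive)
    voice_tone: float       # -1 (tense)    .. +1 (calm)
    face_emotion: float     # -1 (distress) .. +1 (joy)
    heart_rate_bpm: float   # raw quantitative measurement

def fuse(s: Signals) -> float:
    """Late fusion: weighted sum of normalized signals, clipped to [-1, 1]."""
    arousal = max(-1.0, min(1.0, (s.heart_rate_bpm - 70.0) / 50.0))  # crude normalization
    score = 0.4 * s.text_sentiment + 0.2 * s.voice_tone + 0.3 * s.face_emotion - 0.1 * arousal
    return max(-1.0, min(1.0, score))

def influence_response(base_response: str, fusion_score: float) -> str:
    """Let the fused signal steer the dialogue system's next move."""
    if fusion_score < -0.3:
        return "You seem a bit tense. Shall we slow down for a moment?"
    if fusion_score > 0.3:
        return base_response + " You sound great today, by the way!"
    return base_response

signals = Signals(text_sentiment=-0.6, voice_tone=-0.4, face_emotion=-0.2, heart_rate_bpm=110)
print(influence_response("Let's continue with the next exercise.", fuse(signals)))
```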
  • A Generative Dialogue System for Reminiscence Therapy
    Universitat Politècnica de Catalunya, Escola Tècnica Superior d'Enginyeria de Telecomunicació de Barcelona
    A Generative Dialogue System for Reminiscence Therapy
    Mariona Carós Roca. Advisors: Xavier Giró-i-Nieto, Petia Radeva. A thesis submitted in fulfillment of the requirements for the Master in Telecommunications Engineering. Barcelona, September 2019.

    Abstract: With people living longer than ever, the number of cases with neurodegenerative diseases such as Alzheimer's or cognitive impairment increases steadily. In Spain it affects more than 1.2 million patients, and it is estimated that in 2050 more than 100 million people will be affected. While there are no effective treatments for this terminal disease, therapies such as reminiscence, which stimulate memories of the patient's past, are recommended, as they encourage communication and produce mental and emotional benefits for the patient. Currently, reminiscence therapy takes place in hospitals or residences, where the therapists are located. Since people who receive this therapy are old and may have mobility difficulties, we present an AI solution to guide older adults through reminiscence sessions by using their laptop or smartphone. Our solution consists of a generative dialogue system composed of two deep learning architectures to recognize image and text content. An Encoder-Decoder with Attention is trained to generate questions from photos provided by the user; it is composed of a pretrained Convolutional Neural Network to encode the picture and a Long Short-Term Memory to decode the image features and generate the question. The second architecture is a sequence-to-sequence model that provides feedback to engage the user in the conversation.
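A compact PyTorch sketch of the question generator described above (pretrained CNN features attended by an LSTM decoder); dimensions, vocabulary, and training details are omitted or invented, so this is an interface illustration rather than the thesis implementation.

```python
# Minimal PyTorch sketch (not the thesis code) of an encoder-decoder with attention:
# regional CNN image features are attended by an LSTM decoder that emits question tokens.
import torch
import torch.nn as nn

class QuestionGenerator(nn.Module):
    def __init__(self, vocab_size=1000, feat_dim=512, embed_dim=256, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.attn = nn.Linear(hidden_dim + feat_dim, 1)          # simple attention scoring
        self.lstm = nn.LSTMCell(embed_dim + feat_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, feats, tokens):
        """feats: (B, R, feat_dim) regional CNN features; tokens: (B, T) question prefix."""
        B, R, _ = feats.shape
        h = feats.new_zeros(B, self.lstm.hidden_size)
        c = feats.new_zeros(B, self.lstm.hidden_size)
        logits = []
        for t in range(tokens.shape[1]):
            # attention weights over the R image regions, conditioned on the decoder state
            scores = self.attn(torch.cat([h.unsqueeze(1).expand(-1, R, -1), feats], dim=-1))
            alpha = torch.softmax(scores, dim=1)                  # (B, R, 1)
            context = (alpha * feats).sum(dim=1)                  # (B, feat_dim)
            h, c = self.lstm(torch.cat([self.embed(tokens[:, t]), context], dim=-1), (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)                         # (B, T, vocab_size)

# Toy usage: 2 images with 49 regional features each (e.g. a 7x7 CNN grid), 5-token prefix.
feats = torch.randn(2, 49, 512)
tokens = torch.randint(0, 1000, (2, 5))
print(QuestionGenerator()(feats, tokens).shape)                   # torch.Size([2, 5, 1000])
```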
  • A Context-Aware Conversational Agent in the Rehabilitation Domain
    Future Internet — Article: A Context-Aware Conversational Agent in the Rehabilitation Domain
    Thanassis Mavropoulos 1,*, Georgios Meditskos 1,*, Spyridon Symeonidis 1, Eleni Kamateri 1, Maria Rousi 1, Dimitris Tzimikas 2, Lefteris Papageorgiou 2, Christos Eleftheriadis 3, George Adamopoulos 4, Stefanos Vrochidis 1 and Ioannis Kompatsiaris 1
    1 Centre for Research and Technology Hellas, 57001 Thessaloniki, Greece; [email protected] (S.S.); [email protected] (E.K.); [email protected] (M.R.); [email protected] (S.V.); [email protected] (I.K.)
    2 Entranet Ltd., 54249 Thessaloniki, Greece; [email protected] (D.T.); [email protected] (L.P.)
    3 ELBIS P.C., 57001 Thessaloniki, Greece; [email protected]
    4 Evexia S.A., 63080 Thessaloniki, Greece; [email protected]
    * Correspondence: [email protected] (T.M.); [email protected] (G.M.)
    Received: 30 September 2019; Accepted: 30 October 2019; Published: 1 November 2019

    Abstract: Conversational agents are reshaping our communication environment and have the potential to inform and persuade in new and effective ways. In this paper, we present the underlying technologies and the theoretical background behind a health-care platform dedicated to supporting medical staff and individuals with movement disabilities and to providing advanced monitoring functionalities in hospital and home surroundings. The framework implements an intelligent combination of two research areas: (1) sensor- and camera-based monitoring to collect, analyse, and interpret people's behaviour and (2) natural machine-human interaction through an apprehensive virtual assistant benefiting ailing patients. In addition, the framework serves as an important assistant to caregivers and clinical experts to obtain information about the patients in an intuitive manner.
  • A Dialogue-Act Taxonomy for a Virtual Coach Designed to Improve the Life of Elderly
    Multimodal Technologies and Interaction — Article: A Dialogue-Act Taxonomy for a Virtual Coach Designed to Improve the Life of Elderly
    César Montenegro 1,†, Asier López Zorrilla 2,†, Javier Mikel Olaso 2,†, Roberto Santana 1,*, Raquel Justo 2, Jose A. Lozano 1,3 and María Inés Torres 2,*
    1 ISG-UPV/EHU, Faculty of Computer Science, Paseo Manuel de Lardizabal 1, 20018 Donostia-San Sebastián, Gipuzkoa, Spain
    2 SPIN-UPV/EHU, Department of Electrical and Electronics, Faculty of Science and Technology, Campus de Leioa, 48940 Leioa, Bizkaia, Spain
    3 BCAM, Alameda de Mazarredo 14, 48009 Bilbao, Spain
    * Correspondence: [email protected] (R.S.); [email protected] (M.I.T.)
    † These authors contributed equally to this work.
    Received: 11 June 2019; Accepted: 4 July 2019; Published: 11 July 2019

    Abstract: This paper presents a dialogue-act taxonomy designed for the development of a conversational agent for the elderly. The main goal of this conversational agent is to improve the life quality of the user by means of coaching sessions on different topics. In contrast to other approaches, such as task-oriented dialogue systems and chit-chat implementations, the agent should display a proactive attitude, driving the conversation to reach a number of diverse coaching goals. Therefore, the main characteristic of the introduced dialogue-act taxonomy is its capacity to support communication based on the GROW model for coaching. In addition, the taxonomy has a hierarchical structure between the tags and is multimodal. We use the taxonomy to annotate a Spanish dialogue corpus collected from a group of elderly people.
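A small illustration of how such a hierarchical, multimodal taxonomy could be represented in code; the GROW phases (Goal, Reality, Options, Will) are the real coaching stages, but the concrete tags and modalities below are hypothetical, not the paper's taxonomy.

```python
# Hypothetical hierarchical, multimodal dialogue-act taxonomy organized by GROW phases.
# The specific tags and modalities are invented for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class DialogueAct:
    phase: str        # top level: GROW coaching phase
    act: str          # second level: concrete dialogue act
    modality: str     # "verbal", "gesture", or "both"

TAXONOMY = {
    "Goal":    ["elicit_goal", "confirm_goal"],
    "Reality": ["ask_current_habits", "reflect_back"],
    "Options": ["propose_option", "compare_options"],
    "Will":    ["ask_commitment", "schedule_followup"],
}

def make_act(phase: str, act: str, modality: str = "verbal") -> DialogueAct:
    if act not in TAXONOMY.get(phase, []):
        raise ValueError(f"{act!r} is not a child of phase {phase!r}")
    return DialogueAct(phase, act, modality)

# Annotating one coach turn: a verbal proposal accompanied by a pointing gesture.
print(make_act("Options", "propose_option", modality="both"))
```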
  • Response Selection for End-To-End Retrieval-Based Dialogue Systems Basma El Amel Boussaha
    Response Selection for End-to-End Retrieval-Based Dialogue Systems
    Basma El Amel Boussaha. Computation and Language [cs.CL]. Université de Nantes (UN), 2019. English. tel-02926608.
    HAL Id: tel-02926608, https://hal.archives-ouvertes.fr/tel-02926608, submitted on 31 Aug 2020. HAL is a multi-disciplinary open-access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

    Doctoral thesis of the Université de Nantes, COMUE Université Bretagne Loire, Ecole Doctorale No. 601 (Mathematics and Information and Communication Sciences and Technologies), speciality: Computer Science. By Basma El Amel Boussaha: "Response Selection for End-to-End Retrieval-Based Dialogue Systems". Thesis presented and defended in Nantes on 23 October 2019. Research unit: Laboratoire des Sciences du Numérique de Nantes (LS2N).
    Reviewers: Frédéric Béchet (Professor, Aix-Marseille Université, LIS); Sophie Rosset (Research Director, Université Paris-Sud, LIMSI).
    Jury — President: Yannick Estève (Professor, Université d'Avignon, LIA); Examiners: Frédéric Béchet (Professor, Aix-Marseille Université, LIS), Sophie Rosset (Research Director, Université Paris-Sud, LIMSI), Yannick Estève (Professor, Université d'Avignon, LIA); Dir.
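A toy sketch of the retrieval-based response-selection setting the thesis addresses: score each candidate response against the dialogue context and return the best match. A bag-of-words cosine similarity stands in for the learned neural matching model, purely for illustration.

```python
# Toy retrieval-based response selection: rank candidate responses by similarity to
# the dialogue context. The thesis studies end-to-end neural matching models; this
# bag-of-words cosine is only a stand-in for the learned matching function.
import math
from collections import Counter

def bow(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def select_response(context: str, candidates: list[str]) -> str:
    return max(candidates, key=lambda cand: cosine(bow(context), bow(cand)))

context = "my laptop cannot connect to the office wifi since this morning"
candidates = [
    "Have you tried restarting the wifi adapter on the laptop?",
    "The cafeteria menu changes every Monday.",
    "You can book a meeting room through the portal.",
]
print(select_response(context, candidates))   # picks the wifi-related candidate
```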
  • Challenges in Designing Natural Language Interfaces for Complex Visual Models
    Challenges in Designing Natural Language Interfaces for Complex Visual Models
    Henrik Voigt 1,2, Monique Meuschke 1, Kai Lawonn 1 and Sina Zarrieß 2 — 1 University of Jena, 2 University of Bielefeld. [email protected], [email protected]

    Abstract: Intuitive interaction with visual models becomes an increasingly important task in the field of Visualization (VIS), and verbal interaction represents a significant aspect of it. Vice versa, modeling verbal interaction in visual environments is a major trend in ongoing research in NLP. To date, research on Language & Vision, however, mostly happens at the intersection of NLP and Computer Vision (CV), and much less at the intersection of NLP and Visualization, which is an important area in Human-Computer Interaction (HCI). This paper presents a brief survey of recent work on interactive tasks and set-ups in NLP and Visualization. We discuss the respective methods, show interesting gaps, and conclude by suggesting neural, visually grounded dialogue modeling as a promising potential for NLIs for visual models.

    In the VIS community, interaction with visual models plays an important role, and natural language interaction represents a big part of it (Bacci et al., 2020; Kumar et al., 2020; Srinivasan et al., 2020). Natural Language Interfaces (NLIs) that support interactive visualizations based on language queries have found increasing interest in recent research (Narechania et al., 2020; Yu and Silva, 2020; Fu et al., 2020). However, from an NLP point of view, the methods applied in these recent interfaces mostly rely on established methods for implementing semantic parsers that map natural language instructions to symbolic database queries, which are consecutively visualized by a visualization pipeline. In this paper, we argue that there is space for further support of intuitive interaction with visual models using state-of-the-art NLP methods that would also pose novel and interesting ...
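A toy example of the semantic-parsing pattern described above: a rule-based parser maps a natural-language request to a symbolic chart specification that a downstream visualization pipeline could render. The spec format and field names are invented (loosely Vega-Lite-like); this is not any cited system's parser.

```python
# Toy rule-based semantic parser: natural-language instruction -> symbolic chart spec.
import re

def parse_to_spec(utterance: str) -> dict:
    """Map e.g. 'show average price by region as a bar chart' to a chart spec."""
    text = utterance.lower()
    agg = "average" if "average" in text or "mean" in text else "sum"
    mark = "line" if "line" in text or "trend" in text else "bar"
    match = re.search(r"(?:average|mean|sum of)?\s*(\w+)\s+by\s+(\w+)", text)
    if not match:
        raise ValueError("could not find a '<measure> by <dimension>' pattern")
    measure, dimension = match.group(1), match.group(2)
    return {
        "mark": mark,
        "encoding": {
            "x": {"field": dimension, "type": "nominal"},
            "y": {"field": measure, "aggregate": agg, "type": "quantitative"},
        },
    }

print(parse_to_spec("Show the average price by region as a bar chart"))
```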