
Assembling the jigsaw: How multiple open standards are synergistically combined in the HALEF multimodal dialog system

Vikram Ramanarayanan†, David Suendermann-Oeft†, Patrick Lange†, Robert Mundkowsky‡, Alexei V. Ivanov†, Zhou Yu∗, Yao Qian† & Keelan Evanini‡

† Educational Testing Service (ETS) R&D, San Francisco, CA
‡ Educational Testing Service (ETS) R&D, Princeton, NJ
∗ Carnegie Mellon University, Pittsburgh, PA
Email: <vramanarayanan>@ets.org

As dialog systems become increasingly multimodal and distributed in nature with advances in technology and computing power, they become that much more complicated to design and implement. However, open industry and W3C standards provide a silver lining here, allowing the distributed design of different components that are nonetheless compliant with each other. In this chapter we examine how an open-source, modular, multimodal dialog system – HALEF – can be seamlessly assembled, much like a jigsaw puzzle, by putting together multiple distributed components that are compliant with W3C recommendations or other open industry standards. We highlight the specific standards that HALEF currently uses, along with a perspective on other useful standards that could be included in the future. HALEF has an open codebase to encourage progressive community contribution and to serve as a common standard testbed for multimodal dialog system development and benchmarking.

1 Introduction

Dialog systems nowadays are becoming increasingly multimodal. In other words, dialog applications, which started off mostly based on voice and text [15], have increasingly started to encompass other input-output (I/O) modalities such as video [3], gesture [18, 3], electronic ink [11, 10], avatars or virtual agents [6, 27, 26], and even embodied agents such as robots [7, 29], among others. While the integration of such technologies provides a more immersive and natural experience for users and enables an analysis of their non-verbal behaviors, it also makes the design of such multimodal dialog systems more complicated. This is because, among other things, one needs to ensure a seamless user experience without any reduction in quality of service – this includes issues such as latency, accuracy and sensitivity – while transporting data between each of these multimodal (and possibly disparate) I/O endpoints and the dialog system. In addition, dialog systems consist of multiple subsystems; for example, automatic speech recognizers (ASRs), spoken language understanding (SLU) modules, dialog managers (DMs) and speech synthesizers, among others, interacting synergistically and often in real time. Each of these subsystems is complex and brings with it design challenges and open research questions in its own right. As a result, the development of such multi-component systems capable of handling a large number of calls is typically undertaken only by large industrial companies and a handful of academic research labs, since it requires the maintenance of multiple individual subsystems [5]. In such scenarios, it is essential to have industry-standard protocols and specification languages that ensure interoperability and compatibility of different services, irrespective of who designed them or how they were implemented. Designing systems that adhere to such standards also allows generalization and accessibility of contributions from a large number of developers across the globe.
The popularity of commercial telephony-based spoken dialog systems – also known as interactive voice response (IVR) systems – especially in automating customer service transactions in the late 1990s, drove industry developers to start working on standards for such systems [19]. As a core component of an IVR, the voice browser – essentially responsible for interpreting the dialog flow while simultaneously orchestrating all the necessary resources such as speech recognition, synthesis and telephony – was one of the early components subject to standardization, resulting in the VoiceXML standard dating back to 1999 (http://www.w3.org/TR/2000/NOTE-voicexml-20000505); see Section 4.1.1 for more details on VoiceXML. Since the vast majority of authors responsible for creating standards such as VoiceXML come from industry, most implementations of spoken dialog systems adhering to these standards are commercial, proprietary, and closed-source applications. Examples of voice browser implementations include:

• Voxeo Prophecy (https://voxeo.com/prophecy/)
• TellMe Studio (https://studio.tellme.com/)
• Plum DEV (http://www.plumvoice.com/products/plum-d-e-v/)
• Cisco Unified Customer Voice Portal (http://www.cisco.com/c/en/us/products/customer-collaboration/unified-customer-voice-portal)
• Avaya Voice Portal (https://support.avaya.com/products/P0979/voice-portal)

In addition to over 20 commercial closed-source voice browsers (a comprehensive list is available at https://www.w3.org/Voice/voice-implementations.html), we are aware of a single open-source implementation that has been actively developed over the past few years:

• JVoiceXML (https://github.com/JVoiceXML/JVoiceXML)

We adopted this voice browser for the creation of the multimodal spoken dialog system HALEF (Help Assistant–Language-Enabled and Free), which serves as an example of a standards-based architecture in this chapter.

Note that in addition to industrial implementations of spoken and multimodal dialog systems, there exists an active academic community engaging in research on such systems. Prominent examples include:

• CMU's Olympus [4]
• Alex (https://github.com/UFAL-DSG/alex), developed at Charles University in Prague [12]
• InproTK (https://bitbucket.org/inpro/inprotk), an incremental spoken dialog system
• OpenDial (http://www.opendial-toolkit.net)
• the Virtual Human Toolkit [9]
• Metalogue (http://www.metalogue.eu), a multimodal dialog system
• IrisTK (http://www.iristk.net), a multimodal dialog system

Many of these examples, along with other (multimodal) dialog systems developed by the academic community, are built around very specific research objectives. For example, Metalogue provides a multimodal agent with metacognitive capabilities; InproTK was developed mainly for investigating the impact of incremental speech processing on the naturalness of human-machine conversations; and OpenDial allows one to compare the traditional MDP/POMDP (Markov Decision Process / Partially Observable Markov Decision Process) dialog management paradigm with structured probabilistic modelling [14]. Due to their particular foci, these systems often use special architectures, interfaces, and languages, paying little attention to existing speech and multimodal standards (e.g., see the discussions in [2]). For example, none of the above research systems implements VoiceXML, MRCPv2, or EMMA (see Section 4 for more details on these standards).

In this chapter, we describe a system that was designed to bridge the gap between the industrial demand for standardization and the openness, community engagement, and extensibility required by the scientific community. This system, HALEF, is an open-source cloud-based multimodal dialog system that can be used with different plug-and-play back-end application modules [22, 25, 30].
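As a minimal sketch of what such standard compliance looks like in practice, consider the following VoiceXML 2.1 fragment of the kind a voice browser such as JVoiceXML interprets: the prompt is rendered by the speech synthesizer, the referenced grammar constrains the speech recognizer, and the filled block encodes the routing logic. The prompt text, the grammar file yes_no.jsgf, its MIME type, and the target document interview.vxml are illustrative placeholders rather than excerpts from an actual HALEF application.

  <?xml version="1.0" encoding="UTF-8"?>
  <vxml version="2.1" xmlns="http://www.w3.org/2001/vxml">
    <form id="confirm">
      <field name="answer">
        <!-- System prompt, rendered by the speech synthesizer -->
        <prompt>Would you like to begin the interview?</prompt>
        <!-- Grammar constraining what the speech recognizer listens for (hypothetical file) -->
        <grammar src="yes_no.jsgf" type="application/x-jsgf"/>
        <filled>
          <!-- Routing logic based on the recognized and interpreted answer -->
          <if cond="answer == 'yes'">
            <goto next="interview.vxml"/>
          <else/>
            <prompt>Okay, goodbye.</prompt>
            <exit/>
          </if>
        </filled>
      </field>
    </form>
  </vxml>

Because such a document is nothing more than standard-conformant markup, any compliant voice browser can interpret it, which is precisely the interoperability argument made above.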
In the following sections, we will first describe the overall architecture of HALEF (Section 2), including its operational flow explaining how multimodal interactions are carried out (Section 3). We will then review major components of multimodal dialog systems that have previously been subject to intensive standardization activity by the international community and discuss to what extent these standards are reflected in the HALEF framework. These include:

• standards for dialog specification describing system prompts, use of speech recognition and interpretation, telephony functions, routing logic, etc. (primarily VoiceXML), see Section 4.1.1 (also see the chapter on this topic in this book [1]);
• standards controlling properties of the speech recognizer, primarily grammars, statistical language models, and semantic interpretation (e.g., JSGF, SRGS, ARPA, WFST, SISR), see Section 4.1.2;
• standards controlling properties of the speech synthesizer (primarily SSML);
• standards controlling the communication between the components of the multimodal dialog system (SIP, MRCPv2, WebRTC, EMMA), see Section 4.2;
• standards describing the dialog flow and how modalities interact (SCXML, EMMA), see Section 5.

2 The HALEF Dialog System

The multimodal HALEF framework [22, 25, 30] is composed of the following distributed open-source modules (see Figure 1 for a schematic overview):

• Telephony servers – Asterisk [16] and Freeswitch [17] – that are compatible with SIP (Session Initiation Protocol), PSTN (Public Switched Telephone Network) and WebRTC (Web Real-Time Communications) standards, and include support for voice and video communication.
• A voice browser – JVoiceXML [23] – that is compatible with VoiceXML 2.1, can process SIP traffic via a voice browser interface called Zanzibar [21], and incorporates support for multiple grammar standards such as JSGF (Java Speech Grammar Format), ARPA (Advanced Research Projects Agency), and WFST (Weighted Finite State Transducer), which are described in Section 4.1.2.
• An MRCPv2 (Media Resource Control Protocol Version 2) speech server – which allows the voice browser to control media processing resources such as speech recorders, speech recognizers or speech synthesizers over