Large Scale Integrating Project

Grant Agreement no.: 257899

D7.1 – Multimodal interaction report and specification

SMART VORTEX –WP7-D7.1

Project Number: FP7-ICT-257899
Due Date: 2012-09-30
Actual Date: 2012-09-12
Document Author/s: Stefan Radomski, Dirk Schnelle-Walka
Version: 1.1
Dissemination level: Confidential
Status: Restricted draft, M24
Contributing Sub-project and Work package: All partners in the project
Document approved by: RTDC

Co-funded by the European Union

Document Version Control

Version  Date        Change Made (and if appropriate reason for change)  Initials of Commentator(s) or Author(s)
0.2      07.08.2012  Initial structural draft                            TUD
1.1      27.09.2012  Included RTDC comments                              TUD

Document Change Commentator or Author

Author Initials  Name of Author       Institution
SR               Stefan Radomski      TUD
DS-W             Dirk Schnelle-Walka  TUD

Document Quality Control

Version  QA Date     Comments (and if appropriate reason for change)  Initials of QA Person
1.1      30/09/2012  Minor changes                                    IK

This document contains information which is given under the Non-Disclosure Agreement of the SmartVortex consortium. Data in the tables shown in this document are confidential and cannot be used for any publication or scientific paper without explicit permission of the contributing company. Distribution of these data outside the SmartVortex Consortium members is forbidden and is seen as a violation of the “Non-Disclosure Agreement”.


Catalogue Entry

Title Multimodal Interaction Report and Specification (1st Restricted Draft)

Creators Stefan Radomski

Subject Deliverable SmartVortex D7.1

Description

Publisher

Contributor

Date September 30th 2012

ISBN

Type Delivery SmartVortex

Format

Language English

Rights SmartVortex Consortium

Citation Guidelines


EXECUTIVE SUMMARY

WP7 started in month 7 of the project, and this document describes the ongoing work on multi-modal interfaces in the scope of WP7 within Smart Vortex. A framework for implementing multi-modal interfaces following the recommendations of the W3C Multimodal Interaction Working Group is outlined, and its application within Smart Vortex to achieve the objectives of WP7 is described. The work in the upcoming year 3 will focus on adding modalities to the framework, implementing ISP-specific requirements, and closer coordination and integration with related work packages.


TABLE OF CONTENTS

EXECUTIVE SUMMARY
TABLE OF CONTENTS
1 Introduction
  1.1 Objectives of WP 7
  1.2 Approach
2 The Multimodal Interaction Framework
  2.1 Multimodal Dialog Control
  2.2 Modality Specific Components
    2.2.1 Graphical / Text
    2.2.2 Spatial Audio
    2.2.3 3D Data
    2.2.4 Speech
    2.2.5 Notifications
    2.2.6 Multi-Touch Input
    2.2.7 Location
    2.2.8 The Confero Suite from Alkit
    2.2.9 Visual Query Editor
    2.2.10 Additional Components
  2.3 The Event Bus
3 Current State and Roadmap
4 Key Performance Indicators
5 References


1 INTRODUCTION

1.1 Objectives of WP 7

The work in this package is concerned with enabling natural interaction for querying data streams, in order to support a collaborative decision process. Here, natural refers to offering multi-modal interfaces that allow users to choose the input and output modalities best suited for the current context, or to have the system select suitable modalities based on meta-data, available devices and other sensors.

The red outline in figure 1 shows WP6 along with WP7 in the overall layered architecture of Smart Vortex from deliverable 2.3. While WP6 focuses on graphical query editing, WP7 will provide multi-modal interfaces for those queries.

Figure 1: Contributions of WP6 and WP7 within the layered Smart Vortex architecture.

1.2 Approach

In a first step, we implemented several isolated applications, employing various modalities to work with data streams similar to those expected in Smart Vortex. Some of these applications were delivered to project partners, others were demonstrated at the consortium meetings and workshops in order to get early user feedback and suggestions for subsequent iterations. These applications also helped to clarify requirements, demonstrated technical feasibility and form part of the basis for our future work within the project. They are described below as part of the modality specific components we will provide for Smart Vortex.

In order to integrate the various modalities and to offer a unified approach for employing multi-modal interfaces, we chose to integrate the isolated components in a coherent framework that will be part of the Smart Vortex suite.


The required capabilities and architecture of such a framework have been the subject of extensive research since the original “Put That There” multi-modal application from Bolt in 1980 [1]. A multitude of approaches were proposed in subsequent work [2], and in 2002 the W3C formed the “Multimodal Interaction Working Group” to standardize multi-modal application development in the scope of the W3C MMI framework [3]. In August 2012, the group proposed the W3C MMI architecture as a recommendation [4].

The framework for multi-modal interfaces developed as part of Smart Vortex closely follows the recommendations of this working group, with custom adaptations to work with data streams, ISP-specific interface requirements and support for collaboration. By following this approach, we expect to provide a coherent way to model adaptive multi-modal interfaces, with reusability of components between partners and applications and standardized extension points for future work. Furthermore, our experiences with employing the W3C MMI framework as part of Smart Vortex can help to identify additional requirements for future iterations of the W3C recommendations.

We will implement the multi-modal specific functionality of the ISP demonstrators as applications within this framework.


2 THE MULTIMODAL INTERACTION FRAMEWORK

The MMI framework developed for Smart Vortex closely follows the recommendations of the W3C working group for a Multimodal Interaction Framework (W3C MMI framework) and its related recommendations. Within the W3C MMI framework, a multi-modal application is described as a set of loosely coupled modality components (MC), coordinated by interaction managers (IM), nested in a tree-like data structure (see figure 2). Communication among the components is achieved via a technology-agnostic event bus with defined life-cycle events and application-specific data.

Figure 2: Coarse collaboration diagram in the W3C MMI Framework.

MCs provide access to user input and system output on different levels of abstraction. At the lowest level, MCs provide access to modality specific input and output components. That might be access to actual hardware or to available modality specific interpreters (e.g. an HTML browser). Input from these MCs is processed in a chain of nested MC/IM components until it reaches an uppermost IM, where the input is transformed into an abstract system output. This output representation is then, again, processed and concretized by a chain of nested MC/IM components until it reaches a set of MCs at the lowest level, where the modality specific output is rendered.

Within Smart Vortex, streamed data from the DSMS will be available via a set of special MCs, registering queries at the DSMS and making the streamed data available for processing and, ultimately, for the user interfaces modeled in the framework.

Figure 3 shows a general reference architecture for multi-modal interfaces, refined from several architectures proposed in research. By organizing the various responsibilities related to input, fusion, dialog control, fission and output as a set of nested MC/IM components, it becomes clear how the W3C approach can be applied to arbitrary multi-modal interfaces. Figure 3 also identifies available W3C standards related to the various stages of processing.


[Figure 3 depicts a five-stage pipeline: Input (text/keyboard, speech/microphone, writing/pen, external systems), Fusion (feature fusion, semantic fusion, integration), Dialog Manager (dialog control, knowledge), Fission (modality selection, content generation, synchronization) and Output (text and video on a monitor, audio and voice via speakers), annotated with the available W3C standards VoiceXML, XHTML, SRGS, SISR, InkML, EMMA, SCXML, CCXML, CC/PP, SMIL, SSML, PLS and EmotionML.]

Figure 3: MMI Reference Architecture and available W3C standards.

2.1 Multimodal Dialog Control

The actual modality components need to be coordinated to arrive at a coherent, user-observable system behavior. This responsibility is referred to as multi-modal “Dialog Control” (or “Dialog Management”) and is performed by the IMs. In the general reference architecture from figure 3, only the topmost/central IM is shown to be concerned with dialog management. The W3C architecture explicitly allows each nested MC/IM component to perform responsibilities in this regard. By integrating streamed data as another input modality, all the concepts related to dialog management are made available to work with streams.

Within related research, different approaches for dialog management were proposed, from simple state machines to rule-based approaches to collaborating agents. The more elaborate approaches have the problem that the system behavior is not obvious and that, e.g., the introduction of new rules leads to complexities and opaque side effects. This is acknowledged by the W3C's proposal to use simpler means for dialog management. In the W3C MMI framework, SCXML [5] is the prime candidate for dialog management within the IMs.

An SCXML document describes a finite state machine with nested and parallel states as described by Harel and Politi [7]. By extending simple state machines with these concepts, a more compact representation of the usually very verbose descriptions is achieved while retaining the obvious system behavior. Whenever a state is entered or exited, or a transition is taken, an SCXML interpreter can execute associated actions; within the standard, this most prominently means sending events or invoking services. These services can be other IMs as additional SCXML instances, nested MC/IM components, modality specific interpreters or anything else the platform supports (e.g. data streams). Transitions are guarded by conditions and events. Conditions can use arbitrary boolean expressions regarding the state of the application contained in a data model attached to an SCXML instance. Events can be raised internally or received externally from other components. For the data model, ECMAScript is suggested, as it provides a Turing-complete addition to augment the dialog management and many developers are familiar with its concepts, e.g. from dynamic HTML documents.
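To illustrate, the following is a minimal sketch of such a dialog description, assuming a hypothetical pump.temperature event raised by a stream MC and a hypothetical notification MC; all state, event and target names are illustrative only:

    <scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0"
           datamodel="ecmascript" initial="monitoring">
      <datamodel>
        <!-- threshold kept in the ECMAScript data model -->
        <data id="maxTemp" expr="90"/>
      </datamodel>
      <state id="monitoring">
        <!-- hypothetical event raised by a stream MC, carrying the current reading -->
        <transition event="pump.temperature" cond="_event.data.value &gt; maxTemp"
                    target="alerting"/>
      </state>
      <state id="alerting">
        <onentry>
          <!-- addressing of the notification MC is platform-specific and illustrative -->
          <send event="notify.user" target="#notificationMC"/>
        </onentry>
        <transition event="notify.acknowledged" target="monitoring"/>
      </state>
    </scxml>

The guard condition accesses the event payload through the ECMAScript data model, while the transition structure keeps the resulting system behavior obvious.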

An interpreter for SCXML documents can be regarded as a multi-modal browser, integrating input and output from various modality components into a coherent multi-modal application. TUD provides such an SCXML interpreter to coordinate the modality components described below. The SCXML interpreter will be available as a stand-alone browser, embedded in native applications, and as an integrated component in the Smart Vortex dashboard (see below).

The benefit of using W3C standards to express the multi-modal interface behavior is the reduced effort to integrate other components developed within the W3C MMI framework and the availability of third-party tools to help with authoring and adapting SCXML documents (e.g. graphical editors). This ultimately enables end-user programming to define new and adapt existing multi-modal interfaces.

2.2 Modality Specific Components

Most recommendations applicable to MCs in the W3C MMI framework are concerned with graphical or spoken modalities, as related work from the W3C in these areas is most pronounced. Other, more specialized MCs applicable within Smart Vortex, e.g. for multi-touch input, spatial audio output or video, remain largely unspecified. As the W3C MMI framework only specifies the actual interface on the level of life-cycle events and message delivery, it is no problem to introduce application-specific messages for custom components while retaining interoperability and reusability within the Smart Vortex suite. As other aspects become standardized, the respective components can be adapted to conform to the recommendations if desired. In the following sections, we briefly introduce the modality components, either to be realized as simple MCs or as nested MC/IM components, for which TUD will provide implementations. While the list may seem extensive, we already implemented many of the modalities within prototypical demo applications and it is only a matter of integrating them into the Smart Vortex MMI framework.

2.2.1 Graphical / Text

The foremost W3C recommendation to render graphical content in most of its varieties is (X)HTML [8]. The consortium agreed at its meeting in Gothenburg, and again at the PCA3 meeting in Rome this year, to use (X)HTML in the context of the dashboard to provide centralized access to the whole Smart Vortex suite. Within the dashboard, small portlets realize the graphical user interface for most of the Smart Vortex tools.

The MMI framework will integrate these portlets as MCs to render graphical output and provide text/form input. Furthermore, an administrative portlet will be available to upload SCXML documents to provide the means to define new and edit existing multi-modal interfaces within Smart Vortex (see figure 4). This way, integration into the central dashboard is achieved while enabling the possibility to employ other modalities, even from within the dashboard.

Figure 4: Portlets for MMI administration (mockup) and an MC for graphical HTML output.


2.2.2 Spatial Audio

Audio is an important modality for delivering information about the state of queries, for listening in to microphones attached to remote equipment, and for notifications in general. It is ubiquitously available and most often underutilized. We built preliminary prototypes employing spatial audio on stereo and Dolby X.1 systems with up to eight speakers to provide the illusion of audio arriving from different directions, with convincing results. These implementations were already integrated into the Confero suite from Alkit and will be part of a modality component to render spatial audio as part of the MMI framework.

Spatial audio is very useful in conjunction with map displays for geo-referenced data streams, with the center of the map being the location of the listener and audio renderings from surrounding sensors arriving from their respective, relative locations. Mobile users can also benefit from spatial audio, with their actual location taken into account to render audio from surrounding data streams. It can give an indication of the health of surrounding sensors and whether it is necessary to engage in a problem-solving session. We implemented geo-referenced spatial audio both for desktop and mobile devices with OpenAL and will provide such a component as part of the Smart Vortex suite.

There is no explicit standard for spatial audio. In general, RTP/RTCP is available, but does not take spatial alignment into account. The component integrated into Confero uses RTP/RTCP with special RTCP or XML messages to update the location of an arriving RTP stream.
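As an illustration only, such a location update could look roughly like the following sketch; the element and attribute names (streamLocation, position, geo) are hypothetical and not taken from any standard:

    <!-- hypothetical location update for an arriving RTP stream; all names are illustrative -->
    <streamLocation ssrc="0x1A2B3C4D">
      <!-- position of the sound source relative to the listener, in metres -->
      <position x="4.5" y="0.0" z="-2.0"/>
      <!-- optional geo-reference for map-centred rendering -->
      <geo lat="57.7089" lon="11.9746"/>
    </streamLocation>

Such a message would accompany the regular RTP media packets and only needs to be re-sent when the source or the listener moves.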

2.2.3 3D Data

Especially for the use-case with FE-Design, displaying three-dimensional data is beneficial to visualize the set of iterations from the finite element computations. We developed an isolated collaborative 3D viewer, where participants can browse through the 3D representations and share their view of problematic areas (see figure 5). Other clients can subscribe to a colleague's view or navigate the 3D models on their own. This application was already delivered to FE-Design in order to get early user feedback.

Figure 5: Two collaborative 3D Viewers for FEM models from a session with three participants.

The application also integrates into the dashboard by providing a portlet to render and navigate 3D screenshots (see figure 6), but is not yet integrated as a modality component into the overall MMI framework.


Figure 6: Integration of 3D Models into the Dashboard

VRML is available as a standard to express such models and the application for FE-Design already processes VRML data among other formats.

2.2.4 Speech

The VoiceXML 2.1 recommendation [9] provides a mature and established standard for voice user interfaces, and with JVoiceXML, an open-source VoiceXML interpreter maintained by TUD is available. While it is no problem to integrate voice user interfaces into the MMI framework, the reliability of such interfaces for input remains problematic, especially given the different languages spoken in the expected target deployments. It is, nevertheless, beneficial to support spoken output for notifications. Therefore, the Smart Vortex suite will provide the means to trigger spoken output as part of its MMI framework.
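As a minimal sketch, a VoiceXML document rendered by a voice MC for such a spoken notification could look like the following; the prompt text is illustrative only:

    <?xml version="1.0" encoding="UTF-8"?>
    <vxml xmlns="http://www.w3.org/2001/vxml" version="2.1" xml:lang="en-GB">
      <form id="notification">
        <block>
          <!-- pure text-to-speech output; no speech recognition involved -->
          <prompt>Warning: pump three reports a temperature above the configured limit.</prompt>
        </block>
      </form>
    </vxml>

Restricting the voice modality to output of this kind avoids the recognition reliability issues noted above while still exploiting the audio channel for notifications.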

We already developed voice user interfaces in the context of the W3C MMI framework1 but have yet to integrate them into the Smart Vortex suite.

2.2.5 Notifications

Sending notifications is one area where the benefit of multi-modality is most obvious. A multi-modal application within the framework can just issue a notification to a user or a group in the system and the respective component will determine the best way to deliver the notification. Actual delivery can then be achieved by using e-mail, a browser pop-up in the dashboard or instant messaging. We will provide a modality component to notify users and groups as part of the MMI framework.
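As a sketch of how such a notification request might travel between components, the following assumes the extension notification of the W3C MMI life-cycle events [4] together with a hypothetical application-specific payload (the notification element and its attributes are illustrative):

    <mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
      <mmi:extensionNotification source="IM-dialog" target="MC-notification"
                                 context="ctx-7" requestID="req-42" name="notifyUser">
        <mmi:data>
          <!-- hypothetical application-specific payload -->
          <notification group="maintenance" priority="high">
            Pump 3 exceeds its temperature limit.
          </notification>
        </mmi:data>
      </mmi:extensionNotification>
    </mmi:mmi>

The notification MC receiving this message would then decide whether e-mail, a dashboard pop-up or instant messaging is the most appropriate delivery channel.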

2.2.6 Multi-Touch Input

With the advent of smart phones, multi-touch input became popular. Today, multi-touch is available on all mobile devices and an ever-increasing number of desktop environments. We already implemented applications employing multi-touch input (e.g. as part of the mobile client for the 3D viewer at FE-Design; a video of an early technology demo is available2) but have yet to integrate this modality into the MMI framework for Smart Vortex.

2.2.7 Location

While not strictly a modality, having the location of a user available while managing a multi-modal dialog can be beneficial for various aspects of the interaction. We have already developed isolated, location-aware applications and will make this information available as part of the MMI framework.

1 http://www.youtube.com/watch?v=edXjU5ZVVnM
2 http://www.youtube.com/watch?v=GrZA2ONhYb8

2.2.8 The Confero Suite from Alkit

Alkit provided their Confero Suite, a set of integrated tools for tele-presence, to the Smart Vortex consortium. We had an informal TUD/Alkit workshop in June 2012 in Gothenburg, where we evaluated possible approaches to integrate the Confero Suite with other tools within the Smart Vortex suite. As a result, Alkit agreed to make parts of the Confero Suite available via their new Miles middleware, and we will, in turn, make some of its functionality available for multi-modal interfaces via the MMI framework. This has already taken shape by integrating spatial audio from Miles into the multi-modal browser.

2.2.9 Visual Query Editor

Within WP6, Rome is developing the Visual Query Editor (see deliverable 6.2). While it is not yet integrated into the MMI framework, it is able to register queries in the DSMS after their translation from the visual representation (web widgets) into their textual SCSQL form. So far, queries are only produced through keyboard and mouse via a web-based user interface, but we will investigate other modalities as well. The predominant output modality is graphical, because it is most suited for most of the types of data considered. Other output MCs shall be considered as well, according to the data type and the visualization need (e.g. notifications when possibly incorrect system behavior is detected, or 3D data for the FE-Design use case).

2.2.10 Additional Components

In order to provide extension points for additional modality components, we will provide templates for implementations in C/C++ and Java. By extending or implementing these templates, additional modalities can be made available within the MMI framework.

One possible extension is the inclusion of gestures to be made available for dialog modeling. We implemented several isolated applications using the Microsoft Kinect platform, but do not see an application as part of the ISPs as of yet.

2.3 The Event Bus

The W3C MMI framework assumes the existence of an event bus for IMs and MCs to exchange life-cycle events and application-specific messages. Generally, HTTP is available to realize message exchange between endpoints, but it does not provide the functionality required for ad-hoc discovery and the related adaptivity of user interfaces.
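For illustration, a life-cycle event on the bus could look roughly as follows, based on our reading of the XML serialization in the proposed recommendation [4]; the identifiers and the content URL are illustrative:

    <mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
      <!-- an IM asks a graphical MC to start rendering a document within context ctx-7 -->
      <mmi:startRequest source="IM-dialog" target="MC-html"
                        context="ctx-7" requestID="req-1">
        <mmi:contentURL href="http://dashboard.example.org/portlets/query-status.xhtml"/>
      </mmi:startRequest>
    </mmi:mmi>

How such a message is transported between endpoints is exactly what the choice of HTTP or umundo, discussed below, determines.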

Another applicable technology is Mundo [6], a platform-independent pub/sub middleware with support for node discovery, object serialization and remote procedure calls, developed as part of the earlier Smart Products EU project at TUD. We employed an adapted version of this middleware called umundo for most of our prototypes and are making it available to the consortium. While the actual middleware is custom-built, all of the employed technologies are established or even standardized. We elaborate on the benefits in deliverable 7.2.

For the MMI framework in Smart Vortex, we will use a two-fold approach of supporting HTTP for interoperability while transparently using umundo if both endpoints support it.


3 CURRENT STATE AND ROADMAP

Most of the modalities described above that we are about to include in the Smart Vortex MMI framework already exist as isolated technical demonstrators; some are integrated into work from project partners or even deployed as part of early-access prototypes.

The SCXML interpreter, as the core of the MMI framework to control the various modalities in an application, exists as a stand-alone interpreter, with only spatial audio available as of now. We are confident that we can demonstrate a first working application modeled with the framework at the second review meeting in Uppsala. For a more detailed discussion of the services and tools available, refer to deliverable 7.2.

The MMI framework will form the basis for all our future work within Smart Vortex. In the coming year, we will focus on adding modalities to more explicitly support the use-cases of the ISP demonstrators and on implementing additional requirements, especially in the context of working with data streams and query manipulation. Furthermore, we will strengthen our ambitions regarding integration with the other work packages.


4 KEY PERFORMANCE INDICATORS

The following table lists the key performance indicators for WP7 from deliverable 1.2. We will briefly outline how we plan to achieve success using these metrics in the scope of the proposed framework as applicable.

Title of KPI: Multi-modal user interface quality and usability

Defined:
1. Responsiveness: ability of the environment to perform (near) real-time interaction modality recognition
2. User adaptability and feedback: ability of the environment to adapt the interaction modality to users' needs and requirements, as well as to provide feedback about the correctness of the provided input (voice, gestures, etc.)
3. Learnability: learning rate and ability of the user to perform and remember available actions provided by the environment, to be weighted with respect to the complexity of the task, user experience and user cognitive and technical skills
4. Accuracy: ability of the environment to detect, track and recognize user inputs, actions and commands
5. Intuitiveness: ability of the environment to provide a clear cognitive association between available commands and the functions or actions they correspond to
6. Cognitive and physical load: mental and physical load required of the user to perform intended tasks (with respect to user abilities and skills)
7. Re-configurability: effort and time required to train the environment to support different types of users and interaction modalities

Measured: Time spent
Target: Minimize

The first measure is related to the choice of technologies we detail in deliverable 7.2. For most of our components, we chose C/C++ as the programming language in order to have direct access to native APIs for the various modalities and related implementations. An important issue with regard to interaction is the start-up time of components, e.g. measured in milliseconds to first interaction. This is an area where C/C++ still provides the best results, as no intermediate interpreter needs to be loaded.

The second KPI is related, but focuses on the adaptivity of multi-modal interfaces with regard to available devices and their associated input and output capabilities, to reduce errors and increase efficiency. The SCXML interpreter will support distributed interfaces via the event bus.

The 3rd measure is a function of the actual applications modeled in the framework and part of the research in the upcoming year. We hope to achieve success regarding this KPI by researching and providing multi-modal interaction metaphors suited to stream processing and query manipulation.

The ability of the interaction environment to detect and recognize user input in the 4th KPI is related to the employed modalities. While some modalities provide virtually error-free recognition of the user's intent, others rely on error correction and context to achieve satisfactory recognition rates. By making additional context available to the multi-modal browser via streams and integration with other work packages, we aim to reduce recognition errors or dynamically fall back to modalities with better recognition rates.

The intuitiveness of the 5th KPI is, again, related to the 3rd KPI, as it is primarily the suitability of the metaphors for multi-modal interfaces to streamed data that will determine the intuitiveness.


The 6th KPI measures the overall effort required to use the provided interfaces and can be regarded as a catch-all measure, as deficiencies in any of the other KPIs will diminish it. For the 7th and last KPI, our approach of providing an MMI framework is very beneficial. Adaptations to the interfaces can be performed dynamically at runtime, or explicitly by end-users or contractors through edited or new UI descriptions in SCXML.


5 REFERENCES

1. Bolt, R.A.: ”put-that-there”: Voice and gesture at the graphics interface. In: Proceedings of the 7th annual conference on Computer graphics and interactive techniques, SIGGRAPH ’80, pp. 262–270. ACM, New York, NY, USA (1980)

2. Dumas, B.: Multimodal interfaces: A survey of principles, models and frameworks. Human Machine Interaction, pp. 1–25 (2009). URL http://www.springerlink.com/index/65J39M5P56341N49.pdf

3. Larson, J.A., Raman, T., Raggett, D., Bodell, M., Johnston, M., Kumar, S., Potter, S., Waters, K.: Multimodal Interaction Framework, W3C Note. http://www.w3.org/TR/2003/NOTE-mmi-framework-20030506/ (2003)

4. Barnett, J., Bodell, M., Dahl, D., Kliche, I., Larson, J., Porter, B., Raggett, D., Raman, T., Rodriguez, B.H., Selvaraj, M., Tumuluri, R., Wahbe, A., Wiechno, P., Yudkowsky, M.: Multimodal Architecture and Interfaces, W3C Proposed Recommendation. http://www.w3.org/TR/2012/PR-mmi-arch-20120814/ (2012)

5. Barnett, J., Akolkar, R., Auburn, R., Bodell, M., Burnett, D.C., Carter, J., McGlashan, S., Lager, T., Helbing, M., Hosn, R., Raman, T., Reifenrath, K., Rosenthal, N.: State Chart XML (SCXML): State Machine Notation for Control Abstraction, W3C Working Draft. http://www.w3.org/TR/2012/WD-scxml-20120216/ (2012)

6. Aitenbichler, E., Kangasharju, J., Mühlhäuser, M.: MundoCore: A Light-weight Infrastructure for Pervasive Computing. Pervasive and Mobile Computing, pp. 332–361 (2007). doi:10.1016/j.pmcj.2007.04.002

7. Harel, D., Politi, M.: Modeling Reactive Systems with Statecharts: The Statemate Approach. McGraw-Hill, Inc. (1998)

8. McCarron, S., Ishikawa, M., Altheim, M.: XHTML 1.1 - Module-based XHTML - Second Edition, W3C Recommendation. http://www.w3.org/TR/2010/REC-xhtml11-20101123/ (2010)

9. Oshry, M., Auburn, R., Baggia, P., Bodell, M., Burke, D., Burnett, D.C., Candell, E., Carter, J., McGlashan, S., Lee, A., Porter, B., Rehor, K.: Voice Extensible Markup Language (VoiceXML) Version 2.1, W3C Recommendation. http://www.w3.org/TR/voicexml21/ (2007)
