Proceedings of ICAD 05 - Eleventh Meeting of the International Conference on Auditory Display, Limerick, Ireland, July 6-9, 2005

Lexical Semantics and Auditory Presentation in Virtual Storytelling

Research paper for the ICAD05 workshop "Combining Speech and Sound in the User Interface"

Minhua Ma and Paul Mc Kevitt
School of Computing & Intelligent Systems, Faculty of Engineering, University of Ulster, Magee Campus
Derry/Londonderry, BT48 7JL, Northern Ireland
{m.ma,[email protected]}

ABSTRACT

Audio presentation is an important modality in virtual storytelling. In this paper we present our work on audio presentation in our intelligent multimodal storytelling system, CONFUCIUS, which automatically generates 3D animation, speech, and non-speech audio from natural language sentences. We provide an overview of the system and describe speech and non-speech audio in virtual storytelling using linguistic approaches. We discuss several issues in auditory display, such as its relation to verb and adjective ontology, concepts and modalities, and media allocation. Finally, we conclude that introducing linguistic knowledge enables more intelligent virtual storytelling, especially in audio presentation.

1. INTRODUCTION

Multimodal virtual reality applications such as online games, virtual environments, and virtual storytelling increasingly demand the ability to render not only visual but also auditory scenes. A goal of our work is to create rich auditory environments that can augment 3D animation in virtual storytelling. This paper presents our work on auditory presentation in our intelligent multimodal storytelling system, CONFUCIUS, and proposes a linguistically-based approach to transforming written language into multimodal presentations, including speech and non-speech sounds. We believe that integrating linguistic knowledge can achieve more intelligent multimodal storytelling which best employs different modalities to present stories, and that the methodology we propose here can serve as a framework for researchers in auditory display.

First, in section 2 we introduce the background of this study, the intelligent multimodal storytelling system CONFUCIUS, and review the various kinds of nonspeech audio that could be used in virtual storytelling. Next, in section 3, a linguistically-based approach to auditory presentation is proposed, and we discuss several issues of this approach, such as the verb/adjective ontology for audio semantics. We then describe the auditory presentation of CONFUCIUS in section 4. Finally, section 5 compares our work to related research on virtual storytelling and summarizes the work with a discussion of possible future work.

2. BACKGROUND AND PREVIOUS WORK

We are developing an intelligent multimedia storytelling interpretation and presentation system called CONFUCIUS. It automatically generates 3D animation and speech from natural language sentences, as shown in Figure 1. The input of CONFUCIUS is sentences taken from children's stories such as "Alice in Wonderland" or play scripts for children. CONFUCIUS' multimodal output includes 3D animation with speech and nonspeech audio, and a presentation agent, Merlin the narrator. Our work on virtual storytelling has so far focussed on generating virtual human animation and speech, with particular emphasis on how to use visual and audio presentation to cover more verb classes.

Figure 1. Input and output of CONFUCIUS

Figure 2 shows the architecture of CONFUCIUS. The dashed part of the figure is the knowledge base, including language knowledge (lexicons and a syntax parser), which is used in the Natural Language Processing (NLP) module, and visual/audio knowledge such as 3D models of characters, props, and animations of actions, which encapsulate their nonspeech auditory information and are used in the animation engine. The surface transformer takes natural language sentences as input and manipulates the surface text. The NLP module uses language knowledge to parse sentences and analyse their semantics. The media allocator then generates an XML-based specification of the desired multimodal presentation and assigns content to three different media: animation and nonspeech audio, characters' speech, and narration, e.g. it sends the parts bracketed in quotation marks near a communication verb to the text-to-speech engine. The animation engine takes the semantic representation and uses visual knowledge to generate 3D animations. The animation engine and Text-to-Speech (TTS) synthesis operate in parallel, and their outputs are combined in the synchronizing module, which outputs a holistic 3D virtual world including animation and speech in VRML format. Finally, the narration integration module integrates the VRML file with the presentation agent, Merlin the Narrator, to complete a multimedia story presentation.

Figure 2. Architecture of CONFUCIUS
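To make the media allocation step concrete, the sketch below routes quoted direct speech occurring near a communication verb to a speech stream and leaves the remainder to narration and animation. This is a minimal illustration under our own simplifying assumptions: the function allocate_media, the verb list, and the dictionary output are hypothetical stand-ins, not CONFUCIUS' actual XML specification.

```python
import re

# Illustrative subset of communication verbs that signal direct speech.
COMMUNICATION_VERBS = {"say", "said", "says", "ask", "asked",
                       "reply", "replied", "shout", "shouted"}

def allocate_media(sentence):
    """Assign parts of a story sentence to three media:
    characters' speech (TTS), narration, and animation with nonspeech audio."""
    allocation = {"speech": [], "narration": "", "animation": sentence}

    # Route text bracketed in quotation marks near a communication verb
    # to the text-to-speech stream.
    remainder = sentence
    for quote in re.findall(r'"([^"]*)"', sentence):
        context = remainder.replace('"%s"' % quote, " ")
        words = {w.strip(".,!?").lower() for w in context.split()}
        if COMMUNICATION_VERBS & words:
            allocation["speech"].append(quote)
            remainder = context

    # Whatever is not direct speech is narrated.
    allocation["narration"] = " ".join(remainder.split())
    return allocation

print(allocate_media('"Oh dear! I shall be late!" said the White Rabbit.'))
# -> speech: ['Oh dear! I shall be late!'], narration: 'said the White Rabbit.'
```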
We use VRML to model 3D objects and virtual characters in our story world. VRML spares effort on media coordination, since its Sound node is responsible for describing how a sound is positioned and spatially presented within a scene. It can also describe a sound that fades away at a specified distance from the Sound node, via a ProximitySensor. This facility is useful in presenting non-speech sound effects in storytelling: it enables us to encapsulate sound effects within object models, e.g. to encapsulate the engine hum within a car model and hence locate the sound at whatever point the car occupies. The Sound node also brings the power to imbue a scene with ambient background noise or music.
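For example, the engine hum can be encapsulated in the car model as follows. This is a sketch only: the Sound and AudioClip fields shown are standard VRML97, but the asset file names and node names are placeholders of ours, and the distance fade here uses the Sound node's own min/max range ellipsoids rather than a ProximitySensor.

```python
# Sketch: wrap an engine-hum AudioClip in a Sound node inside the car's
# Transform, so the sound is emitted from wherever the car model is and
# attenuates to silence beyond the outer ellipsoid.
CAR_WITH_ENGINE_HUM = """#VRML V2.0 utf8
DEF Car Transform {
  children [
    Inline { url "car_body.wrl" }      # the car's visual geometry (placeholder)
    Sound {
      source AudioClip {
        url "engine_hum.wav"           # looping engine noise (placeholder)
        loop TRUE
      }
      location 0 0.5 0                 # emitter at the engine, in car coordinates
      minFront 2   minBack 2           # full volume within 2 m
      maxFront 20  maxBack 20          # fades to silence beyond 20 m
      spatialize TRUE                  # rendered as a located 3D sound
    }
  ]
}
"""

with open("car.wrl", "w") as f:
    f.write(CAR_WITH_ENGINE_HUM)
```

Because the Sound node travels with the car's Transform, animating the car automatically relocates its hum, which is exactly the media coordination VRML spares us from doing by hand.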
2.1. Nonspeech audio

The use of nonspeech audio to convey information in intelligent multimedia presentation is referred to in the human factors literature as auditory display. Besides basic advantages such as reducing visual clutter, avoiding visual overload, and not requiring focused attention, auditory displays have other benefits: detection times for auditory stimuli are shorter than for visual stimuli [1], and short-term memory for some auditory information is superior to short-term memory for visual information.

Current research in the use of nonspeech audio can generally be divided into two approaches. The first focuses on developing the theory and applications of specific techniques of auditory display: auditory icons, earcons, sonification, and music synthesis have dominated this line of research and are discussed in detail below. The second line of research examines the design of audio-only interfaces; much of this work is concerned with ...

... source events, i.e. we do not seem to hear sounds, but instead the sources of sound. Supposing that everyday listening is often the dominant mode of hearing sounds, Gaver argues that auditory displays should be built using real-world sounds. Theoretically, the advantage of auditory icons lies in the intuitiveness of the mapping between sounds and their meaning. Auditory icons accompanying daily-life events are a major source of nonspeech audio in CONFUCIUS. Certainly, the intuitiveness of this approach to auditory display will result in more vivid story presentation.

Earcons are melodic sounds, typically consisting of a small number of notes with musical pitch relations (Gaver 1989). They relate to computer objects, events, operations, or interactions by virtue of a mapping learned from experience. The basic idea of earcons is that, by taking advantage of sound dimensions such as pitch, timbre, and rhythm, information can be communicated to the user efficiently. Of the four basic techniques for auditory display, earcons have been used in the largest number of computer applications. The simplest earcons are auditory alarms and warning sounds, such as incoming e-mail notifications, program errors, and the low-battery alarm on mobile phones. The effectiveness of an earcon-based auditory display depends on how well the sounds are designed.

Sonification is the technique of translating multi-dimensional data directly into sound dimensions. Typically, sound parameters such as amplitude, frequency, attack time, timbre, and spatial location are used to represent system variables (Bly et al. 1987). The goal is to synthesize sound by translating data from one modality, perhaps a spatial or visual one, to the auditory modality. Sonification has been widely applied across different domains: synthesized sound has been used as an aid to data visualisation (especially of abstract quantitative data), for program comprehension, and for monitoring the performance of parallel programs (a minimal parameter-mapping sketch in this spirit appears at the end of this section).

In synthesized music, sounds are interpreted for consonance, rhythm, and melodic content, and hence are able to present more advanced information such as emotional content. Computer-based music composition began in the mid-1950s, when Lejaren Hiller and Leonard Isaacson conducted their first experiments with computer-generated music on the ILLIAC computer at the University of Illinois. They employed both a rule-based system utilising strict counterpoint and a probabilistic method based on Markov chains. The recent history of automated music and computers is densely populated with examples based on various theoretical rules from music theory and mathematics. Developments in such theories have added to the repertoire of intellectual technologies applicable to the computer. Amongst these are serial music techniques, the application of music grammars, sonification of fractals and chaos equations, and connectionist pattern recognition ...
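Returning to the sonification technique described above, the sketch below shows the classic parameter mapping: each value of a one-dimensional data series is rendered as a short tone whose frequency and amplitude scale with the value. The mapping ranges, note duration, and file name are arbitrary illustrative choices of ours, not taken from the paper.

```python
import math, struct, wave

def sonify(data, path="sonified.wav", rate=44100, note_dur=0.25):
    """Map each data value to a tone: larger values -> higher pitch and louder."""
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1.0
    frames = bytearray()
    for value in data:
        norm = (value - lo) / span                 # normalise to 0..1
        freq = 220.0 + norm * 660.0                # pitch mapped to 220-880 Hz
        amp = 0.2 + norm * 0.6                     # amplitude: quiet to loud
        for i in range(int(rate * note_dur)):
            sample = amp * math.sin(2.0 * math.pi * freq * i / rate)
            frames += struct.pack("<h", int(sample * 32767))
    with wave.open(path, "wb") as w:
        w.setnchannels(1)     # mono
        w.setsampwidth(2)     # 16-bit samples
        w.setframerate(rate)
        w.writeframes(bytes(frames))

sonify([3, 1, 4, 1, 5, 9, 2, 6])   # a toy series rendered as rising and falling tones
```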
