
Computer Graphics Technical Reports CG-2007-4

Improving the Aesthetic Quality of Realtime Motion Data Sonification

Christoph Henkelmann
Computer Science Dept. II, University of Bonn, Römerstr. 164, D-53117 Bonn, Germany
[email protected]

Institut für Informatik II
Universität Bonn
D-53117 Bonn, Germany

© Universität Bonn 2007
ISSN 1610-8892

CONTENTS

1. Introduction ............................................. 1
   1.1 Previous work ........................................ 3
   1.2 Overview ............................................. 5
   1.3 Used software ........................................ 6
2. The Basics ............................................... 9
   2.1 Definitions and Human Hearing ........................ 9
       2.1.1 Data and Audio Streams ......................... 9
       2.1.2 Frequency and Pitch ............................ 10
       2.1.3 Measures of Amplitude .......................... 11
       2.1.4 Timbre ......................................... 12
       2.1.5 Masking and Psychoacoustics .................... 13
   2.2 Sonification ......................................... 13
       2.2.1 Audification ................................... 13
       2.2.2 Earcons and Auditory Icons ..................... 14
       2.2.3 Audio Beacons .................................. 15
       2.2.4 Model-based Sonification ....................... 15
       2.2.5 Parameter Mapping Sonification ................. 15
   2.3 Summary .............................................. 16
3. Implementation of Realtime Sonification .................. 17
   3.1 Sonification and MIDI ................................ 17
   3.2 Sonification and OSC ................................. 20
   3.3 Choosing an Appropriate Sonification Tool ............ 21
   3.4 How Pd Works ......................................... 25
   3.5 Using Pd as a Sonification Tool ...................... 29
       3.5.1 Loading and Saving Settings .................... 29
       3.5.2 Scaling of Motion Data ......................... 33
       3.5.3 Audio Utilities ................................ 34
   3.6 Making Motion Data Available in Pd ................... 34
       3.6.1 Input from Files ............................... 34
       3.6.2 Input via TCP/IP ............................... 35
       3.6.3 Input from Rowing Machine Sensors .............. 36
   3.7 Summary .............................................. 38
4. Continuous Parameter Mapping Sonification ................ 39
   4.1 Audio Artifacts ...................................... 40
       4.1.1 Zipper Noise ................................... 40
       4.1.2 Foldover ....................................... 42
   4.2 Modulation of Pitch .................................. 43
       4.2.1 Formant Shift .................................. 43
       4.2.2 Problems with Musical Perception ............... 48
   4.3 Modulation of Amplitude .............................. 50
   4.4 Modulation of Timbre ................................. 51
       4.4.1 Subtractive Synthesis .......................... 52
       4.4.2 Waveshaping and Frequency Modulation ........... 53
       4.4.3 Formants & Vocal Sounds ........................ 61
       4.4.4 The Tristimulus Model .......................... 64
   4.5 Spatial Positioning .................................. 70
   4.6 Maintaining a Constant Amplitude ..................... 71
   4.7 Summary .............................................. 72
5. Applications ............................................. 75
   5.1 Rowing Motion Sonifications .......................... 76
   5.2 Walking Motion Sonifications ......................... 77
6. A More Musical Approach .................................. 81
   6.1 A “Musical” Sonification? ............................ 81
   6.2 Applying Sonification to Paradigms of Western European Music 82
   6.3 A Melodic Sonification ............................... 84
   6.4 Re-introducing Fine Grained Parameter Mapping ........ 86
   6.5 Creating Harmonic Progress ........................... 86
   6.6 Summary .............................................. 88
7. Further Prospects ........................................ 91
   7.1 MotionLab and OSC .................................... 91
   7.2 More Sound Based Methods ............................. 92
   7.3 Sophisticated Sound Design ........................... 92
   7.4 More General Methods for Audio Rendering ............. 93
   7.5 Need for Psychoacoustic Evaluation ................... 94
   7.6 Conclusion ........................................... 94

Appendix .................................................... 95

A. List of Pd abstractions and externals .................... 97
   A.1 arpeggiator .......................................... 97
   A.2 arpeggiator scale .................................... 98
   A.3 channel .............................................. 98
   A.4 clipping ............................................. 98
   A.5 crossfading loop sampler ............................. 98
   A.6 derivative ........................................... 99
   A.7 ergometer ............................................ 99
   A.8 ergometer input ...................................... 100
   A.9 file input ........................................... 100
   A.10 floatmap ............................................ 100
   A.11 fm1 ................................................. 101
   A.12 hold note ........................................... 101
   A.13 master .............................................. 101
   A.14 median .............................................. 102
   A.15 midi channel ........................................ 102
   A.16 norm mapping ........................................ 102
   A.17 paf ................................................. 103
   A.18 paf vowel ........................................... 103
   A.19 pink noise .......................................... 103
   A.20 reverb .............................................. 104
   A.21 sampleloop .......................................... 104
   A.22 sampleloop filter ................................... 104
   A.23 settings ............................................ 104
   A.24 sine pitch .......................................... 105
   A.25 sonify bend ......................................... 105
   A.26 sonify control ...................................... 105
   A.27 sonify note cont .................................... 105
   A.28 sonify note dis ..................................... 106
   A.29 sonify scale ........................................ 106
   A.30 ssymbol ............................................. 107
   A.31 subtractive1 ........................................ 107
   A.32 subtractive2 ........................................ 107
   A.33 subtractive3 ........................................ 107
   A.34 subtractive4 ........................................ 108
   A.35 subtractive5 ........................................ 108
   A.36 sv .................................................. 108
   A.37 tristimulus ......................................... 108
   A.38 tristimulus model ................................... 109
   A.39 waveshaping1 ........................................ 109
B. Pitch Ranges ............................................. 111
C. Audio Examples ........................................... 113

Bibliography ................................................ 123

1. INTRODUCTION

Everybody is familiar with the subject of visualization. We have all seen numerous charts, diagrams, icons, and other kinds of abstract graphic representations of information. If we see a chart relating, e.g., income to taxes or age to cancer risk, we immediately gain an understanding of the structure of the corresponding dataset. We learn to read such charts and graphs from as early as elementary school. As humans are primarily visual beings, a visual representation is naturally the first idea that springs to mind when faced with the task of conveying information (or datasets from which the recipient is to deduce said information).

Representing data visually, however, has its drawbacks. For some applications, a sense other than vision is far more useful for monitoring data: hearing. If we use sound instead of images as our medium for representing data, we speak of sonification instead of visualization. Sonification can thus be thought of as the audio equivalent of visualization. Instead of creating graphs and charts of our data, we map the data to audio sample streams. This has certain advantages over a visual display:

• Free movement: When we watch data, we always have to keep our eyes on the screen (or another output device). If data is presented to us as sound, we can move about freely.
• Background monitoring: When some quantity is permanently supervised aurally, we can focus our attention on other tasks as long as the data stays stable and the feedback remains monotone. As soon as the data changes drastically (which should be reflected in a drastic change in sound), the sonification automatically draws our attention.

• Temporal resolution: The temporal resolution of hearing (20-30 ms) is about twice as fine as that of vision (50-60 ms). When spatial location is taken into account, human hearing can even differentiate time intervals down to 1 ms [Warren, 1993].

• High dynamic ranges: We hear over a large range of amplitudes and pitches, which allows for a high-resolution presentation.

• Supplementary information: Audio feedback can be used in combination with classical visualization.

• New impulses: As hearing simply differs from vision, a sonification may reveal structures in the data that were not detected by visualizing the dataset.

Fig. 1.1: Realtime sonification of motion data illustrated. Certain quantities of a motion (in this case rowing) are constantly measured. This constant stream of data is fed into a tool (in this case Pure Data), which creates continuous audio feedback to which the subject performing the motion listens. This sonification should improve the subject's perception of the movement, which in turn influences how he or she executes it.

Especially the aspects of free movement and better temporal resolution make sonification interesting when the dataset in question is created by measurements of human movement. These measurements could be positions, velocities, forces, and quantities derived from them. Such sonifications could provide additional feedback for improving motion sequences in professional sports or medical therapy. Obviously, realtime feedback in these applications must not hinder the subject in his or her mobility.
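The feedback loop of Fig. 1.1 ultimately comes down to repeatedly scaling each incoming data sample into an audio parameter. A minimal sketch of such a continuous mapping, in Python rather than the Pure Data patches the report itself uses, might look as follows (the sensor range, pitch range, and function names are illustrative assumptions):

```python
import math

def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map a sensor reading into a target parameter range."""
    t = (value - in_lo) / (in_hi - in_lo)
    t = min(max(t, 0.0), 1.0)  # clamp out-of-range readings
    return out_lo + t * (out_hi - out_lo)

def render_block(freq, phase, rate=44100, block=64):
    """Render one block of a sine wave; return the samples and new phase."""
    samples = []
    for _ in range(block):
        samples.append(math.sin(phase))
        phase += 2.0 * math.pi * freq / rate
    return samples, phase

# Illustrative example: map an oar speed of 0-5 m/s onto a pitch of 220-880 Hz,
# rendering one audio block per incoming sample.
phase = 0.0
for speed in (0.5, 2.0, 4.5):
    freq = scale(speed, 0.0, 5.0, 220.0, 880.0)
    block, phase = render_block(freq, phase)
    print(round(freq, 1), len(block))  # rising speed -> rising pitch
```

Carrying the oscillator phase across blocks is what keeps the output free of clicks when the pitch changes between samples, a point chapter 4 returns to under the heading of audio artifacts.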
The better temporal resolution of hearing compared to vision is useful because exact timing of movements is crucial in sports. Moreover, we are used to getting audio feedback from our movements anyway and to reacting to it (the sound of footsteps, the rustle of clothes, the impact of a ball...).

1.1 Previous work

Sonification, though still not a widespread method, has already been applied to a number of data exploration tasks. Jameson [1994] uses sonification to gain insight into the workflow of a program and to debug it. In the field of geology, sonification is already a common tool [Hayward, 1994]. Ekdale and Tripp [2005] use sonification for the classification of fossils. Other applications include the sonification of meteorological data [Bylund and Cole, 2001], the aural supervision of health measurements [Fitch and Kramer, 1994], and the navigation of maps for the visually impaired [Zhao et al., 2005].

Effenberg [1996] describes in detail the motivation for using sonification in the field of human movement. Taking into account various results from perceptual psychology and sports science, he assumes a positive effect on motion learning using sonifications derived from motion data. Especially with respect to temporal perception, he considers audio representations of movements promising. This assumption is further supported by a comparative analysis of the results of numerous studies using acoustic feedback for sports movements. In [Effenberg, 2004], Effenberg analyses, among other things, the effectiveness of identifying motion patterns according to their sonifications. Effenberg [2005] gives empirical results for identifying and reproducing the height of a jump using only visual and combined audiovisual feedback.

The use of sonification in sports led to a cooperation between the Institut für Sportwissenschaft und Sport and department II of the Institut für Informatik at the University of Bonn. One of the results of this cooperation is described in Melzer [2005].
Melzer presents an extension to the MotionLab software that is able to sonify various parameters of a motion sequence via parameter mapping sonification (see section 2.2.5). The data streams are turned into streams of MIDI messages that are sent to the built-in General MIDI synthesizer of Microsoft Windows. Data streams can be mapped either to the pitch or to the amplitude of a certain MIDI channel; arbitrary controller data cannot be sent. A short summary of this module can be found in [Effenberg et al., 2005]. This extension module was then implemented as a standalone application that received the data streams via TCP/IP.
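The idea behind such a pitch mapping can be pictured as follows: a normalised data value is quantised to a MIDI note number and wrapped into a raw note-on message for the synthesizer. This is a hedged reconstruction of the concept only, not code from Melzer's module, and the chosen note range is an assumption:

```python
def data_to_note(value, lo, hi, note_min=36, note_max=96):
    """Quantise a data value to a MIDI note number (range is an assumed choice)."""
    t = min(max((value - lo) / (hi - lo), 0.0), 1.0)
    return note_min + round(t * (note_max - note_min))

def note_on(channel, note, velocity=100):
    """Build the three raw bytes of a standard MIDI note-on message."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

# A value three quarters of the way up its range lands on note 81 (A5):
msg = note_on(0, data_to_note(0.75, 0.0, 1.0))
print(msg.hex())  # → "905164"
```

Quantising to discrete notes is exactly what limits such a MIDI-based approach: fine-grained variation within the data is rounded away, a restriction that motivates the continuous parameter mapping techniques of chapter 4.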