Models and Algorithms for Control of Sounding Objects
SOb - the Sounding Object
IST Project no. IST-2000-25287

Models and Algorithms for Control of Sounding Objects

Report Version: 2.0
Report Preparation Date: January 10, 2003
Classification: Pub.
Deliverable N. 8
Contract Start Date: 1 Jan. 2001
Duration: 2 years
Project Co-ordinator: University of Verona (UNIVR)
Partners: University of Udine (UNIUD), University of Limerick (UL), Royal Institute of Technology (KTH) - Stockholm

Project funded by the European Community under the 'Information Society Technologies' Programme (1998-2002)

Contents

1 Introduction

2 Toward a new model for sound control
  2.1 Introduction
  2.2 Control models
  2.3 Walking and running
    2.3.1 Sounds
    2.3.2 Tempo
  2.4 Controlling the sounds of walking and running
  2.5 Pilot experiment: listening to walking and running sounds
    2.5.1 Stimuli
    2.5.2 Subjects and procedure
    2.5.3 Results and discussion
  2.6 Conclusions
  2.7 Links

3 Scratching: from analysis to modelling
  3.1 Introduction
  3.2 Common set-up for experiments
    3.2.1 Method
    3.2.2 Material
    3.2.3 Instrument line-up
  3.3 Experiment 1 - Playing the turntable
    3.3.1 Introduction
    3.3.2 Equipment
    3.3.3 Material
    3.3.4 Techniques
  3.4 Experiment 2 - Analysis of a genuine performance
    3.4.1 Analysis
    3.4.2 Method
    3.4.3 Equipment
    3.4.4 Calibrations
    3.4.5 Measurements outline
    3.4.6 Focal points
    3.4.7 Issues concerning rhythm and timing
    3.4.8 Crossfader
    3.4.9 Techniques
    3.4.10 Patterns
    3.4.11 Discussion
  3.5 Experiment 3 - Wearing of the vinyl
    3.5.1 Method
    3.5.2 Results
    3.5.3 Discussion
  3.6 Design issues for a control model for scratching
    3.6.1 Baby scratch model
    3.6.2 Overview over other scratching techniques
    3.6.3 Existing scratching software and hardware
    3.6.4 Reflections
  3.7 References

4 Gestures in drumming and tactile vs. auditory feedback
  4.1 Introduction
  4.2 Experiments
    4.2.1 Pilot experiments
    4.2.2 Main experiment
  4.3 Results
  4.4 Conclusions

5 Models for sound control. Controlling the virtual bodhran - The Vodhran
  5.1 Introduction
    5.1.1 Description of how the Bodhran is played
    5.1.2 Outlook
  5.2 Sound generation
    5.2.1 Implementation
  5.3 Experiment 1: Implementing measurement data of drummers
    5.3.1 The Radio baton as control device
    5.3.2 The ddrum as control device
  5.4 Experiment 2: Tracking users movements
    5.4.1 The software system
    5.4.2 Motion tracking
    5.4.3 System design
    5.4.4 Future developments
  5.5 Conclusions
  5.6 Acknowledgments

A Modal synthesis

Chapter 1

Introduction

R. Bresin (Department of Speech, Music, and Hearing, KTH, [email protected])

Deliverable 8 deals with models and algorithms for control of sounding objects. This is an almost unexplored field: there is very little literature on the subject, and the little that exists is quite recent. The main applications of sound control so far have been musical, in particular the design of new instruments. Most of these applications concentrate on the design of new hardware interfaces; see the works by [6] and [55] for an overview.
Control of impact sounds has been used in robotics, where the level of the sound produced by the collision between the endpoint of a robotic manipulator and an object is used to control the impact force of the robot itself, a softer sound corresponding to a weaker impact [33]. In the IST European project MEGA, http://www.megaproject.org, gesture-tracking systems are used in artistic applications for real-time control of sound production. For instance, the movements of a dancer can be used to control the parameters of a sound synthesis algorithm.

The main reason why research on sound control is still in its early days could be that researchers have devoted great effort to sound synthesis algorithms rather than to the development of control models for them. This has led to systems with very sophisticated sound production but poor control possibilities. In particular, physics-based sound models require the specification of many parameters that translate the physical gesture on the modelled object into sound. For instance, a wooden bar grasped with the fingers or knocked with the knuckles produces two different sounds, corresponding to the two different actions.

The preliminary results presented in this Deliverable 8 of the SOb project are therefore a first effort to tackle the problem of designing suitable principles for the control of sound-generating objects. The main questions that the studies presented in this document try to answer are the following:

1. What are the relevant gestures in producing a sound?

2. What are the variables involved?

3. What are the user's intentions in producing a specific sound?

4. Which feedback does the user couple with: the tactile or the acoustical?

5. Is it possible to design a control model that translates user gestures into sound in an intuitive and direct way?

Starting points for answering these questions are research techniques and outcomes from studies on music performance carried out at KTH in recent years. Three studies are presented.

In the first study, a new model for sound control is proposed. It is based on models used in automatic music performance. In particular, the performance rules considered here are those that previous studies demonstrated to have links with body motion. Among them are the Legato and the Final ritard rules, which were found to produce effects on note duration analogous to those of walking and running steps. These rules are applied to the control of walking and running sound production. A listening test was used to validate the model.

In a second study, the various scratching techniques used by a professional DJ were analysed, not only from a sound-production point of view, but also by looking at the control movements made to obtain those sounds. The analysis of these movements is a starting point for the design of a hyper turntablist, in which sound production with physics-based techniques is controlled by functions resembling a DJ's movements.

A third study focuses on the analysis of gestures in drumming. Particular attention is given to players' behaviour when the auditory feedback from the sound production is artificially delayed relative to the impact time. Finally, a simple control model of drumming is proposed.
This model is based on data from a previous study in which the movements of the hand, arm and drumstick of four players were marked and filmed under different performing conditions.

This version of the document is not to be considered final; it will be updated during the course of the SOb project as new outcomes emerge from further analyses and interpretations of the large amount of data available. Any comments the reader may have about this document are very welcome.

Chapter 2

Toward a new model for sound control

Roberto Bresin, Anders Friberg and Sofia Dahl (Department of Speech, Music and Hearing, KTH, {roberto,andersf,sofia}@speech.kth.se)

2.1 Introduction

In the recent past, sound synthesis techniques have achieved remarkable results in reproducing real sounds, such as those of musical instruments. Unfortunately, most of these techniques focus only on the "perfect" synthesis of isolated sounds. As a consequence, the concatenation of these synthesized sounds in computer-controlled expressive performances often leads to unpleasant effects and artifacts. To overcome these problems, Dannenberg and Derenyi [17] proposed a performance system, based on spectral interpolation synthesis, that generates functions for the control of instruments.

Sound control can be more straightforward if sounds are generated with physics-based techniques that give access to control parameters directly connected to sound source characteristics, such as size, elasticity, mass and shape (a minimal sketch of such a model is given below). In this way, sound models that respond to physical gestures can be developed. This is exactly what happens in music performance, when the player acts with her/his body on the mechanics of the instrument, thus changing its acoustical behavior. It is therefore interesting to look at music performance research in order to identify the relevant gestures in sound control. In the following paragraphs, outcomes from studies on automatic music performance are considered.

2.2 Control models

The relations between music performance and body motion have been investigated in the recent past. Musicians use their bodies in a variety of ways to produce sound. Pianists use shoulders, arms, hands, and feet; trumpet players make great use of their lungs and lips; singers put into action their glottis, breathing system and phonatory system, and use expressive body postures to render their interpretation. Dahl [15] recently studied the movements and timing of percussionists playing an interleaved accent in drumming. The movement analysis showed that drummers prepared for the accented stroke by raising the drumstick to a greater height, thus arriving at the striking point with greater velocity.
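Appendix A discusses modal synthesis, the class of physics-based models referred to above. As a purely illustrative sketch of how source-related parameters can be exposed to a control layer, the following Python fragment renders an impact as a bank of exponentially damped sinusoids; the function name and all numeric values are assumptions chosen for illustration, not values taken from this report.

```python
import numpy as np

def modal_impact(freqs, decays, gains, dur=1.0, fs=44100):
    """Render an impact as a sum of exponentially damped sinusoids (a minimal modal model)."""
    t = np.arange(int(dur * fs)) / fs
    return sum(g * np.exp(-d * t) * np.sin(2.0 * np.pi * f * t)
               for f, d, g in zip(freqs, decays, gains))

# Illustrative values only: mode frequencies rise and decays shorten as the
# struck object becomes smaller and stiffer, so a gesture can be mapped onto
# these few parameters rather than onto recorded samples.
wood_like = modal_impact(freqs=[400.0, 1130.0, 2250.0],
                         decays=[8.0, 14.0, 25.0],
                         gains=[1.0, 0.5, 0.25])
```

Controlling such a model then amounts to choosing or modifying a handful of physically meaningful parameters, which is what makes gesture-to-sound mappings tractable.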
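The walking and running control model summarized in Chapter 1, and developed in the sections that follow, maps performance rules such as the Final ritard onto step timing. As a minimal sketch of that idea, the code below assumes a deceleration-shaped tempo curve of the form v(x) = (1 + (v_end^q - 1)x)^(1/q), in line with Friberg and Sundberg's model of final ritardandi derived from stopping runners; the function names and parameter values are illustrative assumptions, not the report's.

```python
def ritard_tempo(x, v_end=0.4, q=3.0):
    """Normalized tempo v(x) over normalized position x in [0, 1]:
    v(0) = 1 (nominal tempo), v(1) = v_end (final tempo)."""
    return (1.0 + (v_end ** q - 1.0) * x) ** (1.0 / q)

def step_onsets(n_steps=12, nominal_ioi=0.35):
    """Onset times (s) of decelerating footsteps: each inter-onset
    interval is the nominal interval divided by the local tempo."""
    onsets, t = [], 0.0
    for i in range(n_steps):
        onsets.append(round(t, 3))
        x = i / max(n_steps - 1, 1)          # normalized position of this step
        t += nominal_ioi / ritard_tempo(x)   # slower tempo -> longer interval
    return onsets

print(step_onsets())  # intervals stretch from 0.35 s toward 0.35 / 0.4 s
```

Each onset could then trigger a footstep sound, so that the same rule that shapes a musical ritardando shapes the deceleration of walking or running.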