Gestural Control of Music

Marcelo M. Wanderley

IRCAM - 1, Place Igor-Stravinsky - 75004 Paris - France
(Current address: Faculty of Music, McGill University, 555 Sherbrooke Street West, Montreal, Quebec H3A 1E3, Canada)

[email protected]

Abstract

Digital musical instruments do not depend on the physical constraints faced by their acoustic counterparts, such as the characteristics of tubes, membranes, strings, etc. This fact permits a huge diversity of possibilities regarding sound production, but on the other hand strategies to design and perform these new instruments need to be devised in order to provide the same level of control subtlety available in acoustic instruments. In this paper I review various topics related to gestural control of music using digital musical instruments and identify possible trends in this domain.

1 Introduction

The evolution of computer technology has brought to light a plethora of sound synthesis methods available on general and inexpensive computer platforms, allowing a large community direct access to real-time computer-generated sound. Both signal and physical models have already been considered sufficiently mature to be used in concert situations, although much research continues to be carried out on the subject, constantly bringing innovative solutions and developments [8] [88] [64].

On the other hand, input device technology that captures different human movements can also be viewed as being at an advanced stage [63] [7], considering both non-contact movements and manipulation (with the exception of extreme conditions, such as 3-dimensional whole-body acquisition in large spaces). Specifically regarding manipulation, tactile and force feedback devices for both non-musical and musical contexts have already been proposed [12] (for a survey on haptic devices, check the Haptics Community Web page at http://haptic.mech.mwu.edu/). Even so, many users still rely on the traditional piano-like keyboard as the main input device for musical interaction, a situation seemingly equivalent to the ubiquitous role played by the mouse and keyboard in traditional human-computer interaction.

Therefore, the question of how to design and perform new computer-based musical instruments (consisting of gesturally controlled, real-time computer-generated sound) needs to be considered in order to obtain levels of control subtlety similar to those available in acoustic instruments.

This topic amounts to a branch of knowledge known as human-computer interaction (HCI); for general information on HCI, the reader is directed to general textbooks such as [25] or to the ACM SIGCHI webpage at http://www.acm.org/sigchi/. In this context, various questions come to mind, such as:

• Which are the specific constraints that exist in the musical context with respect to general human-computer interaction?

• Given the various contexts related to interaction in sound generating systems, what are the similarities and differences within these contexts (interactive installations, digital musical instrument manipulation, dance-music interfaces)?

• How to design systems for these various musical contexts? Which system characteristics are common and which are context specific?

1.1 Human-Computer Interaction and Music

Gestural control of computer-generated sound can be seen as a highly specialized branch of human-computer interaction involving the simultaneous control of multiple parameters, timing, rhythm, and user training [62]. According to A. Hunt and R. Kirk [39]:

    In stark contrast to the commonly accepted choice-based nature of many computer interfaces are the control interfaces for musical instruments and vehicles, where the human operator is totally in charge of the action. Many parameters are controlled simultaneously and the human operator has an overall view of what the system is doing. Feedback is gained not by on-screen prompts, but by experiencing the moment-by-moment effect of each action with the whole body.

Hunt and Kirk consider various attributes as characteristic of real-time multiparametric control systems [39, pg. 232]. Some of these are:

• There is no fixed ordering to the human-computer dialogue.

• There is no single permitted set of options (e.g. choices from a menu) but rather a series of continuous controls.

• There is an instant response to the user's movements.

• The control mechanism is a physical and multi-parametric device which must be learned by the user until the actions become automatic.

• Further practice develops increased control intimacy and thus competence of operation.

• The human operator, once familiar with the system, is free to perform other cognitive activities whilst operating the system (e.g. talking while driving a car).

1.1.1 Interaction Context

Taking into account the specificities described above, let me consider the various existing contexts, sometimes called metaphors for musical control [104], in computer music. (A detailed analysis of different interaction contexts has been proposed in [100], taking into account two points of view: system design (engineering) and semantics (human-computer interaction).)

These different contexts are the result of the evolution of electronic technology, which allows, for instance, the same input device to be used in different situations, e.g. to generate sounds (notes) or to control the temporal evolution of a set of pre-recorded notes. If traditionally these two contexts corresponded to two separate roles in music (those of the performer and the conductor, respectively), today not only have the differences between the traditional roles been minimized, but new contexts derived from metaphors created in human-computer interaction are now available in music. One of these metaphors is drag and drop, which has been used in [104] with a graphical drawing tablet as the input device, as a sort of gesturally controlled sequencer, whereas in [80] and [102] the same tablet was used in the sense of a more traditional instrument.

Therefore, the same term interaction in a musical context may mean [100]:

• instrument manipulation (performer-instrument interaction) in the context of real-time sound synthesis control.

• device manipulation in the context of score-level control, e.g. a conductor's baton used to indicate the rhythm of a previously defined computer-generated sequence [51] [3] [48]. Wessel and Wright use the term dipping to designate this context [104].

• other interaction contexts related to traditional HCI interaction styles, such as drag and drop, scrubbing [104] or navigation [90].

• device manipulation in the context of post-production activities, for instance the gestural control of digital audio effects or of sound spatialisation. (Several papers on the control of digital audio effects are available from the DAFx site at http://echo.gaps.ssr.upm.es/COSTG6/goals.php3.)

• interaction in the context of interactive multimedia installations, where one person's or many people's actions are sensed in order to provide input values for an audio/visual/haptic system.

But also, to a different extent:

• interaction in the context of dance (dance/music interfaces) [15].

• computer games, i.e., manipulation of a computer game input device [22].

although in these two last cases the generation of sound is not necessarily the primary goal of the interaction.

1.1.2 Music as Supervisory Control

Another way to consider the different contexts in music is to relate them to supervisory control theory. Sheridan [81], for instance, draws a parallel in which the notions of zeroth, first and second order control correspond to different musical control levels, i.e., biomechanics (performer gestures and feedback), putting notes together, and composing and conducting the synchronization of musicians.

2 Gestural Control of Sound Synthesis

I will now focus on the main interest of this paper and analyze the situation of performer-digital musical instrument interaction. (Due to space constraints, other important and interesting modalities of human-computer interaction in music will not be studied here; an electronic publication (CD-ROM), Trends in Gestural Control of Music, co-edited by the author and M. Battier [97], may provide useful guidelines for the study of the modalities not discussed here.) The focus of this work can be summarized as expert interaction by means of the use of input devices to control real-time sound synthesis software.

The suggested strategy for approaching this subject consists in dividing gestural control of sound synthesis into four parts [98]:

• Definition and typologies of gesture

• Gesture acquisition and input device design

• Mapping of gestural variables to synthesis variables

• Synthesis algorithms

The goal is to show that all four parts are equally important to the design of new digital musical instruments. This has been developed in detail in [96]. Due to space constraints, I will focus here on item 2, gesture acquisition and input device design. Item 1 has been studied in collaboration with Claude Cadoz in [14] and item 3 in collaboration with Andy Hunt and Ross Kirk in [40]. Synthesis algorithms have been explored in various articles and textbooks, including [8] and [72], and also from the perspective of real-time control in [18].

3 Digital Musical Instruments

In this work, the term digital musical instrument (DMI) is used to represent an instrument that contains a gestural interface (or gestural controller unit) separate from a sound generation unit. Both units are independent and related by mapping strategies [34] [56] [76] [79]. This is shown in figure 1. (The term digital musical instrument [2] is used instead of virtual musical instrument (VMI) [57] due to the various meanings of VMI: in the software package Modalys, for example, a software-defined instrument is called a virtual instrument without necessarily involving an input device, which is also the common usage of the term in the field of physical modeling [88]. Conversely, various authors consider a VMI to be both the synthesis algorithm and the input device [56] [24] [35], although in this case the digital musical instrument is eventually much more real, or tangible, than virtual.)

The term gestural controller, used here to mean an input device for musical control, can be defined as the input part of the DMI, where physical interaction with the player takes place. Conversely, the sound generation unit can be seen as the synthesis algorithm and its controls. The mapping layer refers to the liaison strategies between the outputs of the gestural controller and the input controls of the synthesis algorithm. Details of each module of figure 1 will be considered in the rest of this document.

This separation is impossible in the case of traditional acoustic instruments, where the gestural interface is also part of the sound production unit. If one considers, for instance, a clarinet, the reed, keys, holes, etc. are at the same time both the gestural interface (where the performer interacts with the instrument) and the elements responsible for sound production. The idea of a DMI is analogous to "splitting" the clarinet in a way where one could separate these two functions (gestural interface and sound generator) and use them independently.

Clearly, this separation of the DMI into two independent units is potentially capable of extrapolating the functionalities of a conventional musical instrument, the latter being tied to physical constraints. On the other hand, basic characteristics of existing instruments may be lost and/or be difficult to reproduce, such as tactile/force feedback.

3.1 Gesture and Feedback

In order to devise strategies concerning the design of new digital musical instruments for gestural control of sound synthesis, it is essential to analyze the characteristics of the actions produced by expert instrumentalists during performance. These actions are commonly referred to as gestures in the musical domain. In order to avoid discussing all nuances of the meaning of gesture, let me initially consider performer gestures simply as the actions produced by the instrumentalist during performance; a detailed discussion is presented in [14]. I will use the term performer gesture throughout this document to mean both actions, such as prehension and manipulation, and non-contact movements.

[Diagram: gestures enter the gestural controller (INPUT), pass through the MAPPING layer to the sound production unit (signal model or physical model); primary feedback comes from the controller itself, secondary feedback from the generated sound.]

Figure 1: A Digital Musical Instrument representation.
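To make the modular structure of figure 1 concrete, here is a minimal Python sketch of a DMI split into its three units. The class and parameter names are illustrative assumptions, not an API from the paper or the cited literature; the paper prescribes only the separation itself.

```python
# Minimal sketch of the DMI structure of figure 1: a gestural controller,
# an explicit mapping layer, and a sound generation unit. All names and
# formulas are illustrative assumptions.

class GesturalController:
    """Input part of the DMI: delivers raw sensor values per control cycle."""
    def read(self):
        # e.g. breath pressure (0..1), lip pressure (0..1), MIDI-style key number
        return {"breath": 0.8, "lip": 0.4, "fingering": 62}

class SoundGenerator:
    """Synthesis algorithm and its input controls (here: a signal model)."""
    def set_controls(self, amplitude, frequency, vibrato_depth):
        self.amplitude, self.frequency, self.vibrato_depth = \
            amplitude, frequency, vibrato_depth

def mapping(sensor_values):
    """Mapping layer: relates controller outputs to synthesis inputs.
    Swapping this function changes the instrument's behavior without
    touching the controller or the synthesis algorithm."""
    return {
        "amplitude": sensor_values["breath"] ** 2,
        "frequency": 440.0 * 2 ** ((sensor_values["fingering"] - 69) / 12),
        "vibrato_depth": 0.1 * sensor_values["lip"],
    }

controller, synth = GesturalController(), SoundGenerator()
synth.set_controls(**mapping(controller.read()))  # one control cycle
```

Because the mapping is an explicit, replaceable stage, the "splitting the clarinet" idea described above becomes a matter of recombining independent software units.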

3.1.1 Performer Gestures

Instrumentalists simultaneously execute various types of gestures during performance. Some of them are necessary for the production of sound [11]; others may not be clearly related to sound production [23] [99], but are nevertheless present in most highly skilled instrumentalists' performances.

One can approach the study of gestures either by analyzing the possible functions of a gesture during performance [69] or by analyzing the physical properties of the gestures taking place [17]. By identifying gestural characteristics (functional, in a specific context, or physiological) one can ultimately gain insight into the design of gestural acquisition systems.

Regarding both approaches, it is also important to be aware of the feedback available to the performer, be it visual, auditory or tactile-kinesthetic. Feedback can also be classified depending on its characteristics, as:

• Primary/secondary [92], where primary feedback encompasses visual, auditory (clarinet key noise, for instance) and tactile-kinesthetic feedback, and secondary feedback relates to the sound produced by the instrument.

• Passive/active [7], where passive feedback relates to feedback provided through physical characteristics of the system (a switch noise, for instance) and active feedback is that produced by the system in response to a certain user action (the sound produced by the instrument).

Tactile-kinesthetic, or tactual [6], feedback is composed of the tactile and proprioceptive senses [75].

3.2 Gestural Acquisition

Once the gesture characteristics have been analyzed, it is essential to devise an acquisition system that will capture these characteristics for further use in the interactive system. In the case of performer-acoustic instrument interaction, this acquisition may be performed in three ways:

• Direct acquisition, where one or various sensors are used to monitor the performer's actions. The signals from these sensors present isolated basic physical features of a gesture: pressure, linear or angular displacement, and acceleration, for instance. A different sensor is usually needed to capture each physical variable of the gesture.

• Indirect acquisition, where gestures are isolated from the structural properties of the sound produced by the instrument [1] [44] [68] [26] [61]. Signal processing techniques can then be used in order to derive the performer's actions from the analysis of the fundamental frequency of the sound, its spectral envelope, its power distribution, etc.

• Physiological signal acquisition, i.e. the analysis of physiological signals such as the EMG [45] [65]. Commercial systems have been developed based on the analysis of muscle tension and used in musical contexts [85] [5] [86] [49] [105]. Although capturing the essence of the movement, this technique is hard to master, since it may be difficult to separate the meaningful parts of the signal obtained from physiological measurement.

Direct acquisition has the advantage of simplicity when compared to indirect acquisition, given the mutual influence of the different parameters present in the resulting sound (i.e. instrument acoustics, room effect and performer actions). Nevertheless, due to the independence of the variables captured, direct acquisition techniques may underestimate the interdependency of the various variables obtained.

3.2.1 Direct Acquisition

Direct acquisition is performed by the use of different sensors to capture performer actions. Depending on the type of sensors and on the combination of different technologies in various systems, different movements may be tracked. Citing B. Bongers, a well-known alternate controller designer [7]:

    Sensors are the sense organs of a machine. Sensors convert physical energy (from the outside world) into electricity (into the machine world). There are sensors available for all known physical quantities, including the ones humans use, and often with a greater range. For instance, ultrasound frequencies (typically 40 kHz, used for motion tracking) or light waves in the ultraviolet frequency range.

3.2.2 Sensor Characteristics and Musical Applications

Some authors consider that the most important sensor characteristics are sensitivity, stability and repeatability [83]. Other important characteristics relate to the linearity and selectivity of the sensor's output, its sensitivity to ambient conditions, etc. P. Garrett considers six descriptive parameters applicable to sensors [32]: accuracy, error, precision, resolution, span, and range.

In general instrumentation circuits, sensors typically need to be both precise and accurate, and to present a reasonable resolution. In the musical domain, it is often stressed that the choice of a transducer technology matching a specific musical characteristic relates to human performance and perception: for instance, mapping the output of a sensor that is precise but not very accurate to a variable controlling loudness may be satisfactory, but if it is used to control pitch, its inaccuracy will probably be more noticeable.

Various texts describe different sensor and transducer technologies for general and musical applications, such as [32] and [33] [7] [28], respectively. The reader is directed to these texts for further information.

3.2.3 Analog-to-MIDI Conversion

For the case of gesture acquisition with the use of different sensors, the signals obtained at the sensors' outputs are usually available in an analog format, basically in the form of voltage or current signals. In order to be able to use these signals as computer inputs, they need to be sampled and converted into a suitable format, usually MIDI (Musical Instrument Digital Interface) [42] or more advanced protocols [29]. Various analog-to-MIDI converters have been proposed and are widely available commercially; the first examples were developed already in the eighties [95] [84].

3.2.4 Comparison of Analog-to-MIDI Interfaces

Concerning the various discussions on the advantages and drawbacks of the MIDI protocol and its use [55], strictly speaking nothing forces someone to use MIDI or prevents the use of faster or different protocols. It is interesting to notice that many existing systems have used communication protocols other than MIDI in order to avoid speed and resolution limitations. One such system is the TGR (transducteur gestuel rétroactif), from ACROE [13].

I compare here six commonly available models of sensor-to-MIDI interfaces in order to give the reader an idea of their differences; their basic characteristics are shown in table 1. (I have developed an extended comparison of nine commercially available models, including technical characteristics and general features, based on information provided by manufacturers and/or owner manuals and manufacturers' web pages; it is available from the Gesture Research in Music home page at http://www.ircam.fr/gesture.)

One can notice that interface characteristics may differ to a significant amount, but most models present comparable figures. As already pointed out, the limiting factor regarding speed and resolution is basically the specification of the MIDI protocol, not the electronics involved in the design.

For the case of gesture acquisition with the use of 13I have developed an extended comparison of nine commer- different sensors, the signals obtained at the sen- cially available models, including technical characteristics and sors outputs are usually available in an analog for- general features, based on information provided from manufac- turers and/or owner manuals/manufacturer’s web pages. This mat, basically in the form of voltage or current sig- comparison is available from the Gesture Research in Music nals. In order to be able to use these signals as home-page at http://www.ircam.fr/gesture Interface ADB I/O AtoMIC Pro Digitizer (I-Cube) MIDIBox Midicreator Sensorlab Manufacturer BeeHive Ircam Infusion Systems NOTAM York Elect. Centre STEIM Platform Macintosh Any Any Any Any Any

Max. SR [Hz] < 90 1000 200/225 (12/8 bits) Approx. 400 120 250 Analog IN 4 32 32 8 8 32, 2x3 Usound Digital IN 4/6/8 8 - 16 8 8x16 key Input Res. 8 bits 10/ 8 bits 12/ 8 bits 8 bits 8 bits 8/14bits Usound Outputs 8 8 + 4 MIDI 8 switch + MIDI 6 + MIDI MIDI MIDI Size(HWD)[mm] 29x122x65 38x165x225 34x121x94 20x100x100(PCB) 41x227x166 35x200x100

Table 1: Comparison of 6 Analog-to-MIDI commercial interfaces.
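As a concrete illustration of the conversion stage of section 3.2.3, the following Python sketch quantizes an already-sampled sensor voltage into a MIDI control change message. The voltage range and controller number are assumptions chosen for the example; the 7-bit data byte makes visible the resolution ceiling discussed in section 3.2.4.

```python
# Sketch of analog-to-MIDI conversion, assuming a sensor voltage already
# sampled into the 0-5 V range (both range and CC number are assumptions).
# The 7-bit MIDI data byte caps resolution at 128 steps regardless of the
# quality of the analog front end.

def voltage_to_midi_cc(voltage, controller_number, channel=0,
                       v_min=0.0, v_max=5.0):
    """Quantize a sensor voltage to a 3-byte MIDI control change message."""
    value = int(round(127 * (voltage - v_min) / (v_max - v_min)))
    value = max(0, min(127, value))          # clip out-of-range input
    status = 0xB0 | (channel & 0x0F)         # control change status byte
    return bytes([status, controller_number & 0x7F, value])

# A breath sensor reading 3.3 V, sent as CC#2 (breath controller) on channel 1:
message = voltage_to_midi_cc(3.3, controller_number=2)
print(message.hex())  # 'b00254': status 0xB0, CC#2, value 84 of 127
```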

3.2.5 Indirect Acquisition

As opposed to direct acquisition, indirect acquisition provides information about performer actions from the evolution of structural properties of the sound being produced by an instrument. In this case, the only sensor is a microphone, i.e., a sensor measuring pressure or a gradient of pressure.

Due to the complexity of the information available in the instrument's sound captured by a microphone, different real-time signal processing techniques are used in order to distinguish the effect of a performer's action from environmental features, such as the influence of the acoustical properties of the room.

Generically, one could identify basic sound parameters to be extracted in real time. P. Depalle cites four parameters [98]:

• Short-time energy, related to the dynamic profile of the signal, indicates the dynamic level of the sound but also possible differences in the instrument's position relative to the microphone.

• Fundamental frequency, related to the sound's melodic profile, gives information about fingering, for instance.

• Spectral envelope, representing the distribution of sound partial amplitudes, may give information about the resonating body of the instrument.

• Amplitudes, frequencies and phases of sound partials, which can alone provide much of the information obtained from the previous parameters.

Obviously, in order to perform the analysis of the above or other parameters during indirect acquisition, it is important to consider the correct sampling of the signal. According to the Nyquist theorem, the sampling frequency needs to be at least twice the maximum frequency of the signal to be sampled. Although one could reasonably consider that the frequencies of performer actions are limited to a few hertz, fast actions can potentially present higher frequencies. A sampling frequency typically proposed for gestural acquisition is 200 Hz [50]; some systems use higher values, from 300 Hz to 1 kHz [13]. Recently, researchers have considered the ideal sampling frequency to be around 4 kHz [30] [29]. As a general figure, one can consider 1 kHz as sufficient for most applications.

Several works on indirect acquisition systems have already been presented. They include both hybrid systems (also using sensors), such as the hypercello [46], and pure indirect systems, such as the analysis of clarinet performances [68] [26] and of the guitar [61].
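The Python sketch below shows how two of the four parameters above, short-time energy and fundamental frequency, could be estimated from a single microphone frame. The naive autocorrelation pitch estimator is a stand-in assumption, not the technique used in the works cited; robust real-time analysis requires considerably more care.

```python
# Sketch of indirect acquisition (section 3.2.5): estimating short-time
# energy and fundamental frequency from one microphone frame. The simple
# autocorrelation estimator below is illustrative only.
import numpy as np

def short_time_energy(frame):
    """Dynamic profile: mean squared amplitude of the frame."""
    return float(np.mean(frame ** 2))

def fundamental_frequency(frame, sample_rate, f_lo=80.0, f_hi=1000.0):
    """Melodic profile: lag of the autocorrelation peak within [f_lo, f_hi]."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sample_rate / f_hi), int(sample_rate / f_lo)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sample_rate / lag

# 1024-sample frame of a synthetic 220 Hz tone at 44.1 kHz:
sr = 44100
t = np.arange(1024) / sr
frame = 0.5 * np.sin(2 * np.pi * 220.0 * t)
print(short_time_energy(frame), fundamental_frequency(frame, sr))
```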

3.3 Gestural Controllers

Once one or several sensors are assembled as part of a unique device, this device is called an input device or a gestural controller (in traditional human-computer interaction it is called simply an input device).

As cited above, the gestural controller is the part of the DMI where physical interaction takes place. Physical interaction here means the actions of the performer, be they body movements, empty-handed gestures or object manipulation, and the perception by the performer of the instrument's status and response by means of the tactile-kinesthetic, visual and auditory senses.

Due to the large range of human actions to be captured by the controller (according to A. Mulder, a virtual musical instrument, here called digital musical instrument, is ideally capable of capturing any gesture from the universe of all possible human movements and using it to produce any audible sound [56]) and depending on the interaction context where it will be used (cf. section 1.1.1), its design may vary from case to case. In order to analyze the various possibilities, we propose a three-tier classification of existing controller designs [98] [7]:

• Instrument-like controllers, where the input device design tends to reproduce each feature of an existing (acoustic) instrument in detail. Many examples can be cited, such as electronic keyboards, guitars, saxophones, marimbas, and so on. A subdivision of this class would be that of instrument-inspired controllers, which, although largely inspired by an existing instrument's design, are conceived for another use. One example is the SuperPolm violin, developed by S. Goto, A. Terrier, and P. Pierrot [66] [35], where the input device is loosely based on a violin shape but is used as a general device for the control of sound synthesis.

• Augmented instruments, also called hybrid controllers: instruments augmented by the addition of extra sensors [4] [46]. Commercial augmented instruments include the Yamaha Disklavier, used for instance in pieces by J.-C. Risset [71] [70]. Other examples include the flute [67] [107] and the trumpet [21] [41] [89], but any existing acoustic instrument may be instrumented to different degrees by the addition of sensors.

• Alternate controllers, whose design does not follow that of an established instrument. Some examples include the Hands [95], graphic drawing tablets [80], etc. For instance, a gestural controller using the shape of the oral cavity has been proposed in [60].

For instrument-like controllers, although the controller represents a simplified (first-order) model of the acoustic instrument, many of the gestural skills developed by the performer on the acoustic instrument can be readily applied to the controller. Conversely, for a non-expert performer these controllers present roughly the same constraints as those of an acoustic instrument (a fact that can be modified by the use of different mapping strategies, as shown in [76]): technical difficulties inherent to the acoustic instrument may have to be overcome by the non-expert performer.

Alternate controllers, on the other hand, allow the use of gesture vocabularies other than those of acoustic instrument manipulation, these being restricted only by the technology choices in the controller design, thus opening these devices to non-expert performers. Even so, performers still have to develop specific skills for mastering these new gestural vocabularies [95].

4 An Analysis of Existing Input Devices

A reasonable number of input devices have been proposed to perform real-time control of music [73] [63], most of them resulting from composers'/players' idiosyncratic approaches to personal artistic needs. These interfaces, although often revolutionary in concept, have mostly remained specific to the needs of their inventors. Four examples of gestural controllers are shown in figures 2 and 3.

The advantages and drawbacks of each controller type depend mostly on the user's goals and background, but unfortunately systematic means of evaluating gestural controllers are not commonly available. From an engineering point of view, it is important to propose means to compare existing designs (a similar situation occurs in other areas, such as haptic devices [36]) in order to evaluate their strong and weak points and eventually come up with guidelines for the design of new input devices.

Some authors consider that new devices, designed according to ergonomic and cognitive principles, could eventually become general tools for musical control [93] [57] [87] [58]; on the other hand, other authors claim that effort-demanding and hard-to-play instruments are the only ones that provide expressive possibilities to a performer [77] [78] [94].

4.1 Design Rationale: Engineering versus Idiosyncratic Approaches

The use of pure engineering/ergonomic approaches can be challenged by comparison with the evolution of input device design in human-computer interaction. In fact, researcher W. Buxton [10] considers HCI and ergonomics as failed sciences. He argues that although a substantial volume of literature on input device evaluation/design in these two areas has already been produced, currently available devices have benefited little from all this knowledge, and therefore major innovations are not often proposed.

The problem with both points of view (engineering versus idiosyncratic) seems to be their application context. Although one can always question the engineering approach by stressing the role of creativity against scientific design [20], the proposition of scientific methodologies is also a key factor for the evaluation of existing gestural controllers. Conversely, engineering methodologies shall not prevent the use of creativity in design, although this can be a side effect of structured design rationales. But without a common basis for evaluation, the differentiation between input devices and simple gadgets turns out to be hazardous.

As stated before, the design of a new input device for musical performance is generally directed towards the fulfillment of specific and sometimes idiosyncratic musical goals, but is always based on an engineering corpus of knowledge. This technical background allows the choice of transducer technologies and circuit designs that implement the interface needed to achieve the initial musical goals. (A description of several input device designs is proposed in [7], where Bongers reviews his work at STEIM, in Den Haag, and at the Royal Academy of Arts in Amsterdam; another good review of different controllers has been presented by J. Paradiso in [63].)

Figure 2: Left: Joseph Butch Rovan holding a WX7, an instrument-like (saxophone) controller by Yamaha. Right: Suguru Goto and the SuperPolm, an instrument-inspired controller (violin).

Figure 3: Left: manipulating the Pacom, by Starkier and Prevot. Right: Jean-Philippe Viollet using the WACOM graphic tablet. Both devices are considered as alternate controllers.

Therefore, although the final development goals are musical, and consequently any criticism of these goals turns into a question of aesthetic preferences, their design is based on engineering principles that can, and need to, be evaluated and compared. This evaluation is essential, for instance, in the selection of existing input devices for performing different tasks [80], but it can also be useful in the identification of promising new opportunities for the design of novel input devices [16].

4.2 Gestural Controller Design

It may also be useful to propose guidelines for the design of new input devices based on knowledge from related fields, such as experimental psychology, physiology and human-computer interaction [62].

Taking the example of research in human-computer interaction, many studies have been carried out on the design and evaluation of input devices for general (non-expert) interaction. The most important goal in these studies is the improvement of accuracy and/or time response for a certain task, following the relationship known as Fitts' law [47], which in its original form predicts the movement time to a target of width W at distance D as MT = a + b log2(2D/W). Also, standard test methodologies have been proposed; they generally consist of pointing and/or dragging tasks, where the size of and distance between target squares are used as test parameters.

In 1994, R. Vertegaal and collaborators presented a methodology, derived from standard HCI tests, that addressed the comparison of input devices in a timbre navigation task [90] [92]. Although pioneering in the field, the methodology used consisted of a pure selection (pointing and acquisition) task, i.e., the context of the test was navigation in a four-parameter timbral space [91], not a traditional musical context in the sense of instrumental performance.

In 1996, Vertegaal et al. [93] [87] proposed an attempt to systematically relate a hypothetical musical function (dynamic, either absolute or relative, or static) to a specific sensor technology and to the feedback available with this technology. This means that certain sensor technologies may outperform others for a specific musical function. The interest of this work is that it allows a designer to select a sensor technology based on the proposed relationships, thus reducing the need for idiosyncratic solutions. An evaluation of this methodology is presented in [102].

Another attempt to address the evaluation of well-known HCI methodologies and their possible adaptation to the musical domain was presented in [62]. Although one cannot expect to apply methodologies from other fields directly to the musical domain, at least the analysis of similar developments in better established fields may help in finding directions suitable for the case of computer music.

5 Mapping of Gestural Variables to Synthesis Inputs

Once gesture variables are available, either from independent sensors or as a result of signal processing techniques in the case of indirect acquisition, one then needs to relate these output variables to the available synthesis input variables.

Depending on the sound synthesis method to be used, the number and characteristics of these input variables may vary. For signal-model methods, one may have a) amplitudes, frequencies and phases of sinusoidal sound partials for additive synthesis; b) an excitation frequency plus each formant's center frequency, bandwidth, amplitude and skew for formant synthesis; c) carrier and modulation coefficients (c:m ratio) for frequency modulation (FM) synthesis; etc. It is clear that the relationship between the gestural variables and the available synthesis inputs is far from obvious: how does one relate a gesture to a c:m ratio?

For the case of physical models, the available variables are usually the input parameters of an instrument, such as blow pressure, bow velocity, etc. In a sense, the mapping of gestures to the synthesis inputs is here more evident, since the relation of these inputs to the algorithm already encompasses the multiple dependencies based on the physics of the particular instrument.

5.1 Systematic Study of Mapping

The systematic study of mapping is an area that is still underdeveloped. Only a few works have been proposed that analyze the influence of mapping on digital musical instrument performance or suggest ways to define mappings relating controller variables to synthesis inputs. Examples of such works include [9], [43], [27], [106], [19], [54], [76], [59], [101], [38], [31], [39], [53], and [18]. A detailed review of the existing literature on mapping was presented in [14] and in [96]. Discussions on the role of mapping in computer music have also been carried on in the Working Group on Interactive Systems and Instrument Design in Music at ICMA and EMF (http://www.notam.uio.no/icma/interactivesystems/wg.html), where a complete bibliography on the subject is available, as well as suggestions of basic readings and links to existing research.

5.2 Mapping Strategies

Although simple one-to-one or direct mappings are by far the most commonly used, other mapping strategies are possible. For instance, we have shown that, for the same gestural controller and synthesis algorithm, the choice of mapping strategy may be the determinant factor concerning the expressivity of the instrument [76]. The main idea was to challenge the main directions in digital musical instrument design, i.e. input device design and research on different synthesis algorithms, and to focus on the importance of different mappings using off-the-shelf controllers and standard synthesis algorithms.

Different mapping strategies were applied to the simulation of traditional acoustic single-reed instruments using the WX7 controller, based on the actual functioning of the single reed. Three basic strategies (one-to-one, one-to-many and many-to-one) were used in order to propose different mappings, from a simple one-to-one to complex mappings simulating the physical behavior of the clarinet's reed. This is shown in figure 4.

We could show, from the experience of the single-reed instrument performers who tried the system, that the use of different mappings did influence the expressivity obtained during playing, without any modification of either the input device or the synthesis algorithm.

5.3 Mapping for General Musical Performance

The example above, applying different mapping strategies to the simulation of instrumental performance, was derived from the actual physical behavior of the acoustic instrument. But in the case of an alternate digital musical instrument, the possible mapping strategies to be applied are far from obvious, since no model of the mapping strategies to be used is available. Even so, it can be demonstrated that complex mappings may influence user performance in the manipulation of general input devices in a musical context.

An interesting work by A. Hunt and R. Kirk [37] [38] [39] presented a study of the influence over time of the choice of mapping strategy on subject performance in real-time musical control tasks. User performance was measured over a period of several weeks and showed that complex mapping strategies used with the multi-parametric instrument allowed better performance than simpler mappings for complex tasks (various parameters changing simultaneously), and also that performance with complex mapping strategies improved over time.

breath dynamics (Y)

loudness lip pressure vibrato (X, relative)

fingering fundamental frequency (X, absolute)

breath dynamics (Y)

loudness lip pressure vibrato (X, relative) fingering fundamental frequency (X, absolute)

breath dynamics (Y)

loudness lip pressure vibrato (X, relative) fingering fundamental frequency (X, absolute)

Figure 4: Examples of different mapping strategies for the simulation of instrumental performance [76]. over time. 6 Developments in Sound Syn- thesis Methods
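The following Python sketch contrasts the structure of the strategies shown in figure 4. The coupling formulas are invented for illustration only; the mappings actually used in [76] model the physical behavior of the single reed.

```python
# Structural sketch of the strategies of figure 4: a one-to-one mapping
# versus a coupled one in the spirit of [76]. Coefficients are illustrative
# assumptions, not the reed model from the cited work.

def one_to_one(breath, lip, fingering_freq):
    # Each controller output drives exactly one synthesis input.
    return {"loudness": breath,
            "vibrato": lip,
            "frequency": fingering_freq}

def coupled(breath, lip, fingering_freq):
    # Many-to-one: loudness depends on breath AND lip, as embouchure
    # affects air flow on a real reed. One-to-many: breath also colors
    # pitch slightly, so parameters cannot be adjusted in isolation.
    return {"loudness": breath * (1.0 - 0.3 * lip),
            "vibrato": lip * breath,
            "frequency": fingering_freq * (1.0 + 0.002 * breath)}

print(one_to_one(0.9, 0.5, 466.16))
print(coupled(0.9, 0.5, 466.16))
```

With the coupled version, the same gesture vocabulary yields interdependent synthesis behavior, which is precisely the property found to affect expressivity in [76].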

5.4 A Model of Mapping as Two Independent Layers

Mapping can be implemented as a single layer [9] [19]. In this case, a change of either the gestural controller or the synthesis algorithm would mean the definition of a different mapping. One proposition to overcome this situation is the definition of mapping as two independent layers: a mapping of control variables to intermediate parameters and a mapping of intermediate parameters to synthesis variables [101].

This means that the use of different gestural controllers would necessitate different mappings in the first layer, but the second layer, between intermediate parameters and synthesis parameters, would remain unchanged. Conversely, changing the synthesis method involves the adaptation of the second layer, considering that the same abstract parameters can be used, but does not interfere with the first layer, therefore being transparent to the performer.

The definition of those intermediate parameters, or of an intermediate abstract parameter layer, can be based on perceptual variables such as timbre, loudness and pitch, or on other perceptual characteristics of sounds [103] [52] [31], or it can have no relationship to perception, being then arbitrarily chosen by the composer or performer [101].

6 Developments in Sound Synthesis Methods

At the other extreme of current trends regarding digital musical instruments, various developments in sound synthesis methods have been proposed, among them methods for both physical and signal models.

Physical models are specially useful for a realistic simulation of a given acoustic instrument, and today models of various instruments exist [82] and are commercially available. Disadvantages of physical models include the lack of analysis methods, difficulties regarding continuous morphing between different models and, most of all, the complexity of real-time performance using these models, which corresponds to the difficulties encountered with the real acoustic instrument.

Signal models, specifically additive synthesis [74], present the advantage of having well-developed analysis tools that allow the extraction of parameters corresponding to a given sound. Therefore, the morphing of parameters from different sounds can lead to continuous transformations between different instruments. Although not necessarily reproducing the full behavior of the original instrument, the flexibility allowed by signal models may be interesting for the prototyping of control strategies, since the mapping is left to the instrument designer.
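Bridging the two preceding sections, here is a minimal Python sketch of the two-layer mapping of section 5.4, with an additive-synthesis second layer in the spirit of section 6. All parameter names and formulas are illustrative assumptions, not those of [101].

```python
# Sketch of two-layer mapping: controller variables -> abstract perceptual
# parameters (layer 1) -> synthesis inputs (layer 2). Replacing the
# controller touches only layer 1; replacing the synthesizer only layer 2.

def layer1_tablet(x, y, pressure):
    """Controller-specific: a drawing tablet to abstract parameters."""
    return {"pitch": 220.0 * 2 ** (2 * x),   # two octaves across the tablet
            "loudness": pressure,
            "brightness": y}

def layer2_additive(abstract):
    """Synthesis-specific: abstract parameters to additive-synthesis partials.
    Brightness tilts the spectral envelope toward higher partials."""
    f0 = abstract["pitch"]
    amp = abstract["loudness"]
    tilt = 2.0 - 1.5 * abstract["brightness"]
    return [(k * f0, amp / k ** tilt) for k in range(1, 9)]

partials = layer2_additive(layer1_tablet(x=0.5, y=0.3, pressure=0.8))
# -> list of (frequency, amplitude) pairs for an oscillator bank
```

A second controller (say, a baton tracker) would only require a new layer-1 function producing the same abstract parameters, leaving the additive layer, and the performer's mental model of it, untouched.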
7 Conclusions

This paper has critically commented on various topics related to real-time, gesturally controlled computer-generated sound.

I have discussed the specificities of the interaction between a human and a computer in various musical contexts. These contexts represent different interaction metaphors for music/sound control and cannot be underestimated, since they define the level of interaction with the computer.

I have then focused on the specific case of real-time gestural control of sound synthesis, and presented and discussed the various constituent parts of a digital musical instrument.

I have claimed that a balanced analysis of these constituent parts is an essential step towards the design of new instruments, although current developments many times tend to focus either on the design of new gestural controllers or on the proposition of different synthesis algorithms.

8 Suggestions of Future Research Directions

Finally, I present suggestions of topics for future research that I believe can eventually lead to new directions in the design and performance of digital musical instruments.

• Concerning gestural acquisition, the use of indirect acquisition through the analysis of the sound produced by acoustic instruments may help the design of new gestural controllers. Although various works have been proposed in this direction, a general overview showing the current state of the research in this area was missing and was presented in [96]. It can serve as a basis for future developments in the field.

• The development of evaluation techniques for multiparametric expert user tasks is a difficult problem whose solution may improve our knowledge of the actual motor behavior of performers in these circumstances. The definition of tasks taking into account different musical contexts may also help the comparison of existing devices and suggest design methodologies for the development of new digital musical instruments.

• The study of mapping strategies is, in my opinion, quite underdeveloped. We have been pioneers in showing the isolated effect of mapping on digital musical instrument design, and another important work recently carried out at the University of York has pushed further our knowledge about general multiparametric control situations and the influence of mapping on performance. From these studies there seems to exist a huge unexplored potential in the definition of mapping strategies for different contexts.

Acknowledgements

This work is a result of a PhD thesis recently completed at IRCAM - Centre Pompidou, in Paris, thanks to funding through a doctoral grant from CNPq, Brazil.

Thanks to my advisors Philippe Depalle and Xavier Rodet, and to Marc Battier, Claude Cadoz, Shlomo Dubnov, Andy Hunt, Fabrice Isart, Ross Kirk, Nicola Orio, Joseph Butch Rovan, Norbert Schnell, Marie-Hélène Serra, Hugues Vinet, and Jean-Philippe Viollet, for various discussions, ideas, and comments throughout my stay at IRCAM.

This paper contains material already published by the author and collaborators, duly referenced throughout the text.

References

[1] N. Bailey, A. Purvis, I. W. Bowler, and P. D. Manning. Applications of the Phase Vocoder in the Control of Real-Time Electronic Musical Instruments. Interface, 22:259–275, 1993.
[2] M. Battier. Les Musiques Electroacoustiques et l'environnement informatique. PhD thesis, University of Paris X - Nanterre, 1981.
[3] R. Boie, M. V. Mathews, and A. Schloss. The Radio Drum as a Controller. In Proc. of the 1989 International Computer Music Conference, pages 42–45. San Francisco, Calif.: International Computer Music Association, 1989.
[4] B. Bongers. The Use of Active Tactile and Force Feedback in Timbre Controlling Electronic Instruments. In Proc. of the 1994 International Computer Music Conference, pages 171–174. San Francisco, Calif.: International Computer Music Association, 1994.
[5] B. Bongers. An Interview with Sensorband. Computer Music J., 22(1):13–24, 1998.
[6] B. Bongers. Tactual Display of Sound Properties in Electronic Musical Instruments. Displays, 18:129–133, 1998.
[7] B. Bongers. Trends in Gestural Control of Music, chapter Physical Interfaces in the Electronic Arts. Interaction Theory and Interfacing Techniques for Real-time Performance. Ircam - Centre Pompidou, 2000.
[8] G. Borin, G. dePoli, and A. Sarti. Musical Signal Processing, chapter Musical Signal Synthesis, pages 5–30. Swets and Zeitlinger, 1997.
[9] I. Bowler, A. Purvis, P. Manning, and N. Bailey. On Mapping N Articulation onto M Synthesiser-Control Parameters. In Proc. of the 1990 International Computer Music Conference, pages 181–184. San Francisco, Calif.: International Computer Music Association, 1990.
[10] W. Buxton. Trends in Gestural Control of Music, chapter Round Table. Ircam - Centre Pompidou, 2000.
[11] C. Cadoz. Instrumental Gesture and Musical Composition. In Proc. of the 1988 International Computer Music Conference, pages 1–12. San Francisco, Calif.: International Computer Music Association, 1988.
[12] C. Cadoz, L. Lisowski, and J. L. Florens. A Modular Feedback Keyboard Design. Computer Music J., 14(2):47–56, 1990.
[13] C. Cadoz and C. Ramstein. Capture, Représentation and "Composition" of the Instrumental Gesture. In Proc. of the 1990 International Computer Music Conference, pages 53–56. San Francisco, Calif.: International Computer Music Association, 1990.
[14] C. Cadoz and M. M. Wanderley. Trends in Gestural Control of Music, chapter Gesture-Music. Ircam - Centre Pompidou, 2000.
[15] A. Camurri. Interactive Dance/Music Systems. In Proc. of the 1995 International Computer Music Conference, pages 245–252. San Francisco, Calif.: International Computer Music Association, 1995.
[16] S. Card, J. Mackinlay, and G. Robertson. A Morphological Analysis of the Design Space of Input Devices. ACM Transactions on Information Systems, 9(2):99–122, 1991.
[17] I. Choi. Cognitive Engineering of Gestural Primitives for Multi-Modal Interaction in a Virtual Environment. In Proc. IEEE International Conference on Systems, Man and Cybernetics (SMC'98), pages 1101–1106, 1998.
[18] I. Choi. Trends in Gestural Control of Music, chapter Gestural Primitives and the Context for Computational Processing in an Interactive Performance System. Ircam - Centre Pompidou, 2000.
[19] I. Choi, R. Bargar, and C. Goudeseune. A Manifold Interface for a High Dimensional Control Space. In Proc. of the 1995 International Computer Music Conference, pages 385–392. San Francisco, Calif.: International Computer Music Association, 1995.
[20] P. Cook. Principles for Designing Computer Music Controllers. In Proc. of the NIME Workshop, CHI 2001, 2001.
[21] P. Cook, D. Morril, and J.-O. Smith. A MIDI Control and Performance System for Brass Instruments. In Proc. of the 1993 International Computer Music Conference, pages 130–133. San Francisco, Calif.: International Computer Music Association, 1993.
[22] A. de Ritis. Senses of Interaction: What does Interactivity in Music Mean Anyway? Focus on the Computer Game Industry. In Proc. of the Society for Artificial Intelligence and the Simulation of Behavior Conference, London, 2001.
[23] F. Delalande. La gestique de Gould. In Glenn Gould Pluriel, pages 85–111. Louise Courteau, éditrice, inc., 1988.
[24] P. Depalle, S. Tassart, and M. M. Wanderley. Instruments Virtuels. Résonance, (12):5–8, Sept. 1997.
[25] A. J. Dix, J. Finlay, G. Abowd, and R. Beale. Human-Computer Interaction. Prentice Hall, second edition, 1998.
[26] E. B. Egozy. Deriving Musical Control Features from a Real-Time Timbre Analysis of the Clarinet. Master's thesis, Massachusetts Institute of Technology, 1995.
[27] S. Fels. Glove Talk II: Mapping Hand Gestures to Speech Using Neural Networks. PhD thesis, University of Toronto, 1994.
[28] E. Flety. Trends in Gestural Control of Music, chapter 3D Gesture Acquisition Using Ultrasonic Sensors. Ircam - Centre Pompidou, 2000.
[29] A. Freed, R. Avizienis, T. Suzuki, and D. Wessel. Scalable Connectivity Processor for Computer Music Performance Systems. In Proc. of the 2000 International Computer Music Conference. San Francisco, Calif.: International Computer Music Association, 2000.
[30] A. Freed and D. Wessel. Communication of Musical Gesture using the AES/EBU Digital Audio Standard. In Proc. of the 1998 International Computer Music Conference, pages 220–223. San Francisco, Calif.: International Computer Music Association, 1998.
[31] G. Garnett and C. Goudeseune. Performance Factors in Control of High-Dimensional Spaces. In Proc. of the 1999 International Computer Music Conference, pages 268–271. San Francisco, Calif.: International Computer Music Association, 1999.
[32] P. H. Garrett. Advanced Instrumentation and Computer I/O Design. Real-time System Computer Interface Engineering. IEEE Press, 1994.
[33] N. Gershenfeld and J. Paradiso. Musical Applications of Electric Field Sensing. Computer Music J., 21(2):69–89, 1997.
[34] S. Gibet. Codage, Représentation et Traitement du Geste Instrumental. PhD thesis, Institut National Polytechnique de Grenoble, 1987.
[35] S. Goto. The Aesthetics and Technological Aspects of Virtual Musical Instruments: The Case of the SuperPolm MIDI Violin. Leonardo Music Journal, 9:115–120, 1999.
[36] V. Hayward and O. R. Astley. Robotics Research: The 7th International Symposium, chapter Performance Measures for Haptic Interfaces, pages 195–207. Springer Verlag, 1996.
[37] A. Hunt. Radical User Interfaces for Real-time Musical Control. PhD thesis, University of York, 1999.
[38] A. Hunt and R. Kirk. Radical User Interfaces for Real-Time Control. In Euromicro 99 Conference, Milan, 1999.
[39] A. Hunt and R. Kirk. Trends in Gestural Control of Music, chapter Mapping Strategies for Musical Performance. Ircam - Centre Pompidou, 2000.
[40] A. Hunt, M. M. Wanderley, and R. Kirk. Towards a Model of Mapping Strategies for Instrumental Performance. In Proc. of the 2000 International Computer Music Conference, pages 209–212. San Francisco, Calif.: International Computer Music Association, 2000.
[41] J. Impett. A Meta-Trumpet(er). In Proc. of the 1994 International Computer Music Conference, pages 147–150. San Francisco, Calif.: International Computer Music Association, 1994.
[42] International MIDI Association. MIDI Musical Instrument Digital Interface Specification 1.0. North Hollywood: International MIDI Association, 1983.
[43] M. Lee and D. Wessel. Connectionist Models for Real-Time Control of Synthesis and Compositional Algorithms. In Proc. of the 1992 International Computer Music Conference, pages 277–280. San Francisco, Calif.: International Computer Music Association, 1992.
[44] C. Lippe. A Composition for Clarinet and Real-Time Signal Processing: Using Max on the IRCAM Signal Processing Workstation. In 10th Italian Colloquium on Computer Music, pages 428–432, 1993.
[45] H. Lusted and B. Knapp. Controlling Computers with Neural Signals. Scientific American, pages 58–63, 1996.
[46] T. Machover. Hyperinstruments - A Progress Report 1987-1991. Technical report, Massachusetts Institute of Technology, 1992.
[47] I. S. MacKenzie and W. Buxton. Extending Fitts' Law to Two-Dimensional Tasks. In Proc. Conf. on Human Factors in Computing Systems (CHI'92), pages 219–226, 1992.
[48] T. Marrin and J. Paradiso. The Digital Baton: A Versatile Performance Instrument. In Proc. of the 1997 International Computer Music Conference, pages 313–316. San Francisco, Calif.: International Computer Music Association, 1997.
[49] T. Marrin-Nakra. Trends in Gestural Control of Music, chapter Searching for Meaning in Gestural Data: Interpretive Feature Extraction and Signal Processing for Affective and Expressive Content. Ircam - Centre Pompidou, 2000.
[50] M. V. Mathews and G. Bennett. Real-time Synthesizer Control. Technical Report 5/78, IRCAM, 1978.
[51] M. V. Mathews and F. R. Moore. GROOVE - A Program to Compose, Store and Edit Functions of Time. Communications of the ACM, 13(12):715–721, December 1970.
[52] E. Métois. Musical Sound Information - Musical Gestures and Embedding Systems. PhD thesis, Massachusetts Institute of Technology, 1996.
[53] P. Modler. Trends in Gestural Control of Music, chapter Neural Networks for Mapping Gestures to Sound Synthesis. Ircam - Centre Pompidou, 2000.
[54] P. Modler and I. Zannos. Emotional Aspects of Gesture Recognition by a Neural Network, using Dedicated Input Devices. In Proc. KANSEI - The Technology of Emotion Workshop, 1997.
[55] F. R. Moore. The Dysfunctions of MIDI. In Proc. of the 1987 International Computer Music Conference, pages 256–262. San Francisco, Calif.: International Computer Music Association, 1987.
[56] A. Mulder. Virtual Musical Instruments: Accessing the Sound Synthesis Universe as a Performer. In Proceedings of the First Brazilian Symposium on Computer Music, 1994.
[57] A. Mulder. Design of Gestural Constraints Using Virtual Musical Instruments. PhD thesis, School of Kinesiology, Simon Fraser University, Canada, 1998.
[58] A. Mulder. Trends in Gestural Control of Music, chapter Towards a Choice of Gestural Constraints for Instrumental Performers. Ircam - Centre Pompidou, 2000.
[59] A. Mulder, S. Fels, and K. Mase. Empty-handed Gesture Analysis in Max/FTS. In Proc. KANSEI - The Technology of Emotion Workshop, 1997.
[60] N. Orio. A Gesture Interface Controlled by the Oral Cavity. In Proc. of the 1997 International Computer Music Conference, pages 141–144. San Francisco, Calif.: International Computer Music Association, 1997.
[61] N. Orio. The Timbre Space of the Classical Guitar and its Relationship with the Plucking Techniques. In Proc. of the 1999 International Computer Music Conference, pages 391–394. San Francisco, Calif.: International Computer Music Association, 1999.
[62] N. Orio, N. Schnell, and M. M. Wanderley. Input Devices for Musical Expression: Borrowing Tools from HCI. In Workshop on New Interfaces for Musical Expression - ACM CHI'01, 2001.
[63] J. Paradiso. New Ways to Play: Interfaces. IEEE Spectrum, 34(12):18–30, 1997.
[64] G. Peeters and X. Rodet. Non-Stationary Analysis/Synthesis using Spectrum Peak Shape Distortion, Phase and Reassignment. In Proceedings of the International Congress on Signal Processing Applications Technology - ICSPAT, 1999.
[65] R. Picard. Affective Computing. MIT Press, 1997.
[66] P. Pierrot and A. Terrier. Le violon. Technical report, IRCAM, 1997.
[67] D. Pousset. La Flute-MIDI, l'histoire et quelques applications. Mémoire de Maîtrise, Université Paris-Sorbonne, 1992.
[68] M. Puckette and C. Lippe. Getting the Acoustic Parameters from a Live Performance. In 3rd International Conference for Music Perception and Cognition, pages 328–333, 1994.
[69] C. Ramstein. Analyse, Représentation et Traitement du Geste Instrumental. PhD thesis, Institut National Polytechnique de Grenoble, December 1991.
[70] J. C. Risset. Évolution des outils de création sonore. In Interfaces Homme-Machine et Création Musicale, pages 17–36. Hermes Science Publishing, 1999.
[71] J. C. Risset and S. Van Duyne. Real-Time Performance Interaction with a Computer-Controlled Acoustic Piano. Computer Music J., 20(1):62–75, 1996.
[72] C. Roads. Computer Music Tutorial. MIT Press, 1996.
[73] C. Roads. Computer Music Tutorial, chapter Musical Input Devices, pages 617–658. The MIT Press, 1996.
[74] X. Rodet and P. Depalle. A New Additive Synthesis Method using Inverse Fourier Transform and Spectral Envelopes. In Proc. of the 1992 International Computer Music Conference, pages 410–411. San Francisco, Calif.: International Computer Music Association, 1992.
[75] J. P. Roll. Traité de Psychologie Expérimentale, chapter Sensibilités cutanées et musculaires. Presses Universitaires de France, 1994.
[76] J. Rovan, M. M. Wanderley, S. Dubnov, and P. Depalle. Instrumental Gestural Mapping Strategies as Expressivity Determinants in Computer Music Performance. In Proc. of the KANSEI - The Technology of Emotion Workshop, Genova, Italy, Oct. 1997.
[77] J. Ryan. Some Remarks on Musical Instrument Design at STEIM. Contemporary Music Review, 6(1):3–17, 1991.
[78] J. Ryan. Effort and Expression. In Proc. of the 1992 International Computer Music Conference, pages 414–416. San Francisco, Calif.: International Computer Music Association, 1992.
[79] S. Sapir. Interactive Digital Audio Environments: Gesture as a Musical Parameter. In Proc. COST-G6 Conference on Digital Audio Effects (DAFx'00), pages 25–30, 2000.
[80] S. Serafin, R. Dudas, M. M. Wanderley, and X. Rodet. Gestural Control of a Real-Time Physical Model of a Bowed String Instrument. In Proc. of the 1999 International Computer Music Conference, pages 375–378. San Francisco, Calif.: International Computer Music Association, 1999.
[81] T. Sheridan. Musings on Music Making and Listening: Supervisory Control and Virtual Reality. In Human Supervision and Control in Engineering and Music Workshop, Kassel, Germany, September 2001.
[82] J. O. Smith. Musical Signal Processing, chapter Acoustic Modeling using Digital Waveguides, pages 221–263. Swets & Zeitlinger, Lisse, The Netherlands, 1997.
[83] R. L. Smith. The Electrical Engineering Handbook, chapter Sensors. CRC Press, 1993.
[84] M. Starkier and P. Prevot. Real-Time Gestural Control. In Proc. of the 1986 International Computer Music Conference, pages 423–426. San Francisco, Calif.: International Computer Music Association, 1986.
[85] A. Tanaka. Musical Technical Issues in Using Interactive Instrument Technology with Applications to the BioMuse. In Proc. of the 1993 International Computer Music Conference, pages 124–126. San Francisco, Calif.: International Computer Music Association, 1993.
[86] A. Tanaka. Trends in Gestural Control of Music, chapter Musical Performance Practice on Sensor-based Instruments. Ircam - Centre Pompidou, 2000.
[87] T. Ungvary and R. Vertegaal. Trends in Gestural Control of Music, chapter Cognition and Physicality in Musical Cyberinstruments. Ircam - Centre Pompidou, 2000.
[88] V. Välimäki and T. Takala. Virtual Musical Instruments - Natural Sound Using Physical Models. Organised Sound, 1(2):75–86, 1996.
[89] C. Vergez. Trompette et trompettiste: un système dynamique non linéaire analysé, modélisé et simulé dans un contexte musical. PhD thesis, Ircam - Université Paris VI, 2000.
[90] R. Vertegaal. An Evaluation of Input Devices for Timbre Space Navigation. Master's thesis, Department of Computing, University of Bradford, 1994.
[91] R. Vertegaal and E. Bonis. ISEE: An Intuitive Sound Editing Environment. Computer Music J., 18(2):12–29, 1994.
[92] R. Vertegaal and B. Eaglestone. Comparison of Input Devices in an ISEE Direct Timbre Manipulation Task. Interacting with Computers, 8(1):13–30, 1996.
[93] R. Vertegaal, T. Ungvary, and M. Kieslinger. Towards a Musician's Cockpit: Transducer, Feedback and Musical Function. In Proc. of the 1996 International Computer Music Conference, pages 308–311. San Francisco, Calif.: International Computer Music Association, 1996.
[94] M. Waisvisz. Trends in Gestural Control of Music, chapter Round Table. Ircam - Centre Pompidou, 2000.
[95] M. Waisvisz. The Hands, a Set of Remote MIDI-Controllers. In Proc. of the 1985 International Computer Music Conference, pages 313–318. San Francisco, Calif.: International Computer Music Association, 1985.
[96] M. M. Wanderley. Performer-Instrument Interaction: Applications to Gestural Control of Sound Synthesis. PhD thesis, Université Pierre et Marie Curie - Paris VI, 2001.
[97] M. M. Wanderley and M. Battier, editors. Trends in Gestural Control of Music. Ircam - Centre Pompidou, 2000.
[98] M. M. Wanderley and P. Depalle. Interfaces Homme-Machine et Création Musicale, chapter Contrôle Gestuel de la Synthèse Sonore, pages 145–163. Hermes Science Publications, 1999.
[99] M. M. Wanderley, P. Depalle, and O. Warusfel. Improving Instrumental Sound Synthesis by Modeling the Effects of Performer Gesture. In Proc. of the 1999 International Computer Music Conference, pages 418–421. San Francisco, Calif.: International Computer Music Association, 1999.
[100] M. M. Wanderley, N. Orio, and N. Schnell. Towards an Analysis of Interaction in Sound Generating Systems. In Proc. of the International Symposium on Electronic Arts - ISEA2000, Paris, France, 2000.
[101] M. M. Wanderley, N. Schnell, and J. Rovan. Escher - Modeling and Performing "Composed Instruments" in Real-time. In Proc. IEEE International Conference on Systems, Man and Cybernetics (SMC'98), 1998.
[102] M. M. Wanderley, J. P. Viollet, F. Isart, and X. Rodet. On the Choice of Transducer Technologies for Specific Musical Functions. In Proc. of the 2000 International Computer Music Conference, pages 244–247. San Francisco, Calif.: International Computer Music Association, 2000.
[103] D. Wessel. Timbre Space as a Musical Control Structure. Computer Music Journal, 3(2):45–52, 1979.
[104] D. Wessel and M. Wright. Problems and Prospects for Intimate Musical Control of Computers. In New Interfaces for Musical Expression - NIME Workshop, 2001.
[105] A. Willier and C. Marque. Juggling Gesture Analysis for Music Control. In IV Gesture Workshop, London, England, 2001.
[106] T. Winkler. Making Motion Musical: Gestural Mapping Strategies for Interactive Computer Music. In Proc. of the 1995 International Computer Music Conference, pages 261–264. San Francisco, Calif.: International Computer Music Association, 1995.
[107] S. Ystad. Sound Modeling using a Combination of Physical and Signal Models. PhD thesis, Université Aix-Marseille II, 1998.