
Embodied Sonic Meditation and Its Proof-of-Concept: “Resonance of the Heart”

Jiayue Cecilia Wu, UC Santa Barbara, [email protected]
Julius O. Smith, Stanford University, [email protected]
Yijun Zhou, Stanford University, [email protected]
Matthew James Wright, Stanford University, [email protected]

ABSTRACT

This paper presents the concept of Embodied Sonic Meditation (ESM). ESM sonically explores the theory of “Embodied Cognition,” which argues that we reflect on daily events and understand abstract concepts, such as the aesthetics of music and art, through our physical body. This concept is being introduced to undergraduate students at a U.S. university as part of an experimental music pedagogical methodology. The goal is to improve the students’ comprehension of the relationship between gestures and sounds. To practice this approach, we designed and realized a proof-of-concept audio-visual system named Resonance of the Heart (Chinese: 印心). This system uses an infrared sensing device and touchless hand gestures to control a real-time tracking system producing various sonic results. To track and estimate the subtle gestures of ten fingers that are not typically captured by any existing sensing device, we implemented supervised learning algorithms and an artificial neural network. Two novel electroacoustic vocal processing techniques, a Tibetan throat-singing filter and a spectral-tilt filter, were implemented for the first time to process vocals in real time based on the performer’s hand gestures with a one-to-one mapping strategy.

1. INTRODUCTION

1.1 Pauline Oliveros and “Deep Listening”

In 1974, Pauline Oliveros disrupted traditional Western music education by practicing an ancient Eastern philosophical concept – meditating through sound. In Buddhism this is called “experiencing sonic Vedanā” [4]. She adopted this approach and further developed it into an improvising, composing, and teaching practice. Through sonic meditations, Oliveros advocated a “Deep Listening” practice [12] that trains our ears and mind to consciously appreciate all sounds and increase sonic awareness, creating profound effects on music making and listening. Actual sound making in Oliveros’ sonic meditations was “primarily vocal, with sometimes hand clapping or other body sounds. Occasionally, sound-producing objects and instruments are used” [13]. Oliveros’ work in sonic meditation focused on the cognition of sound.

1.2 George Lakoff’s “Embodied Cognition”

On the other hand, George Lakoff et al.’s “Embodied Cognition” theory argues that high-level concepts such as time, space, the arts, and mathematics are grounded in sensorimotor experience – a view that our sensorimotor capacities, bodies, and environment are all central to shaping our mental processes [8]. In other words, we are not disembodied minds floating around; we are also made of flesh and bones – the fact that we have bodies strongly shapes our mind and cognition. We experience and learn about the world through motor-based exploration, within the limitations of our sensorimotor and perceptual systems. Therefore, if we want to give someone something to think about (a cognitive process), we should give her/him something related to her/his bodily activities instead of something abstract. This is what we mean by “embodiment.”

1.3 Gesture-Controlled Real-time Vocal Processing

Human-computer interaction design, input, mapping, and control strategies have been developed to enhance human vocal expression in real time, such as [1], [6], and [11]. Although the uses of body movement in composition and instrument design have been studied since the 1980s [2], few studies have been done in the context of electroacoustic vocal performance.

Vocal performance is unique among instrumental performances in several ways. First, to a vocalist, the body is the instrument: sound comes directly from the vocalist’s body; there is no other sound generator. Moreover, humans can naturally read body language and voices; this begins in infancy and is refined to an art by adulthood [3]. Thus, a vocalist’s body gestures produce more complex perceived cognitive meanings in terms of communicating emotions and expressing musicality to the audience, compared to other instrument players [14].

In a previous study [21], we proposed the first empirical evaluation methodology of a Digital Music Instrument (DMI) for augmenting electroacoustic vocal performance from the audience’s perspective. We found that the relationship between the performer’s body movement and vocal expression is crucial to the degree of perceived engagement from the audience’s perspective. However, more investigation is needed to further understand this profound vocal-gesture relationship in computer music.

Copyright: © 2017 Jiayue Cecilia Wu et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License 3.0 Unported, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

2017 ICMC/EMW

2. GOALS AND MOTIVATIONS

Inspired by Pauline Oliveros’ four decades of “Deep Listening” practice and George Lakoff’s “embodied cognition,” we propose Embodied Sonic Meditation: a sonic art practice based on the combination of sensing technology and human sensibility. Through this practice, we encourage people to fully understand and appreciate abstract electric and electroacoustic sounds and how these sounds are formed and transformed (a cognitive process), by providing interactive audio that tightly engages their bodily activities, letting them simultaneously create, sculpt, and morph the sonic outcomes themselves using their body motions (embodiment). This ongoing project aims to further explore gesture-controlled, vocal-processing DMI design strategies and experimental sound education.

In this preliminary research phase, we focus on electroacoustic vocal processing manipulations, particularly the mapping strategies relating motion input/control to the parameters of vocal processing, because of their significance and the lack of systematic studies in the field.

For teaching and proof-of-concept purposes, we developed an interactive audio-visual system named “Resonance of the Heart.” The name of the system is borrowed from “印心”, a “Kōan” story in Chinese Zen Buddhism which describes a Zen master’s and his disciple’s thoughts resonating without verbal communication. This system enables students to learn and explore the sound-gesture relationship using hand movement through sensing technology. We applied ancient Buddhist hand gestures named Mudras [7], which have the hands and fingers crossed or overlapped, as shown in Figure 1, to trigger corresponding sonic effects. Meanwhile, dynamic hand motions are also mapped to the audio/visual system to continuously control electroacoustic voice manipulations and a visualization of a 4-dimensional fractal [5]. The visual processing system is independent of the audio processing system, and the visual component is only used in live performance; we use only the audio system in our teaching practice in order to let the students focus on sonic awareness.

In spring 2017, ESM is being introduced to seventeen undergraduate students at the College of Creative Studies (https://www.ccs.ucsb.edu/) at the University of California, Santa Barbara as an experimental music pedagogical methodology, during a course named “Embodied Sonic Meditation – A Creative Sound Education.” The unifying theme of this course is an engagement with sonic awareness, self-exploration, and non-hierarchical social relationships of music creation and appreciation. Students practice ESM through listening, singing, music performance and improvisation, field recording, and interactive music controlled by motion capture. With the help of music technology, we aim to open up a safe, free, and non-judgmental space to touch, move, and inspire students to express their creative nature, embrace their inner selves, and genuinely connect with others, by enhancing their sonic awareness and their ability to listen, understand, and communicate through novel music expressions with embodied experience.

3. SYSTEM ARCHITECTURE

3.1 Tracking Device and Its Optimization Solutions

Because of its low price, light weight, and portability, as well as its lower latency and higher frame rate compared to other sensors [16] such as Microsoft’s Kinect™, we chose a Leap Motion™ infrared sensor as our non-attached tracking sensor to realize the gestural instrument for this project.

The sensing algorithms and visual processing are implemented in Python. We implemented supervised learning algorithms and an artificial neural network to estimate and track the subtle motions of ten fingers, which are not typically captured by existing sensing devices. Two historically intractable problems of the Leap Motion™ sensor are addressed: 1) the sensor cannot detect overlapping hands, and 2) the sensor’s detection range is spatially limited. Training examples included seven Tibetan Buddhist Mudras, which are overlapping hand gestures, as well as fifty hand trajectories in which a hand goes out of the sensor’s range and later returns. Hand gestures were treated as a classification problem, with 62% accuracy.

With our system optimization solutions, the hand trajectory is predicted when the tracking device loses its track. Instead of losing track and interrupting the smoothness of the gestural input and control, there are at least some data to be processed, improving the coherence and wholeness of the user experience. Our preliminary research shows the general potential of applying machine learning to creating robust DMIs from unreliable sensors.
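The paper gives no implementation details for this classifier beyond the description above, so the following is only a toy sketch of the classification framing: a nearest-centroid classifier over synthetic 30-dimensional “finger” feature vectors stands in for the authors’ supervised neural network, and the feature layout, class count, and all numbers here are our assumptions, not theirs.

```python
import math
import random

random.seed(0)

def synth_sample(label, dim=30, noise=0.05):
    """Fake feature vector (think: 10 fingertips x 3 coordinates),
    one noisy cluster per hypothetical Mudra label."""
    return [math.sin(label + i) + random.uniform(-noise, noise) for i in range(dim)]

def train(samples):
    """samples: list of (features, label). Returns the mean feature
    vector (centroid) of each label."""
    sums, counts = {}, {}
    for feats, lab in samples:
        counts[lab] = counts.get(lab, 0) + 1
        if lab in sums:
            sums[lab] = [s + f for s, f in zip(sums[lab], feats)]
        else:
            sums[lab] = list(feats)
    return {lab: [s / counts[lab] for s in sums[lab]] for lab in sums}

def classify(centroids, feats):
    """Label of the nearest centroid (squared Euclidean distance)."""
    return min(centroids,
               key=lambda lab: sum((c - f) ** 2 for c, f in zip(centroids[lab], feats)))

# Three hypothetical Mudra classes, twenty training examples each.
centroids = train([(synth_sample(lab), lab) for lab in range(3) for _ in range(20)])
predicted = classify(centroids, synth_sample(1))
```

A real system would replace `synth_sample` with live Leap Motion frames and the centroid model with the paper’s neural network; the reported 62% accuracy suggests how hard overlapping-hand poses are even for a trained model.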

Figure 1. Examples of input gestures and output labels.

3.2 Audio-Visual Processing

The user gives the system two inputs: vocals via a microphone, and hand gestures and motions via a Leap Motion™ sensor. The OSC protocol enables communication between the ChucK [19] software, which performs all the audio processing, and the Python software, which performs the sensing and visual processing. Recognition of one of the seven Mudras triggers the corresponding sound, while continuous hand motions control two novel vocal filters and four other sonic effects for real-time vocal processing, as well as the visualization of a 4-dimensional Buddhabrot’s trajectory deformation. Finally, the resulting sounds and graphics are amplified and projected into the performance space. Figure 2 shows the overall system architecture.
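OSC messages like the ones carrying sensor data from Python to ChucK have a simple binary layout – a null-padded address string, a null-padded type-tag string, then big-endian arguments – which can be shown by hand-encoding one with only the standard library. The address name below is an invented example, not one from the system’s actual OSC namespace.

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per OSC 1.0 encoding."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Encode an OSC message whose arguments are all float32 (big-endian)."""
    tags = "," + "f" * len(floats)               # e.g. ",f" for one float
    msg = osc_pad(address.encode()) + osc_pad(tags.encode())
    for x in floats:
        msg += struct.pack(">f", x)              # 32-bit big-endian float
    return msg

# A hypothetical control message: normalized palm height from the sensor.
packet = osc_message("/leap/palm_y", 0.42)
```

In practice one would send `packet` over UDP (e.g. with `socket.sendto`) to the port ChucK’s `OscIn` listens on; libraries such as python-osc do this encoding for you.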

Figure 2. General overview of the system.

4. GESTURAL INPUT, CONTROL, & MAPPING DESIGN

4.1 Design Principles

Since there is a broad range of possible two-hand gestures that can serve as data input, we decided to use a one-to-one mapping strategy on both the audio and visual layers to simplify the design process [8]. Because of the interconnectedness among the ten fingers’ movements in x, y, and z positions, as well as the limitations and interference due to palm orientations and positions as a whole, even this simple one-to-one mapping can produce a rich sonic result through hand movements – which is not always desirable in terms of increasing the system’s transparency to the audience. However, as a tool for teaching students to pay close attention and be aware of subtle sonic outcomes that are tightly synchronized with their hand movements, it is efficient and does a good job of enhancing the user’s sonic awareness.

We map the gestural input to the effects controls in order to “translate user’s actions into parameter values needed to drive the sound processing” [18]. Most of the gestural mappings are direct control, while the throat-singing case is an adaptive control that involves pitch tracking, equalization, and pitch shifting. To ensure continuous control, we implement interpolation and a one-pole filter to smooth the raw input data, and use our machine-learning prediction model to make up the missing data when the hands are overlapping or leave the tracking range.

4.2 Mapping Strategies

4.2.1 Mudra gesture recognition and mappings

Mapping the seven Mudra gestures to trigger seven different meditative sonic outputs was originally impossible for our project, until we added the machine-learning component for gesture recognition. When the classifier recognizes a particular Mudra, it sends the detected type to ChucK, which plays the corresponding contemplative sound clip. Thus, these Mudra inputs serve as low-level one-to-one triggers and have the same functionality as normal button inputs. However, from the artistic and user-experience perspectives, forming various complex hand gestures in order to trigger specific sound effects and visuals that are symbolized in a contemplative way makes this mapping strategy much more engaging than simply pushing a button. The embodied cognitive process makes this Mudras-to-sound process more meaningful, as “it inherently involves perception and action” and it “takes place in the context of task-relevant inputs and outputs” [20].

4.2.2 Right-hand dynamic gestural mappings

There are also six ways that dynamic hand motions are mapped to continuously manipulate the real-time vocal processing:

1) Three fingers’ vertical movement -> individually turns on (moving down) or off (moving up) three random frequencies in three perceivable ranges: low (60-200 Hz, controlled by the index finger); mid (200-1000 Hz, controlled by the middle finger); and high (1000-6000 Hz, controlled by the ring finger).

2) Palm’s vertical movement -> roll-off of a spectral-tilt filter, which is further discussed in Section 5.

3) Palm’s horizontal movement -> spatial panning: moving the right hand from left to right gradually pans the sound output from the left speaker to the right speaker, and vice versa.

4.2.3 Left-hand gestural mappings

1) Palm’s horizontal movement -> manipulates the frequencies of different harmonic partials based on the voice input’s pitch, via a second-order “peaking equalizer,” a.k.a. a “throat-singer’s formant emphasis filter.” We discuss this filtering technique further in Section 5.

2) When the thumb and index finger touch (a “pinch” gesture), vertical movement shifts the pitch of the simulated Tibetan throat singing – greater height gives a higher pitch, and vice versa. A pitch-recognition function from faust2ck (http://chuck.stanford.edu/extend/) called “PitchTrack” tracks the pitch of the input voice. Then “PitchShift” and several band-pass filters (“BPF”) from ChucK’s Unit Generators library shift and filter the original voice pitch harmonically to simulate the undertone harmonic partials. These harmonic partials sound together with the original voice to simulate the deep undertone vocal performance known as throat singing [10].

3) The horizontal distance between the two hands controls a delay effect’s parameter. The delay time is proportional to the distance; when the two hands are closest to each other, all effects are switched off.

5. VOCAL FILTERING

Two voice-filtering techniques are newly introduced in the “Resonance of the Heart” system for enhancing real-time vocal expression. They were implemented in the Faust (http://faust.grame.fr/) language and ported to ChucK as ChuGins using faust2ck; implementation details are available online at https://ccrma.stanford.edu/~jos/CeciliaROH/. The other delay, panning, and pitch-shifting effects, and the electric sounds, were implemented and generated directly in ChucK.

5.1 Throat-Singing Filter

The throat-singing filter is based on a peaking-equalizer section [9, 17] that is dynamically tuned to an integer multiple of the estimated fundamental frequency f0. A peaking-equalizer section provides a boost or cut in the vicinity of some center frequency.
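The system’s peaking section is built in Faust; purely as an illustration of the same idea, here is a standard audio-EQ-cookbook peaking biquad in Python, tuned to an assumed 6th harmonic of an assumed 110 Hz fundamental (all parameter values are ours, not the authors’). Boosting n·f0 by 12 dB makes a sine at that harmonic come out noticeably hotter while leaving the fundamental nearly untouched.

```python
import math

def peaking_coeffs(fs, fc, gain_db, q):
    """Biquad coefficients for a peaking EQ (audio-EQ-cookbook form):
    boost/cut of gain_db in the vicinity of center frequency fc."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * fc / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin]
    return [x / a[0] for x in b], [x / a[0] for x in a]  # normalize so a[0] = 1

def biquad(x, b, a):
    """Direct-form-I filtering of a list of samples."""
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for xn in x:
        yn = b[0] * xn + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        y.append(yn)
        x2, x1, y2, y1 = x1, xn, y1, yn
    return y

def rms(xs):
    return math.sqrt(sum(v * v for v in xs) / len(xs))

fs, f0, n = 44100, 110.0, 6          # assumed sample rate, fundamental, harmonic number
b, a = peaking_coeffs(fs, n * f0, gain_db=12.0, q=8.0)
tone = [math.sin(2 * math.pi * n * f0 * t / fs) for t in range(4096)]
boosted = biquad(tone, b, a)         # the chosen harmonic comes out about +12 dB
```

In the real system the center frequency n·f0 is re-tuned continuously from the tracked pitch and the left hand’s horizontal position selects n.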

This filter efficiently simulates the vocal technique of Tibetan throat singing, which emphasizes frequencies corresponding to particular harmonic overtones. When the left hand is at the far left of the sensor’s center, the filter emphasizes the 2nd harmonic partial; when the left hand nears the sensor’s center position, the filter emphasizes the 13th harmonic partial. Moving the left hand horizontally between these two positions generates the intervening harmonic partials. The input voice’s fundamental frequency is tracked in real time to give the filter immediate input. Figure 3 shows an overview of the throat-singing voice processing.

Figure 3. Block diagram of throat-singing voice processing.

5.2 Spectral-Tilt Filter

The spectral-tilt filter was first designed and realized as a novel audio equalizer that implements any spectral roll-off slope to an arbitrary degree of accuracy over any specified frequency band [15]. The slope can be safely modulated in real time because only the zeros need to be modulated while the poles remain fixed. Our system explores spectral-tilt filtering for real-time vocal processing.

The spectral-tilt filter is computed in closed form from exponentially distributed real pole-zero pairs, where the pole-zero spacing within each pair determines the spectral roll-off slope, and the distance between the pairs determines the accuracy. Over a log frequency axis, the pole-zero pairs are uniformly distributed. Arbitrary spectral slopes are obtained by sliding the exponential array of zeros relative to the exponential array of poles (again, uniform arrays over a log axis). Practical designs can arbitrarily approach an equal-ripple “Chebyshev” approximation by enlarging the pole-zero array’s frequency band beyond the desired frequency band. An illustrative design of a “1/f” equalizer is shown in Figure 4. Software implementations are available in MATLAB and Faust.

Figure 4. Spectral-tilt filter giving a 1/f characteristic (reprinted from [15]).

Vertical movement of the right palm controls the slope parameter from 2 to -2 to obtain a roll-off modulation vocal-processing effect. Figure 5 gives an overview of our spectral-tilt voice processor.

Figure 5. Block diagram of spectral-tilt voice processing made from N real poles and N real zeros. The in-line gain coefficients g(i) are optionally there for fixed-point scaling; at most one is needed in a floating-point implementation. Only the b1(i) coefficients are modulated to change the slope of the spectral tilt.

6. DISCUSSION AND FUTURE WORK

We have introduced the concept of Embodied Sonic Meditation practice and explored its potential for enhancing the experience of electroacoustic sound education, as well as for developing the theory of gestural input, control, and mapping strategies in electroacoustic vocal processing. As a proof of concept, we developed an interactive audio-visual system named “Resonance of the Heart.” This gesture-controlled vocal-processing DMI is being provided to seventeen undergraduate students in UC Santa Barbara’s College of Creative Studies for them to practice this approach. Gesture recognition and supervised learning algorithms are implemented to optimize the tracking device’s ability, predicting what the user’s hands might be doing while out of range or overlapping. Contemplative sound clips are triggered by a series of Tibetan Buddhist Mudras. The ten fingers’ subtle dynamic movements, coupled with the movements of the two hands, are one-to-one mapped to the filtering effects’ parameters to process the voice input in real time. Two novel vocal filters are designed and implemented for the first time to simulate Tibetan throat singing and enhance electroacoustic vocal expression.
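As a numerical check on the pole-zero construction of Section 5.2 (this sketch is ours, not the authors’ closed-form Faust/MATLAB code), a cascade of first-order sections with exponentially spaced poles, each zero placed a fixed fraction alpha of the pole spacing above its pole, yields an average roll-off of about -6.02·alpha dB per octave; alpha = 0.5 gives the ≈ -3 dB/octave “pinking” tilt of a 1/f power spectrum.

```python
import math

def tilt_response(f, f_lo=20.0, f_hi=20000.0, n=24, alpha=0.5):
    """|H(f)| of a cascade of n first-order analog sections. Poles are
    exponentially spaced over [f_lo, f_hi]; each zero sits a fraction
    alpha of the pole spacing above its pole, so the average slope is
    about -6.02 * alpha dB/octave inside the band. The band is chosen
    wider than the range of interest, as the text recommends."""
    r = (f_hi / f_lo) ** (1.0 / (n - 1))    # frequency ratio between adjacent poles
    h = 1.0 + 0.0j
    for i in range(n):
        fp = f_lo * r ** i                  # pole frequency
        fz = fp * r ** alpha                # paired zero, slightly above the pole
        h *= (1 + 1j * f / fz) / (1 + 1j * f / fp)
    return abs(h)

# Measured slope over the 500 Hz -> 1 kHz octave, expected near -3 dB:
slope_db = 20 * math.log10(tilt_response(1000.0) / tilt_response(500.0))
```

Sliding the zeros relative to the poles (changing alpha) changes the slope without moving the poles, which is why the slope can be modulated safely in real time.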
At the time of this writing, a detailed ESM pedagogical methodology is under development. Through teaching, we will be observing the students’ behaviors in class, asking for their feedback, and proposing a formal pedagogical methodology to refine this creative sound educational practice. A subsequent paper will analyze the effectiveness of ESM. We expect this practice can deepen the students’ understanding of different perspectives on sound as an art form, such as its social impact, hidden quiet, and subtlety. We also expect that new gestural input, control, and mapping design principles and strategies can be further explored through this practice. A multimedia electroacoustic vocal performance is also under construction, and we aim to demonstrate the ESM concept through the performing arts as well.

Acknowledgments

We would like to pay tribute to Pauline Oliveros and her inspiring practices in the arts, humanities, and self-exploration. Thanks to UCSB/CREATE, Curtis Roads, and Clarence Barlow, as well as Stanford/CCRMA and Chris Chafe, for their generous support of our project. Thanks to Romain Michon for the Faust-to-ChuGin updates. Thanks to the UCSB/MAT writing club. Thanks to Donghao Ren for his collaboration in realizing the system’s Buddhabrot and real-time fractal visualization. Leap Motion™ is a trademark of Leap Motion Inc.

7. REFERENCES

[1] Beller, Grégory. “The Synekine Project.” Proceedings of the 2014 International Workshop on Movement and Computing. ACM, 2014.

[2] Cadoz, Claude. “Instrumental gesture and musical composition.” ICMC 1988 – International Computer Music Conference. 1988.

[3] Darwin, Charles. The Expression of the Emotions in Man and Animals. Oxford University Press, 1998.

[4] De Silva, Padmasiri. “Theoretical perspectives on emotions in early Buddhism.” Marks and Ames (1995): 109-121.

[5] Green, Melinda. “The Buddhabrot Technique.” Superliminal Software (2012).

[6] Hewitt, Donna, and Ian Stevenson. “E-mic: extended mic-stand interface controller.” Proceedings of the 2003 Conference on New Interfaces for Musical Expression. National University of Singapore, 2003.

[7] Hirschi, Gertrud. Mudras: Yoga in Your Hands. Weiser Books, 2016.

[8] Hunt, Andy, Marcelo M. Wanderley, and Matthew Paradis. “The importance of parameter mapping in electronic instrument design.” Journal of New Music Research 32.4 (2003): 429-440.

[9] Smith, Julius O., III. Introduction to Digital Filters. https://ccrma.stanford.edu/~jos/filters/, Sept. 2007, online book.

[10] Lindestad, Per-Åke, et al. “Voice source characteristics in Mongolian ‘throat singing’ studied with high-speed imaging technique, acoustic spectra, and inverse filtering.” Journal of Voice 15.1 (2001): 78-85.

[11] Mitchell, Thomas J., Sebastian Madgwick, and Imogen Heap. “Musical interaction with hand posture and orientation: A toolbox of gestural control mechanisms.” (2012).

[12] Oliveros, Pauline. Deep Listening: A Composer’s Sound Practice. iUniverse, 2005.

[13] Osborne, William. “Sounding the Abyss of Otherness: Pauline Oliveros’s Deep Listening and the Sonic Meditations.” Women Making Art (2000): 65-86.

[14] Juslin, P. N. Music and Emotion, chapter “Communicating Emotion in Music Performance.” Oxford University Press, 2001.

[15] Smith, Julius, and Harrison Freeman Smith. “Closed Form Fractional Integration and Differentiation via Real Exponentially Spaced Pole-Zero Pairs.” arXiv preprint arXiv:1606.06154 (2016).

[16] Tormoen, Daniel, Florian Thalmann, and Guerino Mazzola. “The Composing Hand: Musical Creation with Leap Motion and the BigBang Rubette.” NIME, 2014.

[17] Zölzer, Udo. Digital Audio Signal Processing. New York: John Wiley and Sons, Inc., 1999.

[18] Verfaille, Vincent, Marcelo M. Wanderley, and Philippe Depalle. “Mapping strategies for gestural and adaptive control of digital audio effects.” Journal of New Music Research 35.1 (2006): 71-93.

[19] Wang, Ge. The ChucK Audio Programming Language: A Strongly-timed and On-the-fly Environ/mentality. Princeton University, 2008.

[20] Wilson, Margaret. “Six views of embodied cognition.” Psychonomic Bulletin & Review 9.4 (2002): 625-636.

[21] Wu, J. C., M. Huberth, Y. H. Yeh, and M. Wright. “Evaluating the Audience’s Perception of Real-time Gestural Control and Mapping Mechanisms in Electroacoustic Vocal Performance.”
