Musical Performance Practice on Sensor-based Instruments

Atau Tanaka
Faculty of Media Arts and Sciences
Chukyo University, Toyota-shi, Japan
[email protected]

Introduction

Performance has traditionally been a principal outlet for musical expression: it is the moment when music is communicated from musician to listener. This tradition has been challenged and augmented in this century. Recorded and broadcast media technologies have brought new modes of listening to music, while conceptual developments have given rise to extremes such as music not meant to be heard. Computer music forms part of these developments in new methods of creating and receiving music.

Computer music is at its origin a studio-based art, one that gave composers the possibility to create music unattainable by traditional performative means. Realizing music in the electronic and computer music studios freed the composer from the traditional channels of composition-interpretation-performance, allowing a new focus on sonic materials. Advances in computer processing speed have since brought the capability to realize this music in real time, in effect making it possible to take it out of the studio and put it on stage. This has introduced the problem of how to articulate computer-generated music in a concert setting.

Concurrently, developments in the fields of computer-human interface (CHI) and virtual reality (VR) have introduced tools to investigate new interaction modalities between user and computer (Laurel 1990). The intersection of these fields with computer music has given rise to gestural computer music instrument design. The creation of gestural and sensor-based musical instruments makes it possible to articulate computer-generated sound live, through performer intervention.