Interactive Electroacoustics

Submitted for the degree of

Doctor of Philosophy

by

Jon Robert Drummond B.Mus M.Sc (Hons)

June 2007

School of Communication Arts University of Western Sydney

Acknowledgements Page

I would like to thank my principal supervisor Dr Garth Paine for his direction, support and patience through this journey. I would also like to thank my associate supervisors Ian Stevenson and Sarah Waterson.

I would also like to thank Dr Greg Schiemer and Richard Vella for their ongoing counsel and faith.

Finally, I would like to thank my family, my beautiful partner Emma Milne and my two beautiful daughters Amelia Milne and Demeter Milne for all their support and encouragement.

Statement of Authentication

The work presented in this thesis is, to the best of my knowledge and belief, original except as acknowledged in the text. I hereby declare that I have not submitted this material, either in full or in part, for a degree at this or any other institution.

……………………………………………

Table of Contents

TABLE OF CONTENTS
LIST OF TABLES
LIST OF FIGURES AND ILLUSTRATIONS
ABSTRACT

CHAPTER ONE: INTRODUCTION
1.1 POTENTIALS AND POSSIBILITIES
1.2 HAVE WE REALISED THE POTENTIALS OF INTERACTIVE SYSTEMS?
1.3 STRUCTURE OF THE THESIS
1.4 TECHNOLOGY CONVERGENCE
1.5 DESCRIBING INTERACTIVE SYSTEMS
1.6 CONTEXT
1.6.1 Interactive Sound Installations
1.6.1.1 Myron Krueger and Responsive Environments
1.6.1.2 Jeffrey Shaw–Points of View
1.6.1.3 David Rokeby–Very Nervous System
1.6.1.4 Rod Berry–Models of Artificial Life and Listening Sky
1.6.1.5 Sarah Waterson and Kate Richards–Data Mapping and sub_scapeBALTIC
1.6.1.6 George Khut–Bio-sensing and Cardiomorphologies
1.6.1.7 Garth Paine–Interacting with the Environment
1.6.1.8 Felix Hess–Emergent Behaviour and Moving Sound Creatures
1.6.2 Performing with Interactive Systems and Designing New Instruments
1.6.2.1 Remote Sensing Interfaces
1.6.2.1.1 Joel Chadabe–Solo
1.6.2.1.2 Rolf Gehlhaar–SOUND=SPACE
1.6.2.1.3 Leonello Tarabella–Exploded Instruments
1.6.2.1.4 Jon Drummond–Spiral and Sheet
1.6.2.2 Physical Interfaces, Designing New Instruments
1.6.2.2.1 Michel Waisvisz–The Hands
1.6.2.2.2 Hyperinstruments
1.6.2.3 Interactive System as Autonomous Agent
1.6.2.3.1 Musical Accompaniment: Risset’s Duet for One Pianist
1.6.2.3.2 Score Following: Cort Lippe’s Music for Clarinet and ISPW
1.6.2.3.3 Interactive System as an Independent Performer: Lewis’ Voyager
1.7 SUMMARY

CHAPTER TWO: OVERVIEW OF INTERACTIVE ELECTROACOUSTIC ART PRACTICE PART 1
2.1 LIVE-ELECTRONIC MUSIC BEGINNINGS–1950S AND 1960S
2.1.1 John Cage
2.1.1.1 Imaginary Landscape No. 4
2.1.1.2 Cartridge Music
2.1.1.3 Variations V
2.1.2 Karlheinz Stockhausen–Mikrophonie I
2.1.3 David Tudor–Rainforest IV
2.1.4 Alvin Lucier–Music for Solo Performer
2.1.5 Gordon Mumma–Hornpipe
2.2 PROGRAMMABLE INTERACTIVE –1970S
2.2.1 Salvatore Martirano–SalMar Construction
2.2.2 Joel Chadabe
2.2.2.1 CEMS System
2.2.2.2 Solo
2.2.3 Mathews–GROOVE and the Conductor Programme
2.3 FIRST COMMERCIAL INTERACTIVE MUSIC SOFTWARE–1980S
2.3.1 Laurie Spiegel–Music Mouse
2.3.2 M and Jam Factory
2.4 NETWORKED ENSEMBLES
2.4.1 The League of Automatic Music Composers
2.4.2 The Hub
2.4.3 austraLYSIS electroband
2.4.4 HyperSense Complex

CHAPTER THREE: OVERVIEW OF INTERACTIVE ELECTROACOUSTIC ART PRACTICE PART 2
3.1 INTERFACES–MAPPING GESTURE TO SOUND
3.1.1 Michel Waisvisz–The Hands
3.1.2 Laetitia Sonami–Lady’s Glove
3.1.3 Donna Hewitt–E-mic
3.1.4 Sensorband–Soundnet
3.1.5 Greg Schiemer–Spectral Dance
3.2 INSTALLATION AND AUDIENCE INTERACTION
3.2.1 Gibson and Richards’ The Bystander Field
3.3 COLLABORATIVE INTERACTIVE VIRTUAL ENVIRONMENTS
3.3.1 Sergi Jordà–reacTable
3.3.2 James Patten and Ben Recht–Audiopad
3.3.3 Toshio Iwai–Composition on the Table, Electroplankton and Tenori-on
3.3.4 Tina Blaine–Jam-O-Drum
3.4 MACHINE LISTENING–HUMAN AND COMPUTER INTERACTIVE PERFORMANCES
3.4.1 George Lewis–Voyager
3.4.2 Robert Rowe–Cypher
3.4.3 Gil Weinberg and Scott Driscoll–Haile
3.5 SUMMARY
3.5.1 Instrument Building, Composing or Performing
3.5.2 Shared Control
3.5.3 Collaboration
3.5.4 Interactive Conversations

CHAPTER FOUR: DEFINITIONS, CLASSIFICATION AND MODELS
4.1 INTRODUCTION
4.2 DEFINITIONS
4.2.1 Joel Chadabe–Interactive Composing
4.2.2 Robert Rowe–Interactive Music Systems
4.2.3 Todd Winkler–Composing Interactive Music
4.3 CLASSIFICATIONS AND MODELS
4.3.1 Empirical Classifications
4.3.2 Rowe’s Classification Dimensions
4.3.2.1 Score-Driven v. Performance-Driven
4.3.2.2 Response Type
4.3.2.3 Instrument v. Player
4.3.3 Winkler–Models based on Acoustic Instrument Ensembles
4.3.4 Spiegel’s Multi-dimensional Model
4.3.5 System Responsiveness
4.3.6 Other Metaphors
4.4 SYSTEM ANATOMY
4.4.1 Rowe’s Three Stage System Model
4.4.2 Winkler’s Five Stage System Model
4.4.3 Bongers–Control and Feedback
4.4.4 Mapping
4.4.5 Separating the Interface from the Sound Generator
4.4.6 Gesture

CHAPTER FIVE: FOLIO OF CREATIVE WORKS
5.1 BOOK OF CHANGES
5.1.1 Overview
5.1.2 System Architecture
5.1.2.1 Markov Chains
5.1.2.2 Duration Choice
5.1.2.3 Ending a Performance
5.1.3 Interacting with the System
5.1.4 Interaction Design
5.2 PLUS MINUS (+-)
5.2.1 Overview
5.2.2 System Architecture
5.2.2.1 Nineteen-Tone Equal Temperament
5.2.2.2 Interface
5.2.2.3 Automated Panning Function
5.2.2.4 Performance Options
5.2.3 Interacting with the System
5.2.4 Interaction Design
5.3 SONIC CONSTRUCTION
5.3.1 Overview
5.3.2 System Architecture
5.3.2.1 Video Analysis
5.3.2.2 FOF Synthesis
5.3.2.3 Spatialisation
5.3.3 Interacting with the System
5.3.4 Interaction Design
5.4 SOUNDING THE WINDS
5.4.1 Overview
5.4.1.1 Other Wind Played Instruments
5.4.2 System Architecture
5.4.2.1 Kite and Sensors
5.4.2.2 Virtual String with RMI/Modalys
5.4.2.3 Mappings
5.4.3 Interacting with the System
5.4.4 Interaction Design
5.5 SIX DEGREES OF TENSION
5.5.1 Overview
5.5.2 System Architecture
5.5.2.1 Signal Flow
5.5.2.2 Effects Processors
5.5.2.3 Neural Network Interface
5.5.3 Interacting with the System
5.5.4 Interaction Design

CHAPTER SIX: CONCLUSIONS
6.1 OVERVIEW
6.2 DEFINITION OF AN INTERACTIVE SYSTEM
6.3 INTERACTIVE SYSTEM MODELS
6.4 CLASSIFICATION
6.5 CHALLENGES

BIBLIOGRAPHY
DISCOGRAPHY
APPENDIX A: DESCRIPTION OF THE COMPANION CDS

List of Tables

TABLE 4.1 MULDER’S LIST OF WORDS DESCRIBING HAND GESTURES
TABLE 5.1 BOOK OF CHANGES–THE GRAPHIC ICONS ASSOCIATED WITH THE EIGHT SCORE FRAGMENTS. THE BLANK ICON INDICATES A SILENT SECTION.
TABLE 5.2 TABLE OF NINETEEN-TONE EQUAL TEMPER INTERVALLIC RELATIONSHIPS COMPARED TO EQUIVALENT JUST INTONATION INTERVALS

List of Figures and Illustrations

FIGURE 1.1 VIDEO STILL FROM KRUEGER’S VIDEOPLACE
FIGURE 1.2 JOYSTICK INTERFACES FOR JEFFREY SHAW’S POINTS OF VIEW
FIGURE 1.3 BERRY’S LISTENING SKY INTERFACE
FIGURE 1.4 PERISCOPE INTERFACE FROM WATERSON AND RICHARDS’ SUB_SCAPEBALTIC
FIGURE 1.5 PAINE’S REEDS FLOATING REED POD
FIGURE 1.6 PAINE’S REEDS CROSS SECTION DRAWING
FIGURE 1.7 HESS’ MOVING SOUND CREATURES
FIGURE 1.8 VIDEO ANALYSIS IN TARABELLA’S HANDEL SYSTEM
FIGURE 1.9 JON DRUMMOND’S SPIRAL AND SHEET
FIGURE 1.10 WAISVISZ’S THE HANDS IN 2005
FIGURE 1.11 TIMELINE OF WORKS DISCUSSED IN CHAPTER ONE
FIGURE 2.1 IMAGINARY LANDSCAPE NO. 4 P.19
FIGURE 2.2 CARTRIDGE MUSIC–SHEET SIX OF TWENTY WITH THE FOUR TRANSPARENCIES OVERLAID (POINTS, CIRCLES, MARKED CIRCLE, DOTTED LINE)
FIGURE 2.3 VARIATIONS V
FIGURE 2.4 MIKROPHONIE I EXCERPT FROM SCORE
FIGURE 2.5 MIKROPHONIE I IN PERFORMANCE
FIGURE 2.6 FIRST PUBLIC PERFORMANCE OF RAINFOREST IV, BUFFALO STATE COLLEGE, NEW YORK, MAY 1974. PHOTOGRAPHY JOHN DRISCOLL
FIGURE 2.7 GENERALIZED CIRCUITRY DIAGRAM FOR RAINFOREST IV 1973
FIGURE 2.8 ALVIN LUCIER–MUSIC FOR SOLO PERFORMER
FIGURE 2.9 MUMMA AND CYBERSONIC HORN FROM A LIVE-PERFORMANCE OF HORNPIPE AT THE METROPOLITAN MUSEUM OF ART, NEW YORK CITY. FEBRUARY 19, 1972
FIGURE 2.10 SALVATORE MARTIRANO AT THE SALMAR CONSTRUCTION IN THE MID-1970S
FIGURE 2.11 CHADABE WORKING AT THE CEMS SYSTEM IN THE ELECTRONIC MUSIC STUDIO AT STATE UNIVERSITY OF NEW YORK AT ALBANY IN 1970
FIGURE 2.12 CHADABE PERFORMING SOLO AT NEW MUSIC NEW YORK IN 1979
FIGURE 2.13 MUSIC MOUSE INTERFACE SHOWING THE MOUSE CO-ORDINATES
FIGURE 2.14 M’S MAIN SCREEN
FIGURE 2.15 THE LEAGUE OF AUTOMATIC MUSIC COMPOSERS (PERKIS, HORTON, AND BISCHOFF, LEFT TO RIGHT) PERFORMING AT FT. MASON, SAN FRANCISCO 1981
FIGURE 2.16 FLYER FROM THE LEAGUE OF AUTOMATIC MUSIC COMPOSERS’ BLIND LEMON CONCERT NOVEMBER 26, 1978
FIGURE 2.17 FLYER FROM THE LEAGUE OF AUTOMATIC MUSIC COMPOSERS’ CONCERT MARCH 28, 1980
FIGURE 2.18 AUSTRALYSIS ELECTROBAND OSTINATO MAX/MSP PATCH
FIGURE 2.19 HYPERSENSE COMPLEX FLEX SENSORS
FIGURE 2.20 HYPERSENSE COMPLEX CUSTOM MADE CLOTHING
FIGURE 2.21 HYPERSENSE COMPLEX SIGNAL FLOWS
FIGURE 3.1 THE HANDS II
FIGURE 3.2 THE HANDS II–CLOSE UP OF THE GLASS MERCURY TILT SWITCHES
FIGURE 3.3 E-MIC CLOSE-UP OF JOYSTICK AND PRESSURE SENSORS
FIGURE 3.4 E-MIC IN PERFORMANCE
FIGURE 3.5 E-MIC SENSORS AND EXAMPLE MAPPINGS
FIGURE 3.6 SPECTRAL DANCE SIGNAL FLOW
FIGURE 3.7 SPECTRAL DANCE IN PERFORMANCE
FIGURE 3.8 SPECTRAL DANCE–CLOSE-UP OF UFO
FIGURE 3.9 THE BYSTANDER FIELD–MODEL OF THE PHYSICAL INSTALLATION
FIGURE 3.10 THE BYSTANDER FIELD EXAMPLE SCREEN CAPTURE
FIGURE 3.11 REACTABLE
FIGURE 3.12 AUDIOPAD’S TABLETOP INTERFACE
FIGURE 3.13 ELECTROPLANKTON–THE SPLIT INTERFACE TAKES ADVANTAGE OF THE NINTENDO DS DUAL SCREEN FORMAT
FIGURE 3.14 TENORI-ON
FIGURE 3.15 JAM-O-DRUM
FIGURE 3.16 VOYAGER–OUTLINE OF INTERNAL STRUCTURE
FIGURE 3.17 VOYAGER SYSTEM AS TWO PARALLEL STREAMS
FIGURE 3.18 HAILE IN PERFORMANCE
FIGURE 3.19 TIMELINE OF WORKS DISCUSSED IN CHAPTER TWO
FIGURE 3.20 TIMELINE OF WORKS DISCUSSED IN CHAPTER THREE
FIGURE 4.1 MODEL OF A SCORE FOLLOWING SYSTEM
FIGURE 4.2 ROWE’S THREE STAGE SYSTEM MODEL
FIGURE 4.3 WINKLER’S FIVE STAGE SYSTEM MODEL COMPARED TO ROWE’S THREE STAGE MODEL
FIGURE 4.4 SOLO PERFORMER AND INTERACTIVE SYSTEM–CONTROL AND FEEDBACK
FIGURE 4.5 MAPPING IN THE CONTEXT OF A DIGITAL MUSICAL INSTRUMENT
FIGURE 4.6 EXAMPLES OF DIFFERENT MAPPING STRATEGIES FOR A REED INSTRUMENT
FIGURE 5.1 EXAMPLE SCORE FRAGMENT FROM BOOK OF CHANGES
FIGURE 5.2 BOOK OF CHANGES MAIN PROJECTION SCREEN–TWO ICONS ARE DISPLAYED REFERENCING TWO SCORE FRAGMENTS FOR THE PIANO AND VIOLIN PARTS. SLIDERS ON THE LEFT AND RIGHT DISPLAY THE TIME REMAINING FOR THE CURRENTLY SELECTED FRAGMENTS
FIGURE 5.3 BOOK OF CHANGES–EXAMPLE OF AN INDIVIDUAL PROBABILITY FUNCTION FOR EACH STEP
FIGURE 5.4 BOOK OF CHANGES–THE EIGHT PROBABILITY CHOICES FOR EACH SCORE FRAGMENT
FIGURE 5.5 CUSTOMISED KEYBOARD INTERFACE FOR PLUS MINUS
FIGURE 5.6 PLUS MINUS–INTERFACE FOR PAN AND ENVELOPE CONTROLLERS ASSIGNED TO EACH VOICE
FIGURE 5.7 PERFORMANCE INTERFACE FOR SONIC CONSTRUCTION
FIGURE 5.8 SONIC CONSTRUCTION–RE-PROJECTED IMAGE INTO THE INSTALLATION SPACE
FIGURE 5.9 SONIC CONSTRUCTION–SIGNAL FLOW
FIGURE 5.10 SONIC CONSTRUCTION–PARAMETERS USED TO CONTROL THE FOF SYNTHESIS
FIGURE 5.11 SONIC CONSTRUCTION–CONTROL DATA ENVELOPES FOR MAPPING MOVEMENT TO FOF SYNTHESIS PARAMETERS
FIGURE 5.12 SOUNDING THE WINDS–ELECTROFRINGE 2005 PERFORMANCE KING EDWARD PARK, NEWCASTLE
FIGURE 5.13 PICAVET DETAIL
FIGURE 5.14 SOUNDING THE WINDS (REHEARSAL)–KITE AND GROUND BASED LAPTOP RECEIVING OSC DATA VIA BLUETOOTH
FIGURE 5.15 SOUNDING THE WINDS SIGNAL FLOW
FIGURE 5.16 SIX DEGREES OF TENSION–SYSTEM CONFIGURATION
FIGURE 5.17 SIX DEGREES OF TENSION–SIGNAL FLOW
FIGURE 5.18 SIX DEGREES OF TENSION–MIXER INTERFACE
FIGURE 5.19 SIX DEGREES OF TENSION–THE MUNGER~ PATCH
FIGURE 5.20 SIX DEGREES OF TENSION–NEURAL NETWORK INTERFACE

Abstract

Creating and performing with interactive systems is now a well-established paradigm. Sensing technology can map gestures to sound generating processes, capturing the nuances of a gesture and sculpting the sound accordingly. Interactive installations enable audiences to become part of the process of realising a creative work. Yet many of the models and frameworks for interactive systems, specifically music-focused systems, are strongly oriented around a MIDI event-based framework, with little or no provision to accommodate the potentials of more dynamic approaches to creative practice. This research seeks to address the lack of appropriate models currently available and to come to a more contemporary understanding of interactive music making.

My approach follows two trajectories. Firstly, I undertake a comprehensive review of interactive creative works, encompassing the live electronic music of the 1950s and 1960s, interactive installation, digital musical instruments and computer networked ensembles. Secondly, I explore and draw together proposed definitions, models and classifications of interactive systems, clarifying concepts such as mapping, processing, gesture and response. The concepts are tested in a folio of creative works that form the creative research.


Chapter One: Introduction

1.1 Potentials and Possibilities

Creating and exploring rich electroacoustic sound environments through intuitive, organic and engaging interactive experiences continues to inspire and challenge audiences and artists alike. Such systems present many creative possibilities; however, I propose that we are yet to fully realise their creative potentials.

Interactive sound installations can empower participants to perform and create their own unique experience of a work, exploring and discovering the sonic potentials contained within, as opposed to a more passive experience of engaging with a pre-rendered work1. Interactive performance can lead to the creation of new and unexpected sonic outcomes, produced as a consequence of the iterative feedback loop between human performer and interactive computer system.

Interactive systems have the potential to facilitate a liquid and flexible approach to creating sonic temporal structures and topographies, while still maintaining the integrity and overall identity of an individual work. Just as a sculpture can change appearance with different perspectives and lighting conditions, yet a sense of its unique identity is still maintained, i.e. that it is the same artwork, so too an interactive sound installation or performance may well sound different with subsequent experiences of the work, but still be recognisable as the same piece.

Interactive systems can allow algorithms to be used compositionally through dynamic interaction, not only with respect to event-based parameters such as pitch and duration, but also with respect to sound synthesis and processing techniques, mediating and interpreting rich gestural input and defining formal structural relationships. By defining synthesis parameters and compositional processes algorithmically, a performer interacting with the system can concentrate on the manipulation of a specific set of musical parameters. For example, a performer’s gestures could be translated to control aspects of timbre and spatialisation, creating a sense of sculpting a work as much as performing or interacting with it. Furthermore, interactive systems can potentially provide a way for performers to engage with technology more intuitively, with the system’s responses being less machine-like and predictable and instead more organic, serendipitous and independent.

1 Although even experiencing a static, fixed sound environment still implies some degree of interaction between artist, audience and the creative work.

1.2 Have we realised the potentials of interactive systems?

The concept of interaction with respect to new media and electronic arts practice is certainly well established. However, it has been proposed that the majority of interactive art works are in fact reactive, rather than interactive (Bongers 2000; Paine 2002b). While not wanting to suggest that the degree of interaction in a system reflects in any way some measure of the quality of a creative work, the liberal application of the term ‘interactive’ does nothing to further our understanding of how such systems function or of their potentials for future development. Furthermore, there is considerable divergence in the proposed models, classifications and analyses of interactive sound generating systems. Invariably such models focus on the concept of mapping, defining a system in terms of the way inputs are routed to outputs, overlooking the equally important and interrelated role of processing.

Interactive performance systems are often described as instruments (Bongers 2000; Jordà 2005; Tanaka 2000; Waisvisz 1985), with the process of designing and creating an interactive system analogous to creating a new musical instrument. Such comparisons have the benefit of drawing on the expertise and historical credibility of the long tradition of acoustic instrument building. In this way an instrument can be thought of as a kind of gesture sonification device. However, the comparison fails to encompass the composition-like qualities of interactive performance systems: the potential to encode in such systems complex and sophisticated compositional structures and relationships, to be discovered and engaged with through the process of interaction. Likewise, describing interactive systems with a focus on musical performance only, i.e. as an ‘interactive music system’ (Rowe 1993; Winkler 1998), fails to encompass the potential for gestural interaction and the control of timbral aspects of sound. Creating and performing with interactive systems typically incorporates elements of both composition and improvisation. Some have suggested the term ‘comprovisation’ (Dean 2003) to describe this blurring of composition and improvisation; however, there is little consensus on the specific meaning and use of the term.

1.3 Structure of the Thesis

Firstly (Chapter One), interactive systems are placed in context through a discussion of their use with respect to both installation and musical performance. Chapters Two and Three then go on to discuss selected interactive works in detail.

Chapter Two begins with the live electronic music of the 1950s and 1960s discussing works by Cage, Stockhausen, Tudor, Lucier and Mumma. The first programmable instruments and interactive music software applications are then discussed with specific reference to works by Martirano, Chadabe, Zicarelli, Spiegel and Mathews. The chapter concludes with a discussion of interaction in the context of computer-networked performance ensembles, with the work of the League of Automatic Music Composers, the Hub, austraLYSIS electroband and HyperSense Complex.

Chapter Three continues this detailed look at examples of interactive art practice beginning with new instrument designs intended for use in performance contexts, discussing works by Waisvisz, Schiemer, Hewitt and the trio Sensorband. Interactive sound installations are discussed, looking specifically at a recent collaborative project by Gibson and Richards. Interactive instruments designed for performance by non-musicians are discussed in the context of works by Jordà, Patten and Recht, Iwai and Blaine. The overview concludes by discussing interactive systems intended to behave like an independent, autonomous performer presenting examples by Lewis, Rowe and Weinberg.


Chapter Four, contrasting the discussion of creative works in the previous two chapters, examines the differing approaches that have been taken in attempting to define, classify and model interactive music systems. Specific reference is made to the writings of Rowe (1993), Winkler (1998), Bongers (2000), Chadabe (1997) and Spiegel (1992). A general model of interactive systems is then presented with specific reference to system architecture, control and feedback, mapping, gestural input and processing.

Chapter Five presents the folio of creative works that form the creative research informed by the concepts and examples outlined in this thesis. Five works are discussed—Book of Changes, Plus Minus, Sonic Construction, Sounding the Winds and Six Degrees of Tension.

Chapter Six presents the conclusion and summary. An Appendix lists the contents of the supporting Audio CD and Data CD–ROM.

1.4 Technology Convergence

As I will demonstrate in the following two chapters there is considerable and growing interest in creating interactive systems. Autonomous musical robots, interactive musical toys, tabletop interfaces, wearable interfaces and interactive games for creating sound are just some examples of the systems being developed. Supporting this are a number of key developments in both software and hardware that continue to evolve.

Real-time patching and scripting software languages for sound synthesis and data processing such as Max/MSP2, Pd3, SuperCollider4, AudioMulch5, ArtWonk6, AC Toolbox7 and Kyma X8 are now well established and generally have extensive user communities and code libraries that can be readily extended for custom applications9. Computing processor speeds continue to increase, making it possible to run more sophisticated audio processing in real-time. The range of hardware sensors and microprocessors available continues to expand, with cost-effective solutions supporting reasonable resolutions readily accessible, for example ultrasonic, infrared, laser, GPS, flex, tension, gyroscope, accelerometer, hall effect and compass. Allowing system components to communicate with each other are network communication protocols such as OpenSound Control (OSC)10, which complement the well-established and older (1983) MIDI11 messaging protocol for musical instruments.

2 Miller Puckette and David Zicarelli http://www.cycling74.com/products/maxmsp viewed 1/5/2007. 3 Miller Puckette, http://crca.ucsd.edu/~msp/software.html viewed 1/5/2007. 4 James McCartney http://www.audiosynth.com/ viewed 1/5/2007. 5 Ross Bencina http://www.audiomulch.com/ viewed 1/5/2007. 6 John Dunn http://www.algoart.com/artwonk.htm viewed 1/5/2007. 7 Paul Berg http://www.koncon.nl/ACToolbox/act.html viewed 1/5/2007. 8 http://www.symbolicsound.com/cgi-bin/bin/view/Products/WebHome viewed 1/5/2007.
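By way of a concrete illustration of the difference between the two protocols, the following short sketch (in Python, using only the standard library) assembles and sends a single OSC message over UDP. The address pattern /filter/cutoff, the value and the destination port are hypothetical placeholders chosen for the example (57120 is commonly used by SuperCollider’s language client); a comparable MIDI continuous controller message would carry only a 7-bit value, whereas OSC transmits a symbolic address and a 32-bit float.

# Illustrative sketch: build and send one OSC message over UDP.
# Address, value and port are assumptions made for this example.
import socket
import struct

def osc_string(s):
    b = s.encode("ascii") + b"\x00"               # OSC strings are null-terminated
    return b + b"\x00" * ((4 - len(b) % 4) % 4)   # and padded to a 4-byte boundary

def osc_message(address, value):
    # address pattern, type tag string ",f", then one big-endian 32-bit float
    return osc_string(address) + osc_string(",f") + struct.pack(">f", value)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(osc_message("/filter/cutoff", 0.73), ("127.0.0.1", 57120))

The symbolic, self-describing address space is what allows OSC messages to be routed flexibly between the kinds of system components listed above.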

1.5 Describing Interactive Systems

There is considerable variation in the way interactive systems are described. Interactive music systems, interactive instruments, interactive compositions, interactive installations; responsive, hyper, enactive, reactive and generative are just some of the ways interactive systems have been depicted. As we shall see in the following chapters many of these words and phrases can have specific meanings assigned to them, however, they are just as likely to be used freely and arbitrarily, especially in the wider context of the electronic arts. I have chosen to title the thesis Interactive Electroacoustics to try to encompass as wide an interpretation of the genre as possible and to be inclusive of my own artistic practice. By using the term Interactive Electroacoustics my intention is to accommodate an interactive practice where the input to a system could be musical, gestural or environmental and the sonic output could likewise be described variously as music, sound art, noise, spectral, acousmatic or even silence. In the thesis, where a term or phrase does have a specific meaning, for example Rowe’s ‘interactive music systems’, I likewise use the specific description in context.

9 e.g. http://www.maxobjects.com/ viewed 1/5/2007. 10 Matt Wright http://www.cnmat.berkeley.edu/OpenSoundControl/ viewed 1/5/2007. 11 Musical Instrument Digital Interface http://www.midi.org/ viewed 1/5/2007.


For visual clarity I have italicised the titles of works referenced in the thesis. In some cases the distinction between a work and an instrument or software system is not easily apparent. For example Lewis’ Voyager and Waisvisz’s The Hands are in both cases the names of interactive systems; however, these titles are also associated with performances and recordings made by the artists playing their respective systems. In this and similar situations, I have chosen to also italicise the name.

1.6 Context

1.6.1 Interactive Sound Installations

Interactive sound installations sense the actions of one or more participants in a space to generate input values for creating and controlling sound (Bongers 2000; Wanderley and Orio 2002). In the wider context of new media and electronic arts, interactive installations also typically incorporate other mixed media elements including video, sculptural objects and mechanical or robotic devices. The use of the term interactive is now ubiquitous throughout new media and electronic art practice—a fact that forces a more pedantic application of the term within this thesis.

1.6.1.1 Myron Krueger and Responsive Environments

Myron Krueger’s responsive environments are some of the first examples of interactive installation. Influenced by John Cage’s experiments in indeterminacy and audience participation, Krueger considered the “artist as a ‘composer’ of intelligent, real-time computer-mediated spaces” (Packer and Jordan 2002:105). His first interactive space, Glowflow, was conceived in 196912. It consisted of a darkened empty room with floor-mounted pressure sensors, loudspeakers projecting sounds generated by a Moog synthesiser13 into the space, and six transparent tubes containing phosphorescent particles in water, attached to the walls of the space, each with a different coloured pigment. Participants walking in the space triggered the floor sensors, which in turn lit up the different coloured tubes and changed the sounds being generated by the synthesiser and their spatial location. However, Krueger considered the result more a kinetic sculpture than his own ideal of a responsive environment (Packer and Jordan 2002:106).

12 Glowflow was developed in co-operation with Dan Sandin, Jerry Erdman and Richard Veneszky. 13 http://emfinstitute.emf.org/exhibits/moogsynth.html viewed 1/5/2007.

Videoplace is a work that Krueger has been developing since the mid 1970s. The work consists of two or more separated, darkened spaces, each set-up with video cameras. Inside, participants see their own image captured by the video camera, projected and interpreted as a computer-generated silhouette, together with those of other participants situated in remote locations (Dinkla 1994, 1997; Packer and Jordan 2002). Participants can ‘touch’ each other’s video silhouette and manipulate and interact with various computer generated graphical objects (Figure 1.1). Participants’ movements can also be tracked by the system, generating responsive graphics, video transformations and sound. Videoplace was one of the first examples of augmented reality and a precursor to developments in telepresence and telematic art. Krueger continues to develop the work.

Figure 1.1 Video still from Krueger’s Videoplace14

1.6.1.2 Jeffrey Shaw–Points of View

Another early example of interactive art involving audience participation is Jeffrey Shaw’s Points of View, developed in 1983. Using physical interfaces rather than video tracking, Points of View was a performed interactive installation in which a selected audience member controlled or directed the interaction using two custom-made joysticks (Figure 1.2)15, 16. The result was rendered as a three-dimensional computer graphic simulation and projected onto a large screen, with the rest of the audience watching the performance as spectators (Dinkla 1994). The imagery was derived from ancient Egyptian hieroglyphics. Sonically, sixteen sound tracks of mostly spoken text were mixed by the joystick interfaces, together with the computer graphics.

14 From http://www.medienkunstnetz.de/works/videoplace/images/2 viewed 1/5/2007.

Figure 1.2 Joystick interfaces for Jeffrey Shaw’s Points of View17

1.6.1.3 David Rokeby–Very Nervous System

Also dating from the early 1980s are David Rokeby’s first interactive sound installations with Reflexions (1983) and Body Language (1984–86) leading to the development of his Very Nervous System (1986–90)18. Using a configuration of video cameras, image processors, computers and synthesisers, the Very Nervous System tracked a person’s movements in the installation space to create sound. Although mostly presented in the context of indoor gallery installation, Very Nervous System has also been installed in public outdoor spaces and used in live performance contexts. Rokeby’s interest is in the holistic perception of the system, the relationship between the participant and the system that unfolds through the process of interaction, rather than perceiving the system as under the control of or being performed by the participant. Rokeby writes— The installation could be described as a sort of instrument that you play with your body but that implies a level of control which I am not particularly interested in. I am interested in creating a complex and resonant relationship between the interactor and the system19.

15 http://www.jeffrey-shaw.net/html_main/frameset-works.php3 viewed 1/5/2007. 16 Software design by Larry Abel, Hardware by Tat van Vark and Charly Jungbauer. 17 Edited from http://www.jeffrey-shaw.net/images/067_001.jpg viewed 1/5/2007. 18 http://homepage.mac.com/davidrokeby/vns.html viewed 1/5/2007.

The term interactive is now a common description applied to new media and electronic art practice, with such works typically combining and connecting a variety of contemporary technologies and applications including Global Positioning Systems (GPS), Geographic Information Systems (GIS), personal digital assistants (PDAs), artificial life models (AL), computer games, robotics, virtual worlds and simulated environments. I will now look at some more recent examples of interactive installation practice.

1.6.1.4 Rod Berry–Models of Artificial Life and Listening Sky

Rod Berry’s Listening Sky20 (2001) uses artificial life (AL) algorithms to create a virtual ecosystem (Berry et al. 2001; Dorin 2004). Participants can use a mouse or pen-and-tablet interface to move a spider-like listener over the virtual landscape, interacting with the artificial creatures living in the system and interpreting their DNA into sounds (Figure 1.3). The creatures’ genetic traits are reflected in their size, colour and pitch. When the virtual creatures interact and reproduce they pass on their DNA to their offspring who, as a result, share the sound and graphic qualities of their parents.

19 Rokeby from http://homepage.mac.com/davidrokeby/vns.html viewed 1/5/2007. 20 Produced in collaboration with Wasinee Rungsarityotin, Alan Dorin, Palle Dahlstedt, and Catherine Haw at ATR Media Integration and Communications Research Laboratories, Kyoto Japan. http://www.mic.atr.co.jp/~rodney/listening_sky/L_Sky_index.htm viewed 1/5/2007.


Figure 1.3 Berry’s Listening Sky interface21

1.6.1.5 Sarah Waterson and Kate Richards–Data Mapping and sub_scapeBALTIC

Sarah Waterson and Kate Richards’ interactive installation sub_scapeBALTIC (2004) invited participants to explore disparate yet connected data sets through mappings and transcodings22, 23: bathymetric measurements from the Baltic Sea measuring pollutants, fish populations and the like, and video footage of the Australian desert. A custom-made interface in the form of a periscope housed the video display and enabled participants to scan the data sets, dynamically mapping bathymetric data to control both sound synthesis and video processing parameters (Figure 1.4).

Figure 1.4 Periscope Interface from Waterson and Richards’ sub_scapeBALTIC

21 From http://www.mic.atr.co.jp/~rodney/listening_sky/L_Sky_index.htm viewed 1/5/2007. 22 http://www.subscape.net/isea.html viewed 1/5/2007. 23 Programmed by Jon Drummond.


1.6.1.6 George Khut–Bio-sensing and Cardiomorphologies

The examples presented so far have used differing sensing technologies to map participants’ physical gestures to sound and video outputs, specifically video tracking, joysticks, computer mouse movements and periscopes. George Khut’s installation Cardiomorphologies24 (2004–06) uses the participant’s own body signals as inputs to the system. Sensors placed on a participant’s body measure heartbeat and breathing rate to influence and control an immersive abstract video and sound projection. Gradual changes over time in the participant’s heartbeat are detected and measured. The results of this analysis are used to affect and change the sounds and images being rendered by the system, allowing the participant to experience and interact with their underlying body cycles via the system’s responses.
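By way of illustration, the following sketch shows one common way of turning beat timing into the kind of slowly varying control data described above. It is not Khut’s implementation; the sensor interface, smoothing constant and output range are assumptions made purely for the example.

# Illustrative sketch only: derive a slowly varying control value from
# heartbeat timing. The smoothing constant and ranges are assumptions,
# not details of Cardiomorphologies.
class HeartRateTrend:
    def __init__(self, alpha=0.05):
        self.alpha = alpha            # small alpha = slow, gradual response
        self.smoothed_bpm = None

    def on_beat(self, interval_seconds):
        bpm = 60.0 / interval_seconds
        if self.smoothed_bpm is None:
            self.smoothed_bpm = bpm
        else:
            # exponential moving average: follows gradual change,
            # suppresses beat-to-beat fluctuation
            self.smoothed_bpm += self.alpha * (bpm - self.smoothed_bpm)
        return self.smoothed_bpm

def normalise(bpm, low=50.0, high=110.0):
    # map the smoothed value into a 0..1 range for sound/image parameters
    return min(1.0, max(0.0, (bpm - low) / (high - low)))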

1.6.1.7 Garth Paine–Interacting with the Environment.

An installation can also interact with its environment, with or without human input. Garth Paine’s Reeds (2000) and MeteroSonics (2005) respond to weather data, specifically wind speed, wind direction, temperature and solar radiation, to control sound synthesis algorithms. Reeds was installed in the Ornamental Lake of the Royal Botanic Gardens, Melbourne, and consisted of a number of ‘reed pod’ sculptures floating on the lake25 (Figure 1.5 and Figure 1.6). Weather data, transmitted to a central computer from small weather stations built into some of the floating pods, was mapped to control instrument algorithms running in SuperCollider, changing the pitch, texture and intensity of the sound (Paine 2003). The synthesis results were then transmitted back to floating pods equipped with speakers, broadcasting an electroacoustic soundscape over the lake. MeteroSonics also uses data collected from weather stations, in this case in a web-based environment, allowing participants to select and layer software instruments built in JSyn26 and assign specific weather data to affect various synthesis parameters27.

24 http://www.georgekhut.com/cardiomorphologies/how_it_works.html viewed 1/5/2007. 25 http://www.activatedspace.com/Installations/Reeds/ReedsInstallation.html viewed 1/5/2007. 26 Phil Burk audio and music synthesis API for Java http://www.softsynth.com/jsyn viewed 1/5/2007. 27 http://www.meterosonics.com viewed 1/5/2007.
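A scaling layer of the kind used to map weather readings onto synthesis parameters can be sketched in a few lines. The following fragment is not Paine’s SuperCollider code; the input and output ranges are assumptions chosen purely for illustration.

# Illustrative sketch: rescale weather readings into synthesis parameter
# ranges. All ranges here are assumptions for the example.
def scale(value, in_low, in_high, out_low, out_high):
    # linearly map value from one range to another, clamped to the output range
    if in_high == in_low:
        return out_low
    t = (value - in_low) / (in_high - in_low)
    t = min(1.0, max(0.0, t))
    return out_low + t * (out_high - out_low)

wind_speed, temperature = 4.2, 18.5        # hypothetical sensor readings
pitch_hz = scale(wind_speed, 0.0, 20.0, 80.0, 800.0)   # wind speed (m/s) to pitch
amplitude = scale(temperature, 5.0, 35.0, 0.1, 0.7)    # temperature to amplitude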


Figure 1.5 Paine’s Reeds floating reed pod28 Figure 1.6 Paine’s Reeds cross section drawing29

1.6.1.8 Felix Hess–Emergent Behaviour and Moving Sound Creatures

Interactive systems can also interact with other intelligent systems. Felix Hess’ Moving Sound Creatures (1989) consists of twenty-four small robots that move and produce sound depending on the behaviour of the other robots and the audience (Figure 1.7). Hess describes their behaviour as follows— They move on wheels and have stereo hearing. If it is quiet enough and they hear each other, they try to find each other. When they all lump together they emit a different sound which means “go away” and they disperse again. It is a concert where the participants perform a dance (Arjen Mulder and Post 2000:117).

Hess was interested in the perception of the interactions as a whole, rather than as a sum of its individual parts. As Hess describes— I was not particularly interested in the exact sound the devices should produce. My main concern was whether the interaction between the devices could yield the changing rhythms, the wavelike movements and the spatial quality that I find so marvelous in frog concerts (Arjen Mulder and Post 2000:115).

28 From http://www.activatedspace.com/Installations/Reeds/ReedsInstallation.html viewed 1/5/2007. 29 From http://www.activatedspace.com/Installations/Reeds/ReedsInstallation.html viewed 1/5/2007.


Figure 1.7 Hess’ Moving Sound Creatures30

The works discussed above are just a small, representative sampling of the vast landscape of interactive installation practice and new media arts. Generating sound through interaction is also the basis for much of the contemporary practice of digital musical instrument design and musical performance with interactive systems. We shall now look at some examples of interactive instrument design and performance.

1.6.2 Performing with Interactive Systems and Designing New Instruments

Interactive musical instruments combine interfaces and sound generating functions with programmable logic. Interactive instruments are programmable, can execute compositional algorithms in software, can have memory for storing past or pre-recorded information (data and sound) and can perform real-time digital signal processing and sound synthesis. Unlike their more familiar acoustic counterparts, interactive instruments are able to respond with their own compositional material, expanding and interpreting a performer’s gestures in context, or can behave almost like another independent performer.

Working with the GROOVE31 system in the 1970s, Laurie Spiegel and Max Mathews described the systems they were developing as “intelligent instruments” (Jordà 2004:325; Spiegel 1987). For Spiegel and Mathews an intelligent instrument was a system that could respond to a performance input in a multitude of ways through the use of compositional algorithms. Joel Chadabe (1997:293) described performing with interactive music systems as “interactive composing”, where control of the music is shared between performer and computer system.

30 From Mulder, A. and Post, M. (2000). Book for the Electronic Arts. Amsterdam: De Balie. 31 GROOVE (Generated Realtime Operations on Voltage-Controlled Equipment) developed by Max Mathews and F. Richard Moore at Bell Laboratories, from 1968 to 1979.

To perform, play and/or interact with an interactive instrument requires the design of an effective interface. The interface captures or senses a performer’s gestures, mapping them to the system’s internal processing algorithms. The outputs of the system’s processing functions are likewise mapped to sound generating parameters, converted into acoustic signals by loudspeakers or mechanical interfaces connected to sound-making objects. Connecting sensor outputs to the control inputs for sound-making functions in this context is often described as mapping (Paradiso 1997). Much of the recent creative practice and research in new instrument design has focused on this concept of mapping.
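The sense–map–generate chain described above can be made concrete with a small sketch. The sensor reading, parameter names and transfer curves below are hypothetical; the point is simply to show a mapping layer sitting between sensor input and synthesis control, including a one-to-many mapping in which a single gesture value drives several synthesis parameters through different curves.

# Illustrative sketch of a mapping layer between sensor input and
# synthesis parameters. All names, ranges and curves are hypothetical.
import math

def map_gesture(distance_cm):
    # one-to-many mapping: a single proximity reading drives several
    # synthesis parameters through different transfer curves
    x = min(1.0, max(0.0, distance_cm / 100.0))        # normalise 0-100 cm to 0..1
    return {
        "amplitude": 1.0 - x,                          # closer = louder (linear)
        "cutoff_hz": 200.0 * math.pow(2.0, 5.0 * x),   # exponential curve, 200 Hz to 6.4 kHz
        "pan": 2.0 * x - 1.0,                          # spread across the stereo field
    }

# each new sensor reading is translated before being sent to the synthesiser
parameters = map_gesture(37.5)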

1.6.2.1 Remote Sensing Interfaces

The remote sensing of a performer’s gestures through non-mechanical means has been a commonly employed interface approach in both installation and performance contexts. As previously discussed, both Krueger’s Videoplace and Rokeby’s Very Nervous System used real-time video capture and analysis to create environments that responded to participants’ movements in the installation space. Other remote sensing technologies utilise infrared, ultrasonic and capacitance. Remote sensors can provide accurate and continuous (data can be reported in intervals of milliseconds) measurements of a performer’s movements, the results of which can be mapped directly or indirectly, after analysis and interpretation, to control sound synthesis, processing and spatialisation parameters. The persistence of these practices indicates that there is something about the effect of a performer’s gestures creating and sculpting sound ‘out of thin air’ that is particularly captivating and engaging for performers and audiences alike. Early examples of the application of these technologies to interactive music systems include:


1.6.2.1.1 Joel Chadabe–Solo

Chadabe (1997:292) in his work Solo (1977) used two Theremin antennas as sensors rather than instruments in their own right. Chadabe’s arm movements, detected by the Theremin antennas as changes in capacitance with changes in proximity, were used to control a synthesiser32, one antenna controlling tempo, the other timbre.

1.6.2.1.2 Rolf Gehlhaar–SOUND=SPACE

An often cited example of a remote sensing instrument is Rolf Gehlhaar’s SOUND=SPACE (1984). The system uses an array of ultrasonic sensors to convert a small room (approximately thirty-six square metres) into an interactive musical instrument (Gehlhaar 1991)33. The system detects the position and movements of multiple performers in the space and uses this information to control sound synthesisers via MIDI messages. The first version was installed at the Centre Pompidou, Paris, in 1985 and since then Gehlhaar has continued to develop the instrument, taking advantage of advances in digital sound synthesis technology. Gehlhaar composes different pieces for SOUND=SPACE by designing different interactive relationships between the input gestures as detected and analysed by the system and the outputs sent to the synthesiser. Movement in the space is not always mapped directly to synthesis parameters but can be interpreted through algorithmic processes to define “different musical topographies” or “ways of structuring the space musically”34.

1.6.2.1.3 Leonello Tarabella–Exploded Instruments

Also functioning in terms of an instrument controlled by the performer, Leonello Tarabella (2004b; 2004a) has used remote sensing as the interface to a number of his instruments including TwinTowers (1995) and Handel (2004)35. TwinTowers used infrared sensor arrays to measure the distances of a performer’s palms from the sensors, while Handel uses real-time analysis of video to detect different shapes and positions of the performer’s hands (Figure 1.8). The interfaces have been designed to be stable and responsive, evoking in “the performer the sensation of touching the sound” (Tarabella 2004a). Tarabella’s Imaginary Piano is based on the Handel system and, as the title suggests, treats the instrument as a virtual keyboard. The performer plays the instrument, as if seated at an invisible piano, with the system detecting where and how fast the performer’s hands cross the imaginary keyboard. The information generated from the interface is used for “controlling algorithmic compositions rather than for playing scored music” (Tarabella 2004a). Tarabella defines his electroacoustic instruments as exploded instruments, consisting of different elements combined together – sensors and controllers, computers and electronic sound generators, amplifiers and loudspeakers, connected via different typologies of cables and signals (Tarabella 2004a).

32 http://www.vintagesynth.com/index2.html viewed 1/5/2007. 33 http://easyweb.easynet.co.uk/nour-rolf viewed 1/5/2007. 34 From http://easyweb.easynet.co.uk/nour-rolf/ssdoc.html viewed 1/5/2007. 35 Tarabella’s instruments have been developed together with other researchers at the computerART Lab of ISTI/CNR in Pisa including Graziano Bertini and Gabriele Boschi who developed the electronics of the Twin Towers. http://tarabella.isti.cnr.it/init.html viewed 1/5/2007.

Figure 1.8 Video analysis in Tarabella’s Handel system36

1.6.2.1.4 Jon Drummond–Spiral and Sheet

I have also been enticed by the magic of invisibly sensing gesture to generate sonic structures. My first interactive work, Spiral and Sheet (1991–97), used two Theremins as sensors with their antennas consisting of two copper sculptural objects (Bandt 2001)37. Software running on a Motorola 68HC11 Microcontroller38 analysed the frequency of the Theremin signals to determine the proximity of a performer’s hands to each of the copper antennas (Figure 1.9). Second order analysis of the signals was carried out in the software to determine other aspects of the performer’s gestures including velocity of movement, duration of held or stationary positions and moments of crossing between the different antennas. This information was used to influence various algorithmic compositional processes controlling event density, stochastic pitch set selection and voice. The results were sent to an Alesis QuadraSynth39 sound module and an Ensoniq DP440 effects processor. Interacting with the instrument through physical gesture affected both spatial and timbral aspects of the sonic outcomes. The system was used in both performance and installation contexts.

36 From http://tarabella.isti.cnr.it/iperinstruments.html viewed 1/5/2007. 37 http://www.jondrummond.com.au/pastprojects.html viewed 1/5/2007. 38 http://www.hc11.demon.nl/thrsim11/68hc11/ viewed 1/5/2007.
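The kind of second-order analysis described above can be illustrated with a simplified sketch. This is not the original 68HC11 code; the sampling period and stillness threshold are assumptions, and the fragment only shows the derivation of velocity and dwell (held-position) time from successive proximity readings.

# Illustrative sketch: derive second-order gesture features (velocity,
# dwell time) from a stream of proximity readings. Sample period and
# thresholds are assumptions for the example.
class GestureAnalyser:
    def __init__(self, sample_period=0.01, still_threshold=0.5):
        self.sample_period = sample_period      # seconds between readings
        self.still_threshold = still_threshold  # movement (cm) counted as "still"
        self.previous = None
        self.dwell_time = 0.0

    def update(self, proximity_cm):
        velocity = 0.0
        if self.previous is not None:
            # first difference of proximity approximates hand velocity
            velocity = (proximity_cm - self.previous) / self.sample_period
            if abs(proximity_cm - self.previous) < self.still_threshold:
                self.dwell_time += self.sample_period   # hand held in place
            else:
                self.dwell_time = 0.0
        self.previous = proximity_cm
        return velocity, self.dwell_time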

Figure 1.9 Jon Drummond’s Spiral and Sheet41

1.6.2.2 Physical Interfaces, Designing New Instruments

Although providing a significant amount of physical freedom, remote sensing of gesture provides little or no feedback to the performer, either passive or haptic, about the state of the controller itself or the state of the system, aside from the sonic outcomes. This can present a significant restriction in the degree of control a performer has over a remote sensing interface in comparison to the feedback physical interfaces can provide (Bongers 2000). With respect to acoustic instruments, the feedback of the interface is integrally linked to the physics and the acoustics of the instrument design. Strings vibrate under fingers and against bows, reeds vibrate against lips and embouchures, even the subtle differences in touch of a piano keyboard can provide crucial tactile feedback to a concert pianist. Simple electronic interfaces can likewise provide a form of passive feedback; for example sliders, dials and touch sensitive surfaces can provide both visual and tactile feedback of the controller’s state or current settings.

39 http://www.midiworld.com/quadrasynth/qs_plus.htm#S4P viewed 1/5/2007. 40 http://www.stanford.edu/~dattorro/DP4.htm viewed 1/5/2007. 41 From http://www.jondrummond.com.au/pastprojects.html viewed 1/5/2007.

1.6.2.2.1 Michel Waisvisz–The Hands

An often cited example of a physical interface that still achieves some of the gestural freedom offered by remote sensing is Michel Waisvisz’s The Hands (1984). The instrument consists of a pair of hand-held sensor arrays that capture Waisvisz’s performance gestures with a similar degree of freedom to a remote sensing system, yet provides the performer with passive feedback of the status of the interfaces, with the performer able to both observe and feel the physical orientation and relative positions of the interfaces (Figure 1.10). The instrument incorporates a number of different sensor types including ultrasonic, tilt switches and finger-operated switches and thus supports both discrete and interrelated multiple mappings to control sound generating parameters (Bongers 2000:159). In a similar approach to the previously discussed examples of interactive instruments, the mappings of sensor output to sound generator developed for The Hands are not always direct, but instead can be interpreted and processed by the system’s internal processing algorithms, described by Waisvisz (1985) as “control signal algorithms”. These internal processing algorithms contribute to the instrument creating a sense of its own independence, responding in a non-linear and at times unpredictable manner.


Figure 1.10 Waisvisz’s The Hands in 200542

Other new instrument designs incorporating various sensor configurations embedded into physical interfaces include Laetitia Sonami’s Lady’s Glove (1991) and Donna Hewitt’s E-Mic (2003). In all these examples each artist’s own performance practice has directly informed the instrument design. These instruments are discussed in more detail in Chapter Three.

1.6.2.2.2 Hyperinstruments

By attaching sensors to acoustic instruments such as guitars, wind instruments, keyboards, percussion and strings, Tod Machover, with Joe Chung at the MIT Media Lab in 1986, started to develop what they refer to as Hyperinstruments (Machover and Chung 1989; Machover 1992, 1997). One of their main goals was to give virtuoso performers the ability to exploit their advanced playing skills, using sensors embedded in the instrument to expand and amplify their performance gestures and nuances in a manner coherent with and connected to their traditional instrumental playing technique43.

We sought to develop techniques that would allow the performer’s normal playing technique and interpretive skills to shape and control computer extensions to the instrument, thus combining the warmth and “personality” of human performance with the precision and clarity of digital technology. In fact, the whole hyperinstrument idea is an extension of my general musical philosophy: conveying complex experience in a simple and direct way44.

42 From http://www.crackle.org/TheHands.htm viewed 1/5/2007. 43 Marcelo Wanderley (2001), in the context of clarinet performance, also proposed a similar use of ancillary performance gestures as control sources.

The Hypercello45, 46 and the Hyperviolin47 were some of the first Hyperinstruments to be developed at the MIT Media Lab and as their titles suggest, they were based on string instruments. The Hypercello, created for Yo-Yo Ma in 1991, uses sensors to measure left hand fingering positions, bowing pressure, position and wrist angle and captures the audio signals from each of the strings (Paradiso 1997). Machover wrote the Hyperstring Trilogy (1990–1993) for Yo-Yo Ma and the new instrument, consisting of three pieces – Begin Again Again..., Song of Penance, and Forever and Ever. In discussing Begin Again Again... Machover describes the relationships between the instrument and the electronics, with the instrument’s embedded sensors enabling the computer to— … measure, evaluate, and respond to as many aspects of the performance as possible. This response is used in different ways at different moments of the piece: at times the cellist's playing controls electronic transformations of his own sound; at other times the interrelationship is more indirect and mysterious. The entire sound world is conceived as an extension of the soloist--not as a dichotomy, but as a new kind of instrument48.

1.6.2.3 Interactive System as Autonomous Agent

1.6.2.3.1 Musical Accompaniment: Risset’s Duet for One Pianist

Jean-Claude Risset’s Duet for One Pianist: Eight Sketches (1989) for Yamaha Disklavier, pianist and computer takes specific advantage of the Disklavier’s MIDI input and output by having both the pianist and the computer play a duet on the same piano (Rowe 1993:85). As the pianist plays each of the eight notated sketches, the computer responds, playing along with the performer in a duet according to the rules defined for each sketch, encoded algorithmically in Max. The titles of the eight sketches reveal the type of response each sketch focuses on—i: Double, ii: Mirrors, iii: Extensions, iv: Fractals, v: Stretch, vi: Resonances, vii: Up Down and viii: Metronomes.

44 From http://web.media.mit.edu/~tod/Tod/hyperstring.html viewed 1/5/2007. 45 http://web.media.mit.edu/~tod/Tod/hyper.html viewed 1/5/2007. 46 Designed by Neil Gershenfeld and Joe Chung in 1991. 47 http://web.media.mit.edu/~tod/Tod/hyperviolin.html viewed 1/5/2007. 48 From http://web.media.mit.edu/~tod/Tod/begin.html viewed 1/5/2007.
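A response rule of the kind encoded for a sketch such as Mirrors can be illustrated in a few lines. The following fragment is not Risset’s Max patch; the mirror axis and the send_note() output function are assumptions, and it simply shows an incoming MIDI note being reflected around a fixed pitch and returned to the same instrument.

# Illustrative sketch of a "mirror" response rule: each incoming note is
# reflected around a fixed axis pitch and returned as an accompanying note.
# The axis value and the send_note() callback are assumptions.
MIRROR_AXIS = 60   # MIDI note number for middle C

def mirror_response(note_number, velocity, send_note):
    # reflect the incoming pitch around the axis and echo it back
    reflected = 2 * MIRROR_AXIS - note_number       # e.g. 64 (E4) -> 56 (G#3)
    reflected = min(127, max(0, reflected))         # keep within MIDI range
    send_note(reflected, velocity)                  # the computer's part of the duet

# usage: mirror_response(64, 90, send_note=lambda n, v: print(n, v))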

1.6.2.3.2 Score Following: Cort Lippe’s Music for Clarinet and ISPW

Cort Lippe’s Music for Clarinet and ISPW (1992) uses a score-following system, in which the computer tracks the clarinettist’s progress through the score and responds accordingly. At certain pre-defined cue points in the score, the system runs various transformation processes of the clarinet signal via the ISPW49 (Rowe 1993:88). The transformation methods employed include reverberation, frequency shifting, harmonisation, noise modulation, sampling, filtering, and spatialisation. The clarinettist not only triggers the digital signal processing functions, but the processing functions themselves are controlled by the clarinet’s performance. Lippe writes—

As the score follower advances, it triggers the ‘electronic score’ which is stored as event lists. The event lists directly control the signal processing modules. In parallel, compositional algorithms also control the signal processing. These compositional algorithms are themselves controlled by the information extracted from the clarinet input. Thus, the raw clarinet signal, its envelope, continuous pitch information from the pitch detector, the direct output of the score follower, and the electronic score all contribute to control of the compositional algorithms employed in the piece (Lippe 1993).
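A score follower of the general kind described here can be reduced to a very small sketch. The fragment below is an illustration only, not Lippe’s ISPW implementation: it assumes a monophonic score stored as a list of expected MIDI pitches, matches each incoming pitch against the next expected note, and fires a cue when a marked position is reached.

# Illustrative sketch of a minimal score follower: advance through an
# expected pitch list and trigger cues at marked positions. The score,
# cue points and trigger_cue() callback are assumptions for the example.
class ScoreFollower:
    def __init__(self, score_pitches, cue_points, trigger_cue):
        self.score = score_pitches      # expected MIDI pitches, in order
        self.cues = cue_points          # {score index: cue identifier}
        self.trigger_cue = trigger_cue  # called when a cue point is reached
        self.position = 0

    def on_note(self, pitch):
        if self.position < len(self.score) and pitch == self.score[self.position]:
            self.position += 1          # performer matched the next expected note
            if self.position in self.cues:
                self.trigger_cue(self.cues[self.position])   # start processing for this section
        # unmatched notes are ignored here; real followers must tolerate
        # errors, tempo variation and ornamentation

# usage: ScoreFollower([60, 62, 64, 65], {2: "harmoniser_on"}, print).on_note(60)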

49 IRCAM Signal Processing Workstation (ISPW) hardware and software architecture based on NeXT computers and Ariel DSP boards.


1.6.2.3.3 Interactive System as an Independent Performer: Lewis’ Voyager

George Lewis’ Voyager (1986) system functions as a virtual performer, analysing Lewis’ trombone improvisation and performing its own, distinct improvisation in response (Lewis 1999, 2000). Programmed into the system is the potential for its response to be independent of and distinct from the material being performed by Lewis, thus requiring Lewis to react to the system, just as the system reacts to Lewis.

1.7 Summary

This chapter has presented a brief overview of interactive art practice. A timeline of the works discussed in the chapter can be found in Figure 1.11, with works classified as interactive sound installations on the right and interactive musical instruments on the left. Many of these examples could easily be classified as both an installation and an instrument. For example, Gehlhaar’s SOUND=SPACE (1.6.2.1.2) and my own Spiral and Sheet (1.6.2.1.4) have been presented in both public installation and performance contexts. Similarly, both Krueger’s Videoplace (1.6.1.1) and Rokeby’s Very Nervous System (1.6.1.3) could just as easily be considered as interactive instruments as installations. The relationship between these two categories remains indistinct and will be discussed in detail in Chapter Four.


Figure 1.11 Timeline of works discussed in Chapter One

The concept of a traditional acoustic instrument implies a significant degree of control, repeatability and a sense that with increasing practice time and experience one can become an expert with the instrument. Also implied is the notion that an instrument can facilitate the performance of many different compositions encompassing many different musical styles. Contrasting this, an interactive system has the potential for variation and unpredictability in its response and, depending on the context, may well be considered more in terms of a composition or structured improvisation.

From this overview the many differing interpretations of interactive systems become apparent, incorporating concepts of audience participation, sound synthesis, sonification and new interfaces for musical expression and performance. Such diversity also poses the question: what makes these works interactive? Waisvisz’s The Hands (1.6.2.2.1) and Gehlhaar’s SOUND=SPACE could be considered simply as examples of new digital musical instruments, more reactive than interactive. Lippe’s Music for Clarinet and ISPW (1.6.2.3.2) includes many highly deterministic elements, with the computer following the performer and responding accordingly from a predefined score, while Berry’s Listening Sky (1.6.1.4) is clearly a generative work, running both autonomously according to a predefined set of rules and responding to participants’ input to the system. Works that require audience participation, such as Krueger’s Videoplace, Khut’s Cardiomorphologies (1.6.1.6) and Rokeby’s Very Nervous System, are invariably described as interactive, irrespective of how direct and reactive the underlying functions of the system might be.

However, there is clearly something different about interactive systems that distinguishes them from traditional instruments and compositional practice. Furthermore, exploring an interactive music system—whether from the perspective of audience member, novice-participant or expert performer—has the unique potential to provide a rich and immersive engagement with the creative potentials encoded in the system.

To address these issues a thorough examination of interactive art practice and theory is required. Chapters Two and Three will investigate specific examples of interactive art practice in detail, beginning with the live electronic music of the 1950s and 1960s. Later, Chapter Four will examine definitions, classifications and models of interactive systems that researchers and artists have proposed.


Chapter Two: Overview of Interactive Electroacoustic Art Practice Part 1

The next two chapters provide an overview of interactive electroacoustics, discussing selected examples in detail. This chapter begins with an examination of interaction in the context of live electronic music of the 1950s and 1960s, focusing on examples by Cage, Stockhausen, Tudor, Lucier and Mumma. The use of programmable computers is discussed with reference to the work of Martirano, Chadabe and Mathews. Spiegel’s Music Mouse50 and Chadabe and Zicarelli’s M51 and Jam Factory (Zicarelli 1987) provide examples of the first publicly available commercial interactive music software. The chapter concludes with an examination of interaction in the context of computer network performance groups, starting with the League of Automatic Music Composers, following on with the Hub and finally discussing the Australian-based networked ensembles austraLYSIS electroband52 and HyperSense Complex53.

2.1 Live-Electronic Music Beginnings–1950s and 1960s

In the 1950s and 1960s the possibilities presented by advances in electronics for new ways of musical expression were accompanied by a flourishing of creative practice, as manifest in the live-electronic music of the time. This is revealed in the works of composers such as John Cage, David Tudor, Karlheinz Stockhausen and Alvin Lucier amongst others. Contributing to this creative explosion was the very fact that the electronics were taken out of the studio and used ‘live’ in performance. Many of the ideas developed in these works for live-electronics have resonances with, if not a direct influence on, contemporary explorations of interactive music systems.

50 http://retiary.org/ls/programs.html viewed 1/5/2007. 51 http://www.cycling74.com/products/M viewed 1/5/2007. 52 http://www.australysis.com/electrob.htm viewed 1/5/2007. 53 http://arrowtheory.com/hypersense/ viewed 1/5/2007.

2.1.1 John Cage

2.1.1.1 Imaginary Landscape No. 4

Cage’s Imaginary Landscape No. 4 (1951) was scored for an ensemble of twelve radios, with the actual sounds produced by each radio totally dependent on what was being broadcast at the time of the performance and where those broadcasts were located in the radio spectrum. Twenty-four performers were required to perform the work, with each radio assigned two performers, one to control the radio tuning frequency and the other to control volume and tone colour. The performers’ actions were precisely defined using traditional scored notation, which indicated the exact tuning and volume settings for each radio (Figure 2.1). Cage used the chance processes of the I-Ching to define various aspects of the piece including tempi, durations, sounds and silences, dynamics and superimpositions (Cage 1968:57-59).

Figure 2.1 Imaginary Landscape No. 4 p.1954

54 Cage, John. (1960). Imaginary Landscape No. 4. Henmar Press.


The performers are given only partial control of the composition. The traditional instrumental performance paradigm is subverted because the performers have only limited control over the sounds produced by their instrument—a radio; there is no prior knowledge of what might be broadcast at the time of the performance, or even if a station exists at a particular notated dial setting. Hisses, hums, buzzes, music of any genre, speech and even silence are just some of the possible sounds that the radio instruments can produce, subject to both the time and the geographical location of the performance. Cage’s use of randomness removes both the composer’s and the performer’s sense of control over traditional instrumental note-to-note relationships. Cage as a composer does not write the specific musical content of the work; instead the composition is defined as process, with technology, chance and the combined interrelated outcome of a network of performers determining the actual audible content. Cage writes of the composition process for the piece— It is thus possible to make a musical composition the continuity of which is free of individual taste and memory (psychology) and also of the literature and “traditions” of the art. The sounds enter the time-space centered within themselves, unimpeded by the service to any abstraction, their 360 degrees of circumference free for an infinite play of interpenetration. Value judgments are not in the nature of this work as regards either composition, performance, or listening. The idea of relation being absent, anything may happen. A “mistake” is beside the point, for once anything happens it authentically is (Cage 1968:59).

Pre-existing traditional notions of musical control are subverted on three distinct levels. Firstly, the composition itself is created from aleatoric processes derived from using the I-Ching. Secondly, the sounds produced by the radios are essentially random, subject to the specific time and place of the performance and any environmental factors affecting signal reception, such as the weather. Thirdly, control of each radio instrument is split between two performers, one for radio station tuning and the other for volume and tone control. At a specific point in the score a radio station might well be ‘tuned in’; however, the volume might equally be turned off. Similarly, just because the volume is instructed to be turned up does not guarantee that a station will be present at the instructed frequency, or if present, that the station is broadcasting audio at that specific moment in time. Thus, not only are the sounds produced by the instrument subject to random factors, but the behaviour of the instrument is made random, with control divided between two performers. In this way the work itself can be considered as a collection of aleatoric events making a whole. As we shall see later, shared control of musical instruments and processes is a concept central to networked performance ensembles such as the League of Automatic Music Composers, the Hub and austraLYSIS electroband to name a few.

2.1.1.2 Cartridge Music

Nine years later, Cage’s Cartridge Music (1960) required performers to pluck small objects inserted into phonograph pick-ups (toothpicks, pins, pipe-cleaners, matches, feathers, wires, twigs, etc.) and to play furniture (i.e. chairs) wired with contact microphones55. All sounds are amplified and controlled separately by the performers with the loudspeakers surrounding the audience. The score for Cartridge Music consists of twenty numbered sheets with irregular shapes (the number of shapes corresponding to the number of the sheet) and four transparencies, one with points, one with circles, another with a circle marked like a stopwatch and the last with a dotted curving line, with a circle at one end (Figure 2.2)56. Prior to a performance the performers create their own part by superimposing the transparencies on one of the twenty sheets.

55 The number of performers should be at least that of the cartridges and not greater than twice the number of cartridges. 56 http://www.johncage.info/workscage/cartridge.html viewed 15/11/2006.


Figure 2.2 Cartridge Music–sheet six of twenty with the four transparencies overlaid (points, circles, marked circle, dotted line)57

By assigning performers to separate aspects of the sound making process (for example amplification control separate to generation of the input excitation) and having the performers follow an indeterminate score (constructed from twenty sheets and four transparency overlays), an individual performer’s ability to directly control their aspect of the system is lost. As was the case with Imaginary Landscape No. 4, the actions of one performer may well be rendered inaudible by the actions of another performer controlling their amplifier and turning the volume down as instructed by the score for that particular performance. Cage writes of these levels of indeterminacy— I had been concerned with composition which was indeterminate of its performance; but, in this instance, performance is made, so to say, indeterminate of itself (Cage quoted in Nyman 1999:91).

In Cartridge Music the score is indeterminate, reconfigured by the players for each performance by combining and overlaying the graphic materials provided. The players use this score to control separate aspects of a connected and interrelated

57 Cage, John. (1960). Cartridge Music. Henmar Press.

system. Since the score changes with each performance, the players have no direct control over the sonic outcomes of their actions. In effect Cartridge Music defines a range of timbral constraints and a process from which the composition emerges as a result of performance.

Cage accepted into Cartridge Music any sounds, including unplanned by-products from the electronic system such as feedback and loudspeaker hum (Nyman 1999:91). Cage writes of the work— All events, ordinarily thought to be undesirable, such as feedback, humming, howling, etc., are to be accepted in this situation (Cage quoted in Revill 1992:198).

In much of the live electronic music of the 1960s, live instruments and sound sources were not just amplified, but electronically transformed at the moment of performance (Toop 2000:63). In Cartridge Music the normally quiet and virtually inaudible sounds produced by pipe cleaners and pieces of wire are amplified through the use of phonograph cartridges to an audible amplitude. These sounds would not typically be classed as distinct, clear pitches but rather as noises. This acceptance of sound itself as compositional material becomes a major consideration. It also suggests a shift away from a focus on the defined morphology of instrumental pitch and towards timbral composition.

2.1.1.3 Variations V

In Variations V58 (1965) Cage used photocells59 and antennae60 to sense individual dancers’ movements and consequently affect the sonic outcome (Nyman 1999:97). Cage wrote that his aim for the work was — to implement an environment in which the active elements interpenetrate … so that the distinction

58 The first performance was choreographed by Merce Cunningham, the sound-system designed by David Tudor, electronic percussion devices by , photo-electric devices by Billy Klüver, televised image distortions by Nam June Paik, film by Stan VanDerBeek, mixer designed by Max Mathews, tape recordings by John Cage, lighting by Beverly Emmons, shortwave receivers and their special placement arranged by Billy Klüver and Frederic Lieberman. http://www.johncage.info/workscage/variations5.html viewed 16/11/2006. 59 Photocell or photoresistor – electronic component whose resistance decreases with increasing incident light intensity. 60 The antennas were capacitance devices that responded to the proximity of the dancers to the antennas (Nyman 1999:97).


between dance and music may be somewhat less clear than usual (Cage quoted in Revill 1992:212).

Ten directional photocells were aimed at the stage lights, acting as light sensitive switches. As the dancers cut the light beams with their movements they triggered sounds. A second sensing system consisting of twelve 1.5 metre high metal antennae was also used. Sounds were triggered when the dancers entered the active field of the antennae, each with a sensing radius of just over one metre (Figure 2.3).
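Read as a control system, each sensor in Variations V behaves like a switch: a photocell fires when its light beam is interrupted, and an antenna fires when a dancer comes within its roughly one-metre sensing radius. A minimal sketch of that triggering logic, with all threshold values and readings invented for illustration:

```python
# Toy model of Variations V's two sensing systems: photocells acting as
# light-interruption switches, and proximity antennas with a ~1 m radius.
# Thresholds and readings are invented for illustration.

ANTENNA_RADIUS_M = 1.0          # sensing radius quoted in the text
LIGHT_INTERRUPT_LEVEL = 0.3     # normalised light level treated as "beam cut"

def photocell_triggers(light_levels):
    """Return indices of photocells whose beam is currently interrupted."""
    return [i for i, level in enumerate(light_levels) if level < LIGHT_INTERRUPT_LEVEL]

def antenna_triggers(dancer_distances_m):
    """Return indices of antennas with a dancer inside their active field."""
    return [i for i, d in enumerate(dancer_distances_m) if d <= ANTENNA_RADIUS_M]

# One hypothetical sensing frame: ten photocells, twelve antennas.
lights = [0.9, 0.8, 0.1, 0.85, 0.95, 0.2, 0.9, 0.9, 0.7, 0.88]
distances = [2.5, 0.8, 1.7, 3.0, 0.4, 2.2, 1.1, 0.95, 2.8, 1.6, 0.99, 2.0]

print("photocells cut:", photocell_triggers(lights))
print("antennas active:", antenna_triggers(distances))
```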

Figure 2.3 Variations V61

The controlled sound sources consisted of oscillators, short-wave radios and tape-recorders containing material composed by Cage. These sound sources were operated by a separate group of performers before being routed through the dancers’ triggering system. As was the case with Imaginary Landscape No. 4 and Cartridge Music, this use of separate independent control processes over the same material resulted in no individual performer having direct control over the system. Instead, it was the combined interactions of the performers with the system that created the perceived sonic outcome. In the case of Variations V an extremely complex relationship is created between the dancers’ movements onstage and the sound movements in the

61 Variations V (1965) performed for a television taping session in Hamburg http://www.medienkunstnetz.de/works/variations-v/ viewed 1/5/2007.

performance space (Nyman 1999:98). Further adding to the complexity of the system, contact microphones were attached to objects that could be touched or moved by the dancers (plants, pillows, a table and two chairs). Variations V provides an early example of gestural control of sound, with the entire stage floor transformed into a musical instrument, responsive to movement throughout the space.

2.1.2 Karlheinz Stockhausen–Mikrophonie I

Although coming from a contrasting aesthetic to Cage’s, one of control not randomness, Stockhausen’s Mikrophonie I (1964) results in a similar performance paradigm, in which the sonic outcome is a result of the combined interdependent interactions of the performers, with no single performer in complete control of any particular sound event (Burns 2001). Unlike the Cage examples previously discussed, Mikrophonie I is precisely scored using graphic notation (Figure 2.4). The score does, however, require the unbound pages to be ordered by the performers in accordance with instructions supplied by Stockhausen.

Figure 2.4 Mikrophonie I excerpt from score62

The graphic score directs the actions of six performers—two playing a large tam-tam63 with a variety of implements, two others operating microphones on either side of the instrument (Figure 2.5) and the final two performers operating resonant

62 Stockhausen, K. (1974). Mikrophonie I. London: Universal Edition. 63 Stockhausen specifies the use of a sixty inch Paiste tam-tam for use in Mikrophonie I.

bandpass filters applied to the microphone outputs and potentiometers to distribute the resulting sounds quadraphonically.

Figure 2.5 Mikrophonie I in performance64

The six performers share control of a single musical structure, with each individual performer’s actions dependent upon or mediated by the other performers in the system. The two percussionists share control of the same tam-tam; the microphone performers depend upon the sounds produced by the percussionists; the electronics performers (filtering and mixing) depend upon the combined actions of the percussionists and the microphone performers. No individual performer is in control, the sonic results being the outcome of the whole collaborative process. Wessel writes of these interdependent interactions in Mikrophonie I— The resulting sound world is not just the additive combination of the sounds generated by the individual players as in a traditional ensemble. Rather, it is the result of a cooperative interaction among the performers (Wessel 1991).

The processes of performance and composition are entwined. The sounds of the tam-tam are altered in real-time; pitch and texture may be changed, extraneous sounds may be turned into pitched sound, and similarities and differences between musical

64 From http://www.medienkunstnetz.de/works/mikrophonie-i/images/1/ viewed 1/5/2007.

gestures may be emphasized or de-emphasized by the electronic processes (Jordà 2005:130). This model of an interactive performance system created through shared control of a musical structure can also be found in other live electronic works by Stockhausen incorporating performance ensembles with audio feedback, audio filters and ring modulators, for example Mixtur (1964) and Mikrophonie II (1965).

2.1.3 David Tudor–Rainforest IV

In Stockhausen’s Mikrophonie I the acoustic responses of a tam-tam are processed using live electronics and distributed over a loudspeaker system. In David Tudor’s Rainforest IV (1973)65 the paradigm is reversed, with electronic signals sent to suspended physical objects, causing them to vibrate acoustically and function like loudspeakers (Figure 2.6). Rainforest IV requires a group of composer/performers to collect a number of found and made sculptural objects66. Sounds are sent as electric signals to transducers attached to the objects, thus causing the objects to resonate and sound. The resulting sounds are picked up by contact microphones or phono cartridges placed on the sculptural objects and sent to ‘real’ loudspeakers located around the space. Tudor’s circuit diagram for the work can be found in Figure 2.7. Typically a performance of Rainforest IV lasts three to six hours, with the audience able to walk around and through the installation. Describing the work for a 1979 performance Tudor and Driscoll wrote— Rainforest IV is an electro-acoustic environment … Each composer has designed and constructed a set of sculptures that function as instrumental loudspeakers under their control, and each independently produces sound material to display their sculptures’ resonant characteristics. The appreciation of Rainforest IV depends upon individual exploration, the audience is invited to move freely among the sculptures (Driscoll and Rogalsky 2004:28).

65 A number of Rainforest works preceded Rainforest IV. The first Rainforest was realised for Merce Cunningham’s dance ensemble in 1968. 66 Any number of people can participate, although a performance typically involves four to fourteen composer/performers, creating some sixteen to forty sculptural loudspeaker objects.


Figure 2.6 First public performance of Rainforest IV, Buffalo State College, New York, May 1974. Photography John Driscoll67

Figure 2.7 Generalized circuitry diagram for Rainforest IV 197368

Each performer is free to choose what sounds to send to the speakers he or she constructs. Tudor’s only restriction was not to use pre-recorded musical material (Driscoll and Rogalsky 2004:29). A wide variety of sound sources have been used, from purely electronic to amplified natural sounds. Performers learn through practical experience with the system what material will best entice their sculpture’s resonant

67 From Viola, B. (2004). David Tudor: The Delicate Art of Falling, Leonardo Music Journal, 14, p. 51. 68 From the Electronic Music Foundation (EMF) David Tudor Pages http://www.emf.org/tudor/Electronics/rfDiagram73.html 1/5/2007.

nodes to sound. In many instances the input sounds are unrecognisable when reproduced by the sculptural speaker.

The audience not only hears the acoustic sounds generated by the sculptures but also the sounds picked up by the contact microphones or phono cartridges attached to the sculptures and sent to loudspeakers at the edges of the space. Tudor described this second layer of sound as a reflection, creating— quite a harmonious and beautiful atmosphere, because wherever you move in the room, you have reminiscences of something you have heard at some other point in the space69.

Bill Viola, one of the first collaborators to perform Rainforest IV also describes these parallel and intersecting sound layers of acoustic and electronic reproduction— For me the most significant thing about Rainforest was that the sound existed both inside and outside the objects at the same time – the electrical pick-ups attached to each object revealed its internal vibrations, which were amplified and sent to loudspeakers at the periphery of the space, while the external surface of each object was audibly resonating within its own local area. The different characters of these two sounds, the inner and the outer, the material and the ephemeral, the acoustic and the electronic, made for an extremely varied and complex soundscape, which audience members caused to unfold by walking through (Viola 2004:52).

There is no score as such for Rainforest IV; instead, there are instructions describing a process, and a circuit or signal flow diagram (Figure 2.7). The composition is embedded in the system design, realised through creating the system. Much like an instrument, Rainforest IV is performed, yet it is also a composition created through the process of performance. Audience members can walk through and around the installation, creating their own unique experience of the work; with no fixed listening point, no two audience members, or performers for that matter, will hear the work in the same way (Driscoll and Rogalsky 2004:29). Likewise, given the number of

69 From I smile when the sound is singing through the space: An Interview with David Tudor by Teddy Hultberg in Dusseldorf 1988 http://www.emf.org/tudor/Articles/hultberg.html viewed 1/5/2007.

performers and the complexity of the sound mappings, no two performances will be exactly the same.

2.1.4 Alvin Lucier–Music for Solo Performer

Alvin Lucier’s Music for Solo Performer (1965) also uses electronic signals to resonate and play physical sound-making objects. In this example a solo performer’s own alpha brainwaves are used to resonate a variety of percussion instruments that are placed around the performance space. The performer’s alpha waves are detected using EEG70 scalp electrodes71. Alpha brain waves occur in the sub-audible range of eight to twelve cycles per second (Hz) (Brazier 1970). These sub-audible signals are amplified and sent to drive loudspeakers coupled to the percussion instruments, thus causing sympathetic resonances in the instruments (Lucier 1995:300). The percussion instruments Lucier has used in performances of the work include large gongs, cymbals, timpani, metal ash-cans, cardboard boxes and bass and snare drums (with small loudspeakers face down on them). The performer’s brain signals are also used to occasionally trigger pre-recorded sped-up alpha waves and to activate other electronic devices including radios, televisions, lights and alarms.
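Since alpha activity sits in the eight to twelve Hz band, the signal chain can be thought of as estimating the energy in that band and passing an amplified signal to the transducers only while alpha activity is present. A rough sketch using numpy, in which the sample rate, threshold and gain are assumptions rather than values from Lucier’s system:

```python
import numpy as np

# Rough sketch of the Music for Solo Performer signal chain: estimate the
# energy of an EEG buffer in the 8-12 Hz alpha band and, when alpha activity
# is present, pass an amplified copy of the signal to the transducers.
# Sample rate, threshold and gain are assumptions for illustration.

FS = 256                 # EEG sample rate (Hz), assumed
ALPHA_LO, ALPHA_HI = 8.0, 12.0
ALPHA_THRESHOLD = 0.1    # arbitrary energy threshold for "alpha present"
GAIN = 400.0             # arbitrary amplification factor

def alpha_band_energy(eeg_buffer):
    """Mean spectral energy of the buffer within the alpha band."""
    spectrum = np.abs(np.fft.rfft(eeg_buffer))
    freqs = np.fft.rfftfreq(len(eeg_buffer), d=1.0 / FS)
    band = (freqs >= ALPHA_LO) & (freqs <= ALPHA_HI)
    return float(np.mean(spectrum[band] ** 2))

def drive_loudspeakers(eeg_buffer):
    """Return the amplified signal to send to the percussion-coupled speakers,
    or silence if no alpha burst is detected."""
    if alpha_band_energy(eeg_buffer) > ALPHA_THRESHOLD:
        return GAIN * eeg_buffer
    return np.zeros_like(eeg_buffer)

# Simulated one-second buffer: a weak 10 Hz alpha burst plus noise.
t = np.arange(FS) / FS
eeg = 0.05 * np.sin(2 * np.pi * 10 * t) + 0.01 * np.random.randn(FS)
out = drive_loudspeakers(eeg)
print("alpha energy:", round(alpha_band_energy(eeg), 4),
      "| output peak:", round(float(np.max(np.abs(out))), 3))
```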

Alpha brain waves are produced in relaxed states such as meditation and are also produced in states of non-visual processing and lack of focus (Brazier 1970). In Music for Solo Performer the performer is required to sit in front of an audience and maintain an alpha state (Figure 2.8). If the performer is distracted, visualises or becomes sleepy, the alpha wave state is lost and the system is disrupted. It is the moments or bursts of alpha wave activity that are amplified into sonic energy, not directly as synthesis control signals, but as energy transferred through the loudspeakers to the acoustic percussion instruments. Lucier writes— It’s difficult to maintain a perfectly meditative alpha state. Those bursts of alpha that go through the amplifier and drive the loudspeakers, the complexity of the signal and the fact that it is making the cone of the

70 Electroencephalogram (EEG) is a recording of the electrical activity in the brain. http://www.nlm.nih.gov/medlineplus/ency/article/003931.htm viewed 1/5/2007. 71 Edmond Dewan designed the brain alpha wave detector used by Lucier.


loudspeaker work to resonate objects, or membranes on a drum, or the cardboard in a box, those live, physical events are the composition of the piece to me (Lucier 1995:56).

Figure 2.8 Alvin Lucier–Music for Solo Performer72

Rather than controlling the system, the performer is part of a process: trying to maintain an alpha state while the system, in turn, generates irregular rumblings and chatterings in response as they drift in and out of that state.

Lucier has explored interactions between electronics, resonators and acoustics in many of his works including73—
• Pure Waves, Bass Drum and Acoustic Pendulum (1980)
• Music on a Long Thin Wire (1977)
• Music for Snare Drum, Pure Wave Oscillator and One or More Reflective Surfaces (1990)

2.1.5 Gordon Mumma–Hornpipe

Also combining electronics with acoustic instruments, Gordon Mumma’s Hornpipe (1967) for live-electronics and solo French Horn used interactive electronic circuitry

72 From http://emfinstitute.emf.org/exhibits/luciersolo.html viewed 1/6/2007. 73 http://alucier.web.wesleyan.edu/works.html viewed 23/01/2007.

to alter and create sound based on the performer’s input. For Hornpipe, Mumma developed a modified French Horn with a special microphone (Mumma 2005). Behind the performer was a series of vertical pipes, each containing its own microphone. These were tuned to resonate at different frequencies. Behind the pipes was placed a loudspeaker. The acoustic feedback loop between the French Horn, the resonant pipes and the loudspeaker formed part of an electronic feedback system which Mumma (1967) referred to as “amplitude gated frequency translation”. Mumma described the system as follows— As the performance begins the system is balanced. Sound is produced only when something in the acoustic-electronic feedback-loop system is unbalanced. The initial sounds produced by the French Hornist unbalance parts of the system, some of which rebalance themselves and unbalance other parts of the system. The performer’s task is to balance and unbalance the right thing at the right time, in the proper sequence (Mumma 2005).

During the performance, the French Horn player chooses pitches that affect the electronic processing in different ways. Thus, the French Horn player’s performance and the resultant sound altered by the acoustics of the room create an interactive loop that in turn is further processed by the electronics. Mumma describes this concept of interactive electronic circuitry that alters and creates sound based on a performer’s input as ‘cybersonics’. Simply, “cybersonics” is a situation in which the electronic processing of sound activities is determined (or influenced) by the interactions of the sounds with themselves – that interaction itself being “collaborative.” As both a composer and a sound designer with analog-electronic technology, I have experienced no separation between the collaborative processes of composing and instrument-building (Mumma 1967).


Figure 2.9 Mumma and cybersonic horn from a live-performance of Hornpipe at the Metropolitan Museum of Art, New York City. February 19, 197274

It is of interest to note that Mumma in this context proposes a close relationship between composition and instrument building. Furthermore, Mumma considers that the process of designing the circuits of his cybersonics is in fact no different to composition. My decisions about electronic procedures, circuitry, and configurations are strongly influenced by the requirements of my profession as a music maker. This is one reason why I consider that my designing and building of circuits is really “composing.” I am simply employing electronic technology in the achievement of my art. (Mumma 1967)

As we shall see in Chapter Three, Mumma’s thoughts on composition as instrument building and circuit design as composing have clear analogies with contemporary discourse by interactive instrument designers including Jordà, Paine and Schiemer.

74 From http://brainwashed.com/mumma/photo.html viewed 1/5/2007.


2.2 Programmable Interactive Computer Music–1970s

In the late 1960s and through the 1970s composers and researchers such as Chadabe, Martirano, Mathews and Moore began to experiment with the programmable computer technology of the time in conjunction with music synthesisers. By including programmable logic in an interactive live performance system, composer/performers could explore more complex and sophisticated relationships between the input, the treatment processes and the sonic results. Simple algorithms such as pseudo-random number generators and sequencers could be controlled by input devices and the results synthesised in real-time. Different interactive compositions could be created by re-programming the system; writing software became part of the composition process. Examples of the first uses of programmable logic and computers in interactive music systems include the SalMar Construction, the CEMS System and GROOVE.
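The kind of relationship described here, an input device steering a simple algorithm such as a pseudo-random sequencer whose output is then synthesised, can be sketched briefly. The control mapping, scale and density parameters below are invented illustrations rather than a reconstruction of any particular 1970s system:

```python
import random

# Sketch of an input-steered pseudo-random sequencer: a control value
# (standing in for a knob or switch) sets the density and register of a
# generated note sequence. All mappings are invented for illustration.

SCALE = [0, 2, 4, 7, 9]  # pentatonic pitch classes

def generate_step(control, rng):
    """Map a 0.0-1.0 control value to one sequencer step.
    Higher control values favour denser, higher-register material."""
    if rng.random() > control:          # low control -> many rests
        return None
    octave = 4 + int(control * 3)       # control raises the register
    pitch_class = rng.choice(SCALE)
    return 12 * octave + pitch_class    # MIDI-style note number

rng = random.Random(1)                  # seeded, so the result is repeatable
for control in (0.2, 0.5, 0.9):         # three imagined positions of an input device
    steps = [generate_step(control, rng) for _ in range(8)]
    print(f"control={control}: {steps}")
```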

2.2.1 Salvatore Martirano–SalMar Construction

The SalMar Construction75 was built by Salvatore Martirano with James Divilbiss, Sergio Franco, Richard Borovec and Jay Barr. Work began on the instrument in 1969 and it was completed in 1972 (Figure 2.10). Chadabe considers the SalMar Construction together with his own CEMS76 System as the first examples of interactive composing instruments (Chadabe 1997:291). The SalMar Construction featured77, 78—
• Two-hundred-and-ninety-one touch-sensitive two-state switches
• Four voice analogue synthesiser modules including voltage-controlled oscillators, amplifiers and filters
• Combinatorial logic unit that could affect in real time four concurrently running programs
• A large patch bay
• Twenty-four assignable speakers

75 http://ems.music.uiuc.edu/~martiran/HTdocs/instruments.html viewed 21/3/2007. 76 Coordinated Electronic Music Studio. 77 http://emfinstitute.emf.org/exhibits/martiranosalmar.html viewed 21/3/2007. 78 http://ems.music.uiuc.edu/~martiran/HTdocs/salmar.html viewed 21/3/2007.


Figure 2.10 Salvatore Martirano at the SalMar Construction in the mid-1970s79

The two-hundred-and-ninety-one touch-sensitive switches could be patched to control the combinatorial logic unit in real-time, affecting the four concurrently running programs driving the four voices of the analogue synthesiser. The complexity of the patching made it impossible to fully predict the system’s behaviour. Martirano writes of performing with the system— It was too complex to analyze. But it was possible to predict what sound would result, and this caused me to lightly touch or slam a switch as if this had an effect. Control was an illusion. But I was in the loop. I was trading swaps with the logic. I enabled paths. Or better, I steered. It was like driving a bus (from Chadabe 1997:291).

The interface allowed the performer to switch control from macro to micro parameters. In this way the SalMar Construction facilitated the idea of zoomable control, so that the same controls could be applied at any level, from the micro-structure of individual timbres to the macro-structure of an entire composition (Jordà 2005:122).

79 From http://emfinstitute.emf.org/exhibits/martiranosalmar.html viewed 21/3/2007.


By combining both synthesis and automated logic control via a stylised human performance interface, the SalMar Construction no longer functions as an instrument, performed in the traditional sense. We can consider a performer guiding the SalMar Construction through an improvisation, interacting with the system to create complex, new sonic outcomes. The system itself changes behaviour during the performance in both known/expected and unknown/unexpected ways; the system is therefore a dynamic player in the unfolding of the performance in a manner quite different to a traditional instrument, which has a fixed morphology and is highly valued for always behaving in a known manner.

2.2.2 Joel Chadabe

2.2.2.1 CEMS System

The CEMS System80 was an automated synthesiser built for Joel Chadabe by Robert Moog and completed and installed in 1969. Unlike the SalMar Construction, which was a specific interactive instrument of fixed design, the CEMS System could be configured into different interactive compositions. The system’s eight analogue sequencers and customised logic hardware enabled automated and pseudo-random control of rhythm and timbre generated by the extended array of sound-generating and processing modules. Chadabe (1997:286) highlights that it was “the world’s largest concentration of Moog sequencers under a single roof”.

Figure 2.11 Chadabe working at the CEMS System in the Electronic Music Studio at State University of New York at Albany in 197081

80 http://emfinstitute.emf.org/exhibits/cems.html viewed 1/6/2007. 81 From http://www.chadabe.com/photos.html viewed 1/6/2007.


In a succession of works created by Chadabe for the instrument, including Drift (1970), Ideas of Movement at Bolton Landing (1971) and Echoes (1972), he explored and developed his ideas of interactive performance and automation. Describing Ideas of Movement at Bolton Landing Chadabe writes— Because I was sharing control of the music with the sequencers, I was only partially controlling the music, and the music, consequently, contained surprising as well as predictable elements. The surprising elements made me react. The predictable elements made me feel that I was exerting some control. It was like conversing with a clever friend who was never boring but always responsive. I was, in effect, conversing with a musical instrument that seemed to have its own interesting personality (Chadabe 1997:287).

2.2.2.2 Solo

Building on the experiences with his CEMS System, Chadabe continued to develop interactive compositions. Extending the approach applied by Cage in Variations V, Solo (1977) used two Theremin82 antennas, one to control tempo and the other to control timbre of several simultaneous but independent sequences on a Synclavier83 synthesiser. The work depended on physical gestures unlike those of previous instrumental techniques. Chadabe notes that— The gestures of moving my arms in the air to control tempo and cue instruments in or out reinforced the performing metaphor of conducting an orchestra. It was, in fact, an “improvising” orchestra … I would be reacting to what I heard in deciding how to perform … the next event (Chadabe 1997:292).
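Chadabe’s mapping, one antenna governing tempo and the other selecting timbre across several independent sequences, amounts to two continuous controls fanned out over shared parameters. A compressed sketch, with all ranges and timbre names invented for illustration:

```python
# Sketch of Solo's control mapping: two proximity values (0.0-1.0) stand in
# for the Theremin antennas; one sets a global tempo, the other selects a
# timbre for several independently running sequences. Values are invented.

TIMBRES = ["flute-like", "reedy", "bell-like", "brassy"]

def map_controls(tempo_antenna, timbre_antenna, n_sequences=4):
    bpm = 40 + tempo_antenna * 160                     # 40-200 BPM
    timbre = TIMBRES[min(int(timbre_antenna * len(TIMBRES)), len(TIMBRES) - 1)]
    # Every sequence keeps its own material but shares the conducted tempo and timbre.
    return [{"sequence": i, "bpm": round(bpm), "timbre": timbre} for i in range(n_sequences)]

for state in map_controls(tempo_antenna=0.25, timbre_antenna=0.8):
    print(state)
```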

82 The Theremins were used as proximity sensors not as sound sources. 83 http://www.synclavier.com viewed 6/11/2006.


Figure 2.12 Chadabe performing Solo at New Music New York in 197984

Many of the ideas explored in Chadabe’s interactive works from the 1970s still resonate strongly with contemporary practice. For example—
• Compositional processes under the shared control of both the human performer and the computer system
• A sense of conversing with the instrument
• Systems that respond in both predictable and surprising ways
• Systems that influence the performer as much as the performer influences the system
• The use of physical gestures to control and generate sound

2.2.3 Max Mathews–GROOVE and the Conductor Programme

The conducting metaphor employed by Chadabe in Solo was also an underpinning model for Mathews’ Conductor software created for the GROOVE85 system (1968–1979). The GROOVE system was built around a Honeywell DDP224 computer connected to an analogue synthesiser via fourteen independent control lines (Spiegel 1998). The system could be controlled by a variety of input devices including knobs, pushbuttons, a small organ keyboard, a 3D joystick, an alphanumeric keyboard, card

84 From http://www.chadabe.com/photos.html viewed 1/6/2007. 85 GROOVE (Generated Realtime Operations on Voltage-Controlled Equipment) developed by Max Mathews and F. Richard Moore at Bell Labs, from 1968 to 1979.

reader and several console and toggle switches. Mathews and Moore described GROOVE as— a hybrid system that interposes a digital computer between a human composer-performer and an analog sound synthesizer (from Chadabe 1997:158).

For Spiegel (1998) the GROOVE system enabled all parameters of composition to be considered as continuous functions of time and associated freely— GROOVE … viewed everything as (perceptually) continuous functions of time … All data was stored as series of numbers that had no specific association with any parameter of sound or of musical composition except what a user program might give it by connecting these numbers to a relay or DAC (digital to analogue convertor) … The importance of being able to approach all parameters of sound, of composition, or of performance as perceptually continuous functions of time cannot be over stressed during this current period when music seems everywhere to be digitally described as entities called “notes”, and in which there are generally conceived to be differing necessary rates of change for different musical parameters (Spiegel 1998).
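Spiegel’s point, that GROOVE stored control data simply as numbers sampled in time which acquired musical meaning only when a user program wired them to a relay or DAC, can be illustrated with a toy ‘control line’ model. The function names and routings below are hypothetical:

```python
# Toy model of GROOVE's idea of control as continuous functions of time:
# stored series of numbers have no intrinsic musical meaning until a user
# program routes them to an output line (standing in for a relay or DAC).
# All names and routings here are invented for illustration.

def ramp(t):          return t % 1.0                 # sawtooth, one cycle per second
def slow_wobble(t):   return 0.5 + 0.5 * ((t / 4) % 1.0)

# A user program decides what each stored function *means*:
ROUTING = {
    "dac_pitch_cv":  ramp,          # routed to oscillator pitch
    "dac_filter_cv": slow_wobble,   # routed to filter cutoff
}

def sample_control_lines(t):
    """Sample every routed control function at time t (seconds)."""
    return {line: round(fn(t), 3) for line, fn in ROUTING.items()}

for t in (0.0, 0.5, 1.25, 3.0):
    print(t, sample_control_lines(t))
```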

The GROOVE system was used by a number of composers including Emmanuel Ghent and Laurie Spiegel. In describing his experiences of working with the system Ghent wrote— We had the sense that we had hired the computer as a musical assistant and it was producing something that we never would have dreamed of (from Chadabe 1997:162).

As a result of working with the system Spiegel and Mathews developed their concept of ‘intelligent instruments’—a system or instrument that is able to respond to performance input in a multitude of ways through the use of compositional algorithms (Spiegel 1987). The responses of such instruments are not always predictable and thus cause new responses from the performer as a result of interacting with the system.


In reaction to the fixed and inflexible relationship between performer and tape in compositions of the time, and inspired by a request from Boulez “to be able to conduct the tape” (Chadabe 1997:230), Mathews developed the Conductor Programme (1976) for the GROOVE system. The programme enabled a performer to play back and control different aspects of pre-stored music sequence data. Different interfaces were used to ‘conduct’ the programme including a joystick, keyboard entries and drum hits sensed with strain gauges. The first part of the process required the notes (score) to be stored in memory. In rehearsal, the phrasing, accents and other articulation were added. The final part of the process was the performance of the stored data via the conducting interface. The intention was not simply to enable the performer to play the notes but rather to perform the music, controlling other compositional aspects such as tempo and articulations.
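The three-stage process can be reduced to its core idea: a stored score is performed later by scaling its timing (and, in the full system, its phrasing and articulation) from live conducting input. A minimal sketch of the tempo aspect only, with an invented stored phrase:

```python
# Sketch of the Conductor Programme idea: a score is stored in advance and
# performed later by scaling its timing from a live 'conducted' tempo.
# The stored phrase and tempo values are invented for illustration.

STORED_SCORE = [("C4", 0.0), ("E4", 0.5), ("G4", 1.0), ("C5", 1.5)]  # (note, beat)

def perform(score, conducted_bpm):
    """Turn stored beats into real playback times at the conducted tempo."""
    seconds_per_beat = 60.0 / conducted_bpm
    return [(note, round(beat * seconds_per_beat, 3)) for note, beat in score]

print(perform(STORED_SCORE, conducted_bpm=90))   # slower conducting
print(perform(STORED_SCORE, conducted_bpm=140))  # faster conducting
```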

2.3 First Commercial Interactive Music Software–1980s

In the mid 1980s commercial interactive music software applications started to be developed. Such applications were made possible due to the availability of relatively affordable desktop computers, taking advantage of the newly established technologies of the time such as graphic user interfaces, mouse input and MIDI. Laurie Spiegel’s Music Mouse (1985) and Joel Chadabe and David Zicarelli’s M and Jam Factory (1986) were some of the first to become available.

2.3.1 Music Mouse

Laurie Spiegel’s Music Mouse (1986) generated musical output controlled by the co-ordinates returned by the computer’s mouse interface. The performer moved the computer mouse in an onscreen grid representing vertical and horizontal keyboards (Figure 2.13). The nature of the musical output for a specific screen location/co-ordinate was determined by typing key commands on the QWERTY keyboard, controlling parameters such as harmonic structure (chromatic, diatonic, pentatonic, Middle Eastern, octatonic, or quartal harmonies), dynamics, articulation, tempo, transposition and pitch quantisation. The software was first released for the


Macintosh86 computer and subsequently for the Commodore Amiga87 and the Atari ST88.

Figure 2.13 Music Mouse interface showing the mouse co-ordinates89

Music Mouse is an example of Spiegel’s concept of an intelligent instrument. In performing with the system the underlying compositional algorithms respond to the performer’s mouse gestures in multiple and not always predictable ways. Consequently the system can influence the performer as much as the performer can influence and control the system. However, the variability of the system’s response results from the complexity of the interrelationships designed into the system, rather than from internal generative or interactive software models. As such the system can be considered more reactive than interactive.
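The core mapping, mouse coordinates read as positions on two onscreen keyboards and quantised to a harmony chosen with key commands, is straightforward to sketch. The grid size and scale tables below are assumptions, not Spiegel’s actual tables:

```python
# Sketch of a Music Mouse-style mapping: mouse x/y positions are quantised
# to pitches from a currently selected harmonic set. Grid size and scale
# contents are invented for illustration.

GRID = 32  # imagined number of grid columns/rows on screen
SCALES = {
    "chromatic":  list(range(12)),
    "diatonic":   [0, 2, 4, 5, 7, 9, 11],
    "pentatonic": [0, 2, 4, 7, 9],
}

def grid_to_pitch(position, scale_name, base_note=36):
    """Map a 0..GRID-1 grid position to a MIDI-style note in the chosen scale."""
    scale = SCALES[scale_name]
    octave, degree = divmod(position, len(scale))
    return base_note + 12 * octave + scale[degree]

def mouse_to_notes(x, y, scale_name):
    """One 'voice' follows the horizontal axis, another the vertical axis."""
    return grid_to_pitch(x, scale_name), grid_to_pitch(y, scale_name)

print(mouse_to_notes(5, 17, "pentatonic"))
print(mouse_to_notes(5, 17, "diatonic"))   # same gesture, different harmony
```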

86 Music Mouse was originally written for the Mac 512k in 1986 http://www.apple-history.com/ viewed 1/5/2007. 87 Commodore Amiga was originally developed by Amiga Corporation and subsequently released by Commodore International in 1985. It was based on the Motorola 68k series of 32-bit microprocessors. http://www.amigau.com/aig/comamiga.html viewed 1/5/2007. 88 Atari ST personal computer was based on the Motorola 68000 CPU and released in 1985 http://www.atarimuseum.com/computers/16bits/stmenu/atarist.htm viewed 1/5/2007. 89 From http://retiary.org/ls/programs.html viewed 1/5/2007.


2.3.2 M and Jam Factory

In 1986 Chadabe, together with David Zicarelli90, founded the company Intelligent Music and released the programs M91 and Jam Factory (Zicarelli 1987). Essentially, both programmes transform MIDI data using algorithmic techniques. Jam Factory ‘improvised’ on input MIDI data. Phrases played into the software were used to calculate transition tables for pitch and rhythmic values (first to fourth-order Markov chains) (Jordà 2005:72). The transition tables then became the basis for the software’s improvised responses to the performance input. Graphic sliders allowed the performer to modify the way the data was being generated.
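The mechanism described here, building transition tables from played-in phrases and then ‘improvising’ from them, is the classic Markov-chain approach. A first-order sketch over pitch alone follows (Jam Factory itself used first- to fourth-order tables and also handled rhythmic values); the input phrase is invented:

```python
import random
from collections import defaultdict

# First-order Markov sketch of Jam Factory's approach: a played-in phrase
# builds a pitch transition table, which is then sampled to 'improvise'.
# The phrase below is invented for illustration.

def build_transition_table(phrase):
    table = defaultdict(list)
    for current, following in zip(phrase, phrase[1:]):
        table[current].append(following)
    return table

def improvise(table, start, length, rng):
    out = [start]
    for _ in range(length - 1):
        choices = table.get(out[-1])
        if not choices:                  # dead end: restart from the seed note
            choices = [start]
        out.append(rng.choice(choices))
    return out

phrase = [60, 62, 64, 62, 60, 64, 67, 64, 62, 60]   # MIDI notes played in by the performer
table = build_transition_table(phrase)
print(improvise(table, start=60, length=12, rng=random.Random(3)))
```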

M worked in a similar way but with more sophisticated control. Higher-level material is transformed by the system, including rhythmic, melodic, accent and articulation patterns, chords and intensity ranges. After defining the transformation algorithms the performer interacts with the system by modifying on-screen controls (Figure 2.14), by MIDI keyboard controller input or by conducting with the mouse on a multidirectional grid, reminiscent of Music Mouse (Zicarelli 1987).

Figure 2.14 M’s main screen92

90 Zicarelli was one of the main developers of the Max programming environment and later director of the music software company Cycling74 (http://www.cycling74.com viewed 1/5/2007), which has been developing Max/MSP and other related software since 1997. 91 http://www.cycling74.com/products/M viewed 1/5/2007. 92 From Chadabe, J. (1991). About M, Contemporary Music Review, 6 (1), p. 144.


2.4 Networked Ensembles

Transportable computing hardware made it significantly easier for computer-based interactive performance to take place outside the studio. It also made ensemble performance with real-time computer music systems possible and opened the potential for networking the performers’ computers. The League of Automatic Music Composers and the Hub are probably the two most cited examples of networked computer ensembles. Their work is discussed next, followed by two more recent interpretations of the model, the Australian ensembles austraLYSIS electroband and HyperSense Complex.

2.4.1 The League of Automatic Music Composers

The League of Automatic Music Composers (1978–1983) is generally acknowledged as the first realisation of a networked, microcomputer band. The core members of the League were Jim Horton, John Bischoff and Rich Gold. Tim Perkis, , and Donald Day were later incorporated into the collective.

Figure 2.15 The League of Automatic Music Composers (Perkis, Horton, and Bischoff, left to right) performing at Ft. Mason, San Francisco 198193

The League developed and explored ideas of interactivity using a network of microcomputers (Chadabe 1997:295). Each member of the group performed with

93 Photo: Peter Abramowitsch. From http://crossfade.walkerart.org/ viewed 12/1/2006.

their own KIM-194 microcomputer and could generate sound either directly or indirectly. They explored a number of different configurations but typically each computer would run a program that was able to—
• Produce music by itself
• Receive data from the network
• Apply that data to affect its musical behaviour
• Output data that would affect other performers’ computer programs in the network
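Each station’s behaviour, as listed above, can be summarised as a loop: make music locally, poll the port for incoming data, fold that data into local decisions, and send something back out for the other players. A highly simplified two-node sketch, with the data format and mappings invented:

```python
import random

# Simplified sketch of a League-style network node: it generates its own
# material, polls for incoming data, lets that data nudge its behaviour,
# and emits data for the other players. Data format and mappings invented.

class Node:
    def __init__(self, name, seed):
        self.name = name
        self.rng = random.Random(seed)
        self.register = 60            # local musical state nudged by the network

    def poll_network(self, inbox):
        """Casually retrieve whatever data is waiting (cf. polling a parallel port)."""
        return inbox.pop(0) if inbox else None

    def step(self, inbox, outbox):
        incoming = self.poll_network(inbox)
        if incoming is not None:
            self.register = (self.register + incoming) % 48 + 48   # network affects behaviour
        note = self.register + self.rng.choice([0, 3, 5, 7])       # produce music by itself
        outbox.append(note % 12)                                   # output data for others
        return note

a_to_b, b_to_a = [], []
a, b = Node("A", seed=1), Node("B", seed=2)
for _ in range(4):
    print("A plays", a.step(b_to_a, a_to_b), "| B plays", b.step(a_to_b, b_to_a))
```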

Bischoff writes of the network connections— Typically, connections were made via the 8-bit parallel ports available on the KIM’s edge connectors. In such a case, the program on the receiving end would either periodically check the port for new data or more casually retrieve whatever data was there when it looked. At other times we connected via the KIM’s interrupt lines which enabled an instantaneous response as one player could “interrupt” another player and send a burst of musical data which could be implemented by the receiving program immediately (Brown and Bischoff 2002).

With respect to performance models, the League members were thinking conceptually, not in terms of individual compositions, but rather of whole concerts of music conceived as public occasions for shared listening (Brown and Bischoff 2002). For each new work the ensemble developed a new set of network configurations and programmes. Defining the network architectures and programming structures was an integral part of the composition process.

The flyer for the 1978 League performance at the Blind Lemon in Berkeley (Figure 2.16), one of their first performances, provides a sketch of the connections and signal flows between each of the performers’ computers. Figure 2.16 illustrates an interconnected network in which each performer has at least one connection to every

94 http://www.6502.org/oldmicro/buildkim/kim.htm viewed 12/1/2006.

other performer, exchanging audio signals and digital data; however, not all connections are bi-directional.

Figure 2.16 Flyer from the League of Automatic Music Composers’ Blind Lemon concert November 26, 197895
Figure 2.17 Flyer from the League of Automatic Music Composers’ concert March 28, 198096

By way of contrast, the programme notes for a 1980 League event, a benefit concert for Ear Magazine, reveal a different approach (Figure 2.17). This network sketch displays a more linear approach to the connections, with each performer able to send data to the next. The programme note for the performance reveals that no individual performer had complete control of the system— The musical system can be thought of as three stations each playing its own ‘sub’-composition which receives and generates information relevant to the real-time improvisation. No one station has an overall score (Brown and Bischoff 2002).

The League’s music was not simply the combined outcome of an ensemble of improvisers performing with networked microcomputers, but rather the result of working with a networked system in which their compositional ideas were encoded

95 From http://crossfade.walkerart.org/brownbischoff/league_texts/blind_lemon_concert_f.html viewed 28/11/2006. 96 From http://crossfade.walkerart.org/brownbischoff/league_texts/ear_mag_benefit_f.html viewed 28/11/2006.

in the software and the network structure. As Bischoff identifies, the outcome was very much more than the sum of its separate elements— What we noticed from the beginning was that when the computers were connected together it sounded very different from pieces just being played simultaneously. If you imagine four pieces of music together at the same time, then coincidental things will happen, and just by listening you make some musical connections. But by actually connecting the computers together, and having them share information, there seems to be an added dimension. All of a sudden the music seems not only to unify, but it seems to direct itself. It becomes independent, almost, even from us (Brown and Bischoff 2002).

The League’s performance-based approach using an interconnected ensemble of microcomputers presented a contrasting aesthetic to the prevailing ‘academic’ computer-generated tape-music of the time. Tape-music was typically exploring complex timbral transformations, generated much slower than real-time (Brown and Bischoff 2002). In contrast to this approach, the League was a real-time performance ensemble, engaging with and utilising the surprise moments generated through performance and improvisation along with the evolving structures created as a result of this process.

It is also interesting to note that, by creating a performance ensemble of networked microcomputers, the League removed the gestural performance physicality that is invariably associated with (and to an extent defines) performance practice for traditional acoustic musical instruments and which is also often sought in interactive sensor-based systems. This lack of obvious physical performance gesture and any direct association with the sonic outcome continues to be a subject of debate with respect to contemporary laptop-based music performance (Cascone 2003).


The League has influenced many artists both directly and indirectly. For George Lewis, the League sparked an interest in interactive performance systems, leading him to develop his Voyager97 software— It sounded like a band of improvising musicians. You could hear the communication between the machines as they would start, stop, and change musical direction. Each program had its own way of playing. I hadn’t heard much computer music at the time, but every piece I had heard was either for tape or for tape and people, and of course none of them sounded anything like this. I felt like playing, too, to see whether I could understand what these machines were saying (Lewis interviewed in Roads 1985:79).

Chris Brown, later to become a member of the Hub, was also similarly inspired after listening to the League— It was electronic, but it had this feeling of improvised music, that everyone was talking at the same time and everyone was listening at the same time (Brown quoted in Chadabe 1997:296-297).

2.4.2 The Hub

The Hub (1987–present) can, in many ways, be considered an extension of the networked ensemble performance paradigms developed by the League. Formed by previous League members Tim Perkis and John Bischoff, together with Scot Gresham-Lancaster, Phil Stone, and Mark Trayle, it addressed the complexity created by the League’s intertwined configuration of networked computers by creating a central processing network to route data between the performers’ individual computers. This central hub provided the inspiration for the name of the new ensemble. Perkis writes of the League’s complex set-up— Every time we rehearsed, a complicated set of ad-hoc connections between computers had to be made. This made for a system with rich and varied behavior, but it was prone to failure, and bringing in other players was difficult. Later we sought a way to open the process up, to make it easier for other musicians to play in the

97 Voyager is discussed in Chapter 3.


network situation. The goal was to create a new way for people to make music together (Brown and Bischoff 2002).

The first Hub central networks (1987–1990) were constructed as explorations by Bischoff and Perkis using the KIM-198 microprocessors, as per the League’s setup. For their 1987 New York performance the Hub moved to two separate SYM-199 microprocessors, enabling networked connections not only between the performers locally but also between two remote locations (Experimental Media and The Clocktower). In 1990 the Opcode Studio 5 MIDI interface became the hub of the ensemble. Using this MIDI Hub each performer was assigned a unique port number on the MIDI interface. Since 1997 a variety of network technologies have been used by the Hub members in different collaborations. These technologies have included Max/MSP network messaging, OSC, JSyn100, TransJam101, and SuperCollider.

With each iteration of the Hub’s network hardware, the ensemble has explored different approaches to interactive networked performance. The early microprocessor hub versions not only operated as a data exchanger, but also as a space where all the performers’ data could be individually stored and accessed as required, functioning as a type of common memory (Brown and Bischoff 2002). In their composition titled Simple Degradation (1987) waveforms were broadcast to the network by one performer and used as amplitude modulation control signals by the other performers. Shared access to melodic phrases stored in the central memory was the focus of Borrowing and Stealing (1987), where any player could store the melodic material they were creating during the performance and, in turn, this material could be retrieved and incorporated into other players’ performances.
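The ‘common memory’ role of the early Hub hardware, and the phrase-sharing at the centre of Borrowing and Stealing, can be modelled as a small shared store that any player can write to and any other player can later read from. A minimal sketch with invented slot names and phrases:

```python
# Minimal sketch of a Hub-like common memory: any player can store a phrase
# under a slot name, and any other player can later retrieve and reuse it.
# Slot names and phrases are invented for illustration.

class HubMemory:
    def __init__(self):
        self._slots = {}

    def store(self, player, slot, phrase):
        self._slots[slot] = {"from": player, "phrase": list(phrase)}

    def borrow(self, slot):
        entry = self._slots.get(slot)
        return list(entry["phrase"]) if entry else []

hub = HubMemory()
hub.store("player1", "melody_a", [62, 64, 66, 69])   # one player shares a phrase
stolen = hub.borrow("melody_a")                       # another player retrieves it
variation = [n + 5 for n in stolen]                   # ...and transposes it into their own part
print(stolen, "->", variation)
```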

The MIDI Hub gave all performers access to every other performer’s MIDI data streams. Messages were sent in real-time and not stored in the hub itself. The amount

98 The same microcomputer used by the League. 99 http://oldcomputers.net/sym-1.html viewed 1/12/2006. 100 Audio and music synthesis API for Java http://www.softsynth.com/jsyn viewed 1/12/2006. 101 TransJam is a collaboration server for music jams, chat and games http://www.transjam.com/info viewed 1/12/2006.

and rate of data exchange was much higher than previously possible with the microcomputer-based hub. The MIDI messages being sent were mapped freely by the performers, as Bischoff writes— playing of “notes” really did not imply what it usually does in terms of pitches and tunings. The actual “notes” could be any mapping of MIDI note numbers to sounds of any kind. (Brown and Bischoff 2002).

Beyond the use of networked performance both in local and remote locations, the Hub was primarily interested in exploring the potentials of interactive computer-based performance in which the outcomes were more than the sum of its parts and in which the interactive instrument presented them with new directions to explore musically. Perkis writes— In a conversation, one says things, not knowing what the next person will say, and therefore, not knowing what oneself will say next either. In the Hub we want to surprise and be surprised by each other, and, in playing together, to also be surprised by our own odd computer network instrument102.

Since the late 1990s many musicians and artists have been engaging with interactive networks, for example the Princeton Laptop Orchestra (PLOrk)103, Farmers Manual104, austraLYSIS electroband (Dean 2003), Simulus105 and HyperSense Complex (Riddell 2005), to name a few. There has also been significant development in software supporting the creation of networked music, such as OSC (Wright et al. 2003), JSyn and TransJam.

2.4.3 austraLYSIS electroband

Taking inspiration from the Hub, the austraLYSIS electroband was formed in Australia in 1994106 and is an ensemble with a primary focus on improvisation and performance in computer networked environments and their intersections with composition. The ensemble is directed by Roger Dean; the other core members are Sandy Evans, Greg White and Hazel Smith107. In a similar fashion to the Hub, the austraLYSIS electroband’s networked interactions are made possible through interconnected MIDI messaging and patching in Max/MSP. However, in contrast to the Hub’s interest in creating new states through networked systems, the austraLYSIS electroband is primarily concerned with exploring improvisatory structures and the possibilities offered by interactions between instrumental performance, compositional structures and networked systems. Dean describes their focus as— the use of multiple composition modules by improvisers, including the idea of real-time high and low-level interaction with their parameters (Dean 2003:88).

102 From The HUB, an article written for Electronic Musician Magazine, 1999 http://www.perkis.com/wpc/w_hubem.html viewed 1/11/2006. 103 The Princeton Laptop Orchestra http://plork.cs.princeton.edu viewed 1/11/2006. 104 http://www.mego.at/farmers.html viewed 1/11/2006. 105 http://www.simulus.org viewed 1/11/2006. 106 austraLYSIS was an earlier incarnation of the ensemble formed in 1989.

The austraLYSIS electroband incorporates acoustic instruments together with synthesisers, samplers and computer music software applications such as Max/MSP.108

For Dean (2003:93-94), the austraLYSIS electroband provides a way of addressing the musical limitations of conventional and unconventional jazz he had previously encountered as a performer. The austraLYSIS electroband explores the possibilities inherent in computer networking of all aspects of sound, image and text performance. Each member of the ensemble has the option to directly and precisely influence the progression of the other players during their improvisation (Dean 2003:94). For example, a performance might involve parameters of an individual’s patch being modified by another member of the group. In this way both hierarchical and more egalitarian relationships are explored by the ensemble in different performance contexts.

107 Other ensemble members have included Peter Jenkin, Stephanie McCallum, Daryl Pratt, Phil Slater, Ian Shanahan, Neil Simpson and David Worrall. 108 For example, performances have included Sandy Evans on saxophones and wind controller, Phil Slater on trumpets.


Another aspect of the austraLYSIS electroband’s interest in interactive network performance is the potential to influence and define future sonic events and thus explore the creative possibilities presented through the real-time control of musical structure (Dean 2003:95). Through software programming, events can be defined to occur at fixed times in the future. For example, new parameters can be made available in a particular Max patch, new musical material may be introduced, or new connections provided (or removed) to affect another performer. The austraLYSIS electroband uses the control of future events in an improvising context to introduce new directions in which the precise nature and sonic qualities of the outcome are only developed once the new direction has started. Dean describes his interest in the interactive control of musical structure as— the possibility of designing complex structural devices that control an overall comprovisation while being able to redesign them during performance - control of structure (Dean 2003:95).
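The underlying idea of committing to a structural event now so that it takes effect later can be sketched with a simple scheduler. This is an assumed Python illustration, not Dean’s Max patches; the function names and delay times are invented.

# Minimal sketch of scheduling structural events at fixed points in the future
# during an improvisation. Names and timings are illustrative assumptions.
import sched, time

scheduler = sched.scheduler(time.time, time.sleep)

def enable_parameter(name):
    print(f"parameter '{name}' now available to the improvisers")

def introduce_material(label):
    print(f"new material '{label}' introduced")

# Events are committed now but only take effect later, so their sonic
# consequences unfold after the new direction has already begun.
scheduler.enter(10, 1, enable_parameter, argument=("delay_subdivision",))
scheduler.enter(30, 1, introduce_material, argument=("bass_ostinato",))
scheduler.run()   # blocks until all scheduled events have fired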

Stylistically the austraLYSIS electroband encompasses both traditional western art music models of acoustic instrument performance in the landscape of notes, rhythmic patterns, groove patterns and microtonal systems, and the more textural electroacoustic techniques of sound transformation, timbre, spectral transformations and noise. However, the majority of the underlying algorithmic processes used by the austraLYSIS electroband operate in the context of pitch transformations, rhythmic transformations and rhythm generation. Furthermore, the outputs of these processes are typically sent to MIDI synthesisers and samplers.

An example of a rhythm generating Max/MSP patch developed by the austraLYSIS electroband is shown in Figure 2.18: a fairly straightforward pulse generating system in which the main pulse is sent to two separate voice layers, a rhythm section and a piano voice. The rhythm section layer subdivides the beat into two equal parts via a delay object and assigns a random percussion sound and a random key velocity to each note. The piano voice supports a more complex beat subdivision by supplying separate delays for each of the seven notes assigned to the main eight-beat loop.


Figure 2.18 austraLYSIS electroband ostinato Max/MSP patch109
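The pulse and subdivision logic of the patch can be approximated in a few lines of Python. This is an illustrative sketch only: the original is a Max/MSP patcher, and the specific pitches, delays and percussion sounds used here are assumed.

# Sketch of the pulse/subdivision logic described above. Values are illustrative.
import random

PULSE = 0.5                      # main pulse in seconds (adjustable in performance)
BEATS_PER_LOOP = 8
PERCUSSION_NOTES = [36, 38, 42, 46, 49]          # assumed percussion note numbers
PIANO_LOOP = [60, 63, 65, 67, 70, 72, 75]        # seven pitches over the 8-beat loop
PIANO_DELAYS = [0.0, 0.25, 0.5, 0.1, 0.4, 0.3, 0.2]   # per-note beat offsets

def one_loop(start=0.0):
    events = []
    for beat in range(BEATS_PER_LOOP):
        t = start + beat * PULSE
        # rhythm-section layer: beat split into two equal parts,
        # random percussion sound and random velocity for each note
        for sub in (0.0, PULSE / 2):
            events.append((t + sub, "perc",
                           random.choice(PERCUSSION_NOTES),
                           random.randint(40, 127)))
        # piano layer: each of the seven notes has its own delay (subdivision)
        if beat < len(PIANO_LOOP):
            events.append((t + PIANO_DELAYS[beat] * PULSE, "piano",
                           PIANO_LOOP[beat], 90))
    return sorted(events)

for time_, layer, note, vel in one_loop():
    print(f"{time_:5.2f}s  {layer:5}  note {note}  vel {vel}")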

In performance the patch is used with saxophone and acoustic piano, both instruments improvising in sync or “in the groove” (Dean 2003:117). By altering parameters of the patch such as the main pulse speed, the piano pitches and the delay times (beat subdivisions), the nature of the patch can be transformed during a performance. Not only can the rhythmic pattern be sped up or slowed down, but by modifying delay times, note values and key velocities the pattern can be completely and progressively transformed over time. Dean considers this an example of a process in which— the comprovisational structure can be made a flexible one, both at the micro level (pitches, for example) and at the macro level (for example, whether or not a groove is apparent) (Dean 2003:118).

109 Screen capture of austraLYSIS patch supplied on CD-Rom. Dean, R. T. (2003). Hyperimprovisation: Computer-Interactive Sound Improvisations. Middleton, Wis: A-R Editions.

Dean has further developed these concepts of interactive rhythm creation in his performances as Dr Metagroove. Here the focus is on the real-time generation of drum and bass patterns. The model generates the drum patterns, bass line and other style-typical layers, with parameters controllable in real-time and mappable to other algorithmic processes. Markov chains are applied to create melodic lines, stochastic processes are used to affect bass lines, while probability models are used to insert intermittent keyboard harmonies and noise bursts (Dean 2003:77).
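A first-order Markov chain of the kind Dean describes for melodic generation can be sketched as follows. The transition table and pitches are invented for illustration; Dean’s actual probabilities are not documented here.

# Illustrative first-order Markov chain melody generator.
import random

TRANSITIONS = {            # current pitch -> possible next pitches with weights
    62: [(65, 0.5), (67, 0.3), (62, 0.2)],
    65: [(62, 0.4), (67, 0.4), (70, 0.2)],
    67: [(65, 0.5), (70, 0.3), (62, 0.2)],
    70: [(67, 0.7), (65, 0.3)],
}

def markov_melody(start=62, length=16):
    pitch, line = start, [start]
    for _ in range(length - 1):
        choices, weights = zip(*TRANSITIONS[pitch])
        pitch = random.choices(choices, weights=weights, k=1)[0]
        line.append(pitch)
    return line

print(markov_melody())     # e.g. [62, 65, 67, 70, 67, ...]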

2.4.4 HyperSense Complex

HyperSense Complex110 is a performance-based electroacoustic ensemble consisting of members Somaya Langley, Simon Burton and Alistair Riddell (2005). The ensemble explores shared control of a networked interactive music system. Each member of the ensemble performs using data-glove111 inspired, custom-made interfaces (Figure 2.19). The ensemble’s data-gloves have been constructed out of four flex sensors mounted on each of their hands using heat shrink and Velcro strips (a total of eight flex sensors per performer).

A modular approach (Figure 2.21) has been taken to the hardware and software implementation, developed to interpret gestures captured by their skeletal gloves. An AVR 8535112 micro-controller attached to each glove pair converts the eight channels of analogue sensor voltage information generated by the flex sensors into a digital format. These micro-controllers are worn by the performers in custom-made clothing (Figure 2.20). The rest of the system is divided between two separate computers.

110 http://www.arrowtheory.com/hypersense/index.html viewed 27/10/2006. 111 Data-gloves include the Mattel PowerGlove (http://www.ccs.neu.edu/home/ivan/pglove/faq-0.1.html viewed 27/10/2006) Laetitia Sonami’s Lady’s Glove (http://www.sonami.net/lady_glove2.htm viewed 27/10/2006) and the P5 virtual reality glove (http://www.vrealities.com/P5.html viewed 27/10/2006). 112 AVR 8535 8-bit microcontroller with RISC architecture http://www.atmel.com/dyn/resources/prod_documents/2502S.pdf viewed 27/10/2006.


Figure 2.19 HyperSense Complex flex sensors113
Figure 2.20 HyperSense Complex custom made clothing114

Running on its own dedicated computer hardware, software written in Python is used to analyse the gesture information received from all three AVR 8535 micro-controllers. The system “interpolates the data into a composition framework” (Riddell 2005) and in this manner defines the musical constraints for the different HyperSense Complex interactive performances. Riddell refers to this part of the software architecture as the “composition itself” or, from the perspective of improvisation, the “timing, control and structure”115.

Separate computer hardware, running algorithms constructed in the real-time sound synthesis language SuperColliderServer (SC3)116, is used to render the sonic outcomes. The SuperCollider patch is controlled by the Python composition engine via OSC. This modular approach not only optimises system performance but also reflects the ensemble’s approach to interactive performance.
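The control path described above, a Python composition layer sending OSC messages to a SuperCollider process, can be illustrated with a short sketch. The address pattern, port and parameter value are assumptions made for illustration; only the OSC packet format itself is standard, and the ensemble’s actual code is not reproduced here.

# Sketch of a Python "composition" layer sending OSC to SuperCollider over UDP.
import socket, struct

def osc_message(address, *floats):
    def pad(b):                                  # OSC strings are null-terminated,
        return b + b"\x00" * (4 - len(b) % 4)    # padded to 4-byte boundaries
    packet = pad(address.encode())
    packet += pad(("," + "f" * len(floats)).encode())
    for value in floats:
        packet += struct.pack(">f", value)       # big-endian float32 arguments
    return packet

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sclang = ("127.0.0.1", 57120)                    # sclang's default port (assumed setup)

# e.g. forward an interpreted flex-sensor value to a listening SuperCollider patch
sock.sendto(osc_message("/hypersense/bend", 0.72), sclang)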

113 From http://www.arrowtheory.com/hypersense/index.html viewed 27/10/2006. 114 From http://www.arrowtheory.com/hypersense/index.html viewed 27/10/2006. 115 From private email correspondence with A. Riddell 5/4/2005. 116 http://supercollider.sourceforge.net/ viewed 27/10/2006.


Figure 2.21 HyperSense Complex signal flows117

In performance the hand-mounted flex sensors are typically mapped to a number of effects including118—
• Triggers for sound sample playback
• Sound sample playback rate
• Effects including high pass filter, echo, reverb (often global)
• Compositional parameters and event control

The incoming sensor data is also superimposed onto pre-defined structures, which can evolve over time. In their performance titled Drumming Tree, two performers trigger events along transparent cyclic rhythmic structures while the third performer controls effects. The temporal nature of this structure was defined initially by a cycle of 5 beats, the first of which was slightly accented. Each beat could have a second and even third temporal layer where events could be placed by the performer at any time and the accenting altered (Riddell 2005).
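A speculative Python sketch of this cyclic structure follows: a five-beat cycle whose first beat is slightly accented, with slots that performers fill as they pass. Details beyond what Riddell describes (the sound names, accent amount and layering) are assumptions.

# Speculative sketch of the cyclic "bucket" structure described for Drumming Tree.
CYCLE = 5                                        # beats per cycle, beat 0 accented
buckets = {beat: [] for beat in range(CYCLE)}    # events placed by performers

def place_event(beat, sound, subdivision=0.0):
    """A performer drops an event into a bucket, optionally off the main beat."""
    buckets[beat % CYCLE].append((subdivision, sound))

place_event(0, "low_drum")            # on the accented first beat
place_event(2, "shaker", 0.5)         # a second temporal layer, halfway through beat 2

def play_cycle():
    for beat in range(CYCLE):
        accent = 1.2 if beat == 0 else 1.0       # slight accent on beat one
        for subdivision, sound in sorted(buckets[beat]):
            print(f"beat {beat} +{subdivision:.2f}: {sound} (gain x{accent})")

play_cycle()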

117 Diagram created from Riddell’s description of HyperSense hardware (Riddell 2005). 118 From private email correspondence with Riddell 5/4/2005.


Riddell refers to the triggering of events along the cycling pattern “like filling empty buckets as they pass”119. The performers can also increase the complexity of the sequence by creating sub-divisions below the main pattern.

HyperSense Complex describe themselves as an “ensemble bound together as much by performer awareness as by its networked technology implementation” (Riddell 2005). The concept of ensemble here is central to the aesthetic: performing in the context of a structured and rehearsed improvisation, interacting with, listening to and watching the other players is of equal concern as interacting with and listening to the interactive computer system’s performance. Highlighting the group dynamics Riddell describes— those moments when it is realized that something, like the sound, eye contact or audience reaction, is giving a signal that a point of collective aesthetic agreement has been reached. Nothing more need be said and the moment is gone. It is an obvious and significant instant in an otherwise exploratory time and space marked by individual manoeuvrings (Riddell 2005).

HyperSense Complex presents a different interactive network model to the previously discussed ensembles. For the League, the Hub and the austraLYSIS electroband the network facilitates shared and interrelated control between the individual ensemble members. For HyperSense Complex, the network allows the ensemble to control a shared interactive system— suppose that the performers share the same computational space? That they make the same or near similar performance gestures; that the performance is dominated by the performer’s physical presence, not the display of the technological and, that the performers respond to and influence each other as well as the totality of their performance sound? (Riddell 2005)

119 From private email correspondence with A. Riddell 5/4/2005.


However, HyperSense Complex members have also found that the complexity of interactive response generated by their system can lead to a loss of the sense of a coherent ensemble and of interaction. The complexity of the mappings and the sensitivity of the interface present further challenges, requiring the performers to learn how to perform and interact with such systems.


Chapter Three: Overview of Interactive Electroacoustic Art Practice Part 2

This chapter continues the overview of interactive electroacoustic art practice, starting with a discussion of gesture mapping, custom sensor designs and unique interactive instruments in works by Waisvisz, Schiemer, Sensorband and Hewitt. Additionally, Gibson and Richards’ immersive installation The Bystander Field is discussed as an example of an interactive gallery installation intended for use by the general public. Interactive instrument design for non-musicians and collaborative interactive systems are explored in discussing Jordà’s reacTable, Patten and Recht’s Audiopad, Iwai’s Electroplankton and Blaine’s Jam-O-Drum. Machine listening and improvising software is discussed in the context of Lewis’ Voyager system and Rowe’s Cypher software. The chapter concludes by considering the possibilities of robot interaction and improvisation as demonstrated by Haile, Weinberg’s interactive robotic percussionist.

3.1 Interfaces–Mapping Gesture to Sound

As discussed in Chapter Two, the use of computers in live electronic music systems enabled real-time algorithmic and context-sensitive processing of musical data. Such systems facilitate mappings between input data generated from performance interfaces such as keyboards, dials and sliders and synthesis control parameters, rendered either externally via a synthesiser or—thanks to recent advances in computer processing speeds—rendered internally with software-based synthesis. Sensing technologies can extend this model by providing rich data describing a performer’s gestures, for example the location, speed and force of a particular movement. Furthermore, real-time computer analysis of a performance enables interpretation of the audio signal through such techniques as pitch and attack detection, amplitude envelope following and spectral analysis.


The data generated from these sensing inputs can thus be interpreted and mapped onto sound parameters within the context of an interactive system. This enables sophisticated, unique and potentially intuitive relationships to be explored between performers, audiences and interactive computer music instruments. The following works reveal some of the different approaches taken by composers and performers in exploring these relationships between performance gesture and the real-time generation of sound.

3.1.1 Michel Waisvisz–The Hands

Waisvisz’s The Hands (1984) is probably one of the most recognized and extensively referenced examples of a gestural interface for the live performance of electroacoustic music. In his position as director of the STEIM120 foundation in Amsterdam, Waisvisz has been responsible for the development of many new sensor-based instruments and computer music software applications121. Examples include instruments such as the CrackleBox (1975) and Belly Web (1996) and, in collaboration with Frank Bald, the software applications LiSa122 and JunXion123.

The Hands, as the title suggests, consists of two hand-held sensor arrays that capture Waisvisz’s performance gestures, mapping this data to music algorithms and synthesis parameters. The instrument has gone through a number of revisions, improving the design as Waisvisz’s performance experiences with the instrument accumulated. However, two main versions of the instrument are evident. The first version was used in concert for the first time in The Concertgebouw in Amsterdam in 1984, whilst the second version, The Hands II (Figure 3.1), was built with Bert Bongers (2000:159) in 1991. The improvements implemented in the second version included a single wooden frame as the main body for attaching the sensors and better quality components, increasing reliability. A microphone was also subsequently added to facilitate the live processing of sounds in performance.

120 STEIM – The Studio for Electro-Instrumental Music. http://www.steim.org viewed 1/9/2006. 121 http://www.crackle.org/instruments.php viewed 1/9/2006. 122 LiSa – Live Sampling, a realtime audio manipulation environment. http://www.steim.org/steim/lisa.html viewed 1/9/2006. 123 JunXion, a Mac OS X data routing application that allows the connection of any USB game controller and defines, the translation of each key or joystick action into a specific MIDI event. http://www.steim.org/steim/junxion.html viewed 1/9/2006.


Figure 3.1 The Hands II124
Figure 3.2 The Hands II–close up of the glass mercury tilt switches125

The Hands make use of a number of different sensors, potentiometers and finger-activated switches. The two main gestures sensed by the instrument are the distance between the hands, measured using ultrasound transducers—transmitter in one hand and the receiver in the other—and the orientation of the hands, measured using glass mercury tilt switches (Figure 3.2) arranged in a pyramid shape (Bongers 2000:154-158). This arrangement of glass mercury tilt switches provides a measure of ten different inclinations—
• One neutral position with all the switches closed
• One when the hands are turned upside down with all switches open
• Four orientations of roll and pitch, clockwise and anticlockwise rotation
• Four intermediate stages
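How these two principal sensing channels might be read can be sketched speculatively in Python: hand separation derived from ultrasound time of flight, and a coarse inclination class derived from the tilt switch states. The switch patterns and classification here are invented for illustration; Waisvisz’s SensorLab code is not reproduced.

# Speculative sketch of the two principal sensing channels of The Hands.
SPEED_OF_SOUND = 343.0           # m/s at room temperature

def hand_distance(time_of_flight_s):
    """Transmitter in one hand, receiver in the other: a one-way path."""
    return SPEED_OF_SOUND * time_of_flight_s

def inclination(switches):
    """switches: tuple of booleans, True = closed. Illustrative classes only."""
    closed = sum(switches)
    if closed == len(switches):
        return "neutral"          # all switches closed
    if closed == 0:
        return "inverted"         # hand turned upside down
    if closed == 1:
        return "roll/pitch"       # one of the main tilt directions
    return "intermediate"         # partial tilt between the main orientations

print(hand_distance(0.0015))                      # ~0.51 m between the hands
print(inclination((True, False, True, True)))     # -> "intermediate"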

STEIM’s SensorLab126 is used to convert and interpret the analogue sensor signals into MIDI messages. The SensorLab is effectively a small computer worn by the performer and can be programmed with STEIM’s Spider programming environment127.

124 From http://www.crackle.org/instruments viewed 12/1/2006. 125 From Bongers (2000). Physical Interfaces in the Electronic Arts – Interaction Theory and Interfacing Techniques for Real-Time Performance. In M. M. Wanderley and M. Battier (eds.), Trends in Gestural Control of Music. Paris: IRCAM-Centre p.159. 126 http://www.steim.org/steim/sensor.html viewed 12/1/2006. 127 http://www.steim.nl/software/other/spider/Spider%20Manual.pdf viewed 21/3/2007.


The Hands have been used by Waisvisz to control a number of different synthesisers and software systems including Yamaha DX7s, 802s, TX7s, SY99s and Emu samplers. They have also been used with most of STEIM’s software, specifically Lick Machine, Sam, LiSa and the visual performance program Image/ine128.

Waisvisz specifically refers to The Hands as an instrument. He has actively resisted continually transforming the design and instead placed his focus on developing his technique and skill with the instrument, making modifications only to improve its performance characteristics. Performance gestures can be mapped onto any aspect of sound generation and the control data mediated and interpreted algorithmically; characteristics widely divergent from traditional acoustic instruments.

The sensor data from The Hands is not simply mapped onto sound control parameters, but interpreted and processed by algorithms written in the Spider scripting language, running on the SensorLab interface. Waisvisz (1985) describes these functions as “Control Signal Algorithms”, introducing non-linearity and uncertainty into the mapping chain. Two examples are his ‘GoWild’ and ‘That’s Enough’ algorithms which have been programmed to introduce their own influence over the performance when specific criteria are met. Jordà writes of the algorithms— His ‘GoWild’ algorithm, for instance, deliberately generates erratic information when the input control data exceeds a certain threshold, while the ‘That’s Enough’ algorithm, decides by itself at odd moments that things start to become boring and changes the mapping behaviour (Jordà 2005:69-70).
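The behaviour Jordà describes can be reconstructed illustratively in Python. This is not Waisvisz’s Spider code; the thresholds, timings and scaling are assumptions intended only to show the shape of such control signal algorithms.

# Illustrative reconstruction of two "Control Signal Algorithms": one goes
# erratic above a threshold, the other remaps the controls after a period
# it deems "boring".
import random, time

class GoWild:
    def __init__(self, threshold=0.8):
        self.threshold = threshold

    def __call__(self, value):
        if value > self.threshold:                 # input exceeds the threshold:
            return random.random()                 # deliberately erratic output
        return value                               # otherwise pass through

class ThatsEnough:
    def __init__(self, patience_s=20.0):
        self.patience = patience_s
        self.last_change = time.time()
        self.mapping = lambda v: v                 # current mapping behaviour

    def __call__(self, value):
        if time.time() - self.last_change > self.patience:
            scale = random.uniform(0.5, 2.0)       # "decides by itself" to change
            self.mapping = lambda v, s=scale: min(1.0, v * s)
            self.last_change = time.time()
        return self.mapping(value)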

In this way The Hands not only function as gestural controllers for sound synthesis but also interact with algorithmic processes that form part of the system and clearly define compositional decisions.

128 http://www.image-ine.org viewed 12/1/2006


3.1.2 Laetitia Sonami–Lady’s Glove

Laetitia Sonami’s Lady’s Glove129 is another example of an artist perfecting their performance practice with a new instrument of their own design. The instrument consists of an arm-length black Lycra glove worn on the left hand, embedded with sensors capturing finger, hand and arm movements (Bongers 2000:134). The first version of the instrument was made in 1991. As is the case with Waisvisz’s The Hands, the instrument has undergone a number of revisions, each improving the design based on Sonami’s practical experiences with the performance interface. The current version was made in 1994 with Bongers who was also responsible for the construction of the second version of The Hands.

The Lady’s Glove is embedded with a large number of sensors providing at least nineteen control signals (ten digital and nine analogue) for mapping to sound. The sensors and wiring on the glove are purposefully left uncovered, creating a distinct cyber aesthetic in live performance. STEIM’s SensorLab interface is used to convert and interpret the sensor signals into MIDI. The following sensors are embedded into the glove—
• Digital
  o Five micro-switches on the top of the fingers
  o Four Hall effect transducers located at the tip of the fingers with a magnet on the right hand, used as switches
  o A mercury tilt switch on top of the hand
• Analogue
  o Pressure pad on the inside of the index finger
  o Resistive strips along the fingers and wrist130
  o An ultrasonic transmitter on the inside palm, one receiver on the right arm and one on the left foot
  o An accelerometer that measures the speed of motion of the hand

129 http://www.sonami.net/lady_glove2.htm viewed 1/12/2006. 130 Sensors extracted from the Mattel Power Glove, Mattel Toy Co.


The sensors enable multiple and parallel control of sound parameters. Performances with the instrument may incorporate Sonami’s own voice, mixed into the system via a microphone. Video, lights and hobby motors have also been mapped to the glove’s sensors. For Sonami the instrument provides a platform to explore the mapping of gesture to the control and generation of music. Sonami’s dance like movements on stage are an integral aspect of her performances— Through gestures, the performance aspect of computer music becomes alive, sounds are “embodied”, creating a new, seductive approach131.

Sonami changes the mappings between the glove’s sensors and sound controllers to create new compositions—as is often the case with other interactive instrument designers. From this it is evident once again that the concept of ‘instrument’ revealed here is quite different from the acoustic instruments we are all familiar with. The instrument in this case has become merely the interface (in terms of acoustic instruments: the keyboard, bow and string, or mouthpiece and keys), now dislocated from the sound generating process. To create new performances with the instrument, Sonami is not simply limited to performing the interface in different ways but can also completely redesign the sound making processes of the instrument.

3.1.3 Donna Hewitt–E-mic

Australian composer and vocalist Donna Hewitt also combines her own vocals with sensors for gestural control of sound. Together with Ian Stevenson she embedded sensors into the microphone stand itself to create the E-Mic132 (Hewitt and Stevenson 2003). Influenced by the commonly used gestures and movements of vocal performers using microphone stands, the E-Mic was designed as a controller interface for vocalists who want to have live electroacoustic and digital signal processing elements in their performances (Figure 3.3 and Figure 3.4). Hewitt has developed a number of interactive performance-based compositions for the E-Mic, mapping the instrument’s various controllers to sound processing parameters in Pd, AudioMulch and various VST Plugins133 (Hewitt 2003).

131 From Sonami’s personal web page http://www.sonami.net/lady_glove2.htm viewed 1/12/2006. 132 E-mic: Extended mic-stand interface controller.

Figure 3.3 E-mic close-up of joystick and pressure sensors134
Figure 3.4 E-mic in performance135

The hardware design for the controller was informed by a study in which Hewitt identified key archetype movements vocalists used with microphone stands (Hewitt and Stevenson 2003:123). These included—
• Grasping gestures: grasping the microphone; grasping the stand; and stroking or sliding hands up and down the stand
• Movements of the stand: tilting; swinging; straddling
• Foot and hand tapping
• Freeform arm gestures around the stand without contact

133 Steinberg’s Virtual Studio Technology (VST) is an interface standard for creating audio synthesizer and effect Plugins. http://www.steinberg.net viewed 1/6/2007. 134 From Hewitt, D. (2003). Emic - Compositional Experiments and Real-Time Mapping Issues in Performance, In Proceedings of the 2003 Australasian Computer Music Conference. Edith Cowan University, Perth, Western Australia: Australasian Computer Music Association. 135 Still taken from video of Donna Hewitt and Julian Knowles performing Nodule at the Livewires concert Sydney Conservatorium of Music 1st September 2004. http://www.clatterbox.net.au/media/ viewed 1/6/2007.


Figure 3.5 E-mic sensors and example mappings136

In order to maximise these gestures as ‘natural’ control inputs, eight separate sensor groups are incorporated into the instrument (Figure 3.5). These sensor groups consist of—
• Two pressure sensors on the side of the microphone holder
• A joystick acting as the microphone holder
• Three joystick switches
• Two infrared distance sensors
• A two-dimensional tilt sensor embedded in the stand’s base
• Two slide sensors on the side of the stand
• A foot pressure sensor (FSR137) on the base of the stand
• Three foot switches

136 Diagram based on description of mapping strategies in Hewitt (2003). 137 Force-sensing resistor.


The eight sensor groups provide seventeen separate sensor data streams to be interpreted and mapped. Exploring both one-to-one and many-to-one mappings, the instrument uses an interrelated web of sound processing and sound playback tools including GRM Tools, AudioMulch processors and other VST effects. Hewitt has developed a number of mappings of sensors to software for performance with the E-Mic. A typical mapping set-up used by Hewitt is outlined in Figure 3.5.
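The distinction between one-to-one and many-to-one mappings can be illustrated with a short sketch in the spirit of the set-up in Figure 3.5. The sensor names, parameter targets and scaling values here are assumptions, not Hewitt’s actual mappings.

# Sketch of one-to-one and many-to-one mapping strategies.
def one_to_one(slide_position):
    """Single sensor drives a single parameter, e.g. a filter cutoff."""
    return 200.0 + slide_position * 8000.0         # slide 0..1 -> 200..8200 Hz

def many_to_one(left_pressure, right_pressure, tilt_x):
    """Several sensors combine to drive one parameter, e.g. a reverb mix."""
    squeeze = (left_pressure + right_pressure) / 2.0
    return max(0.0, min(1.0, 0.6 * squeeze + 0.4 * abs(tilt_x)))

print(one_to_one(0.25))                   # 2200.0 Hz
print(many_to_one(0.8, 0.4, -0.3))        # 0.48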

Hewitt’s intention is to place the performer “at the centre of the technology both as sound source and controller of sound processing” (Hewitt 2003). The mappings have been designed to create “a balance between the ‘spectacle’ of the performance and the performer’s need for control over the sonic space” (Hewitt 2003).

The complexity and interrelatedness of the mappings can result in sonic outcomes that often seem unrelated to the gesture responsible. Hewitt observes that— audience members were making imagined correlations between the gestures they were seeing and the sonic outcomes … This generally seemed to happen in the more dense sections of the piece where direct mappings become obscured (Hewitt 2003).

Perhaps it is this complexity or unexpectedness of response that intrigues and engages audiences, composers and performers with instruments such as the E-Mic.

3.1.4 Sensorband–Soundnet

The three members of Sensorband (1994)—Edwin van der Heide, Zbigniew Karkowski, and Atau Tanaka—create live interactive computer music using a diverse collection of gestural interfaces. The sensor technologies utilized by the ensemble include ultrasound, infrared, and bioelectric sensing. Each member of the ensemble performs with their own unique sensor-based instrument (Bongers 1998).


Van der Heide performs with the MIDIconductor, a pair of hand-held sensor arrays based on Waisvisz’s The Hands, discussed in 3.1.1 above138. Gestures performed with the hand-held interfaces affect a number of different synthesis algorithms including real-time phase vocoding, oscillating harmonisers, room simulation with Doppler-shift, real-time granular synthesis, and chaotic looping of samples.

Karkowski performs using hand and arm gestures inside a physical array of sixteen infrared sensor pairs (transmitter and detector) installed in a fixed frame. The infrared sensors detect the position and velocity of the performer’s arms as they move inside the sensor frame. The sensor data is converted to MIDI messages and sent as MIDI continuous controllers to parameters such as dynamics, tempo and articulation. Gesture is also used to control the formal structure of the music. The sensor data is interpreted and mapped to sound using Max/MSP and software based sampling.

Tanaka uses the BioMuse139 medical sensing system to convert electrical signals from his body into musical control parameters. Specifically, Tanaka uses four electromyogram (EMG) channels. The four sensor electrodes are worn via armbands on the inner and outer forearm, tricep, and bicep. The sensor data is converted to MIDI continuous controller data by the BioMuse system. Max/MSP patches developed by Tanaka interpret the biosensor data to control both sound synthesis and real-time computer-graphics software.

Sensorband also performs by climbing on the Soundnet (1996), a large room-sized web of ropes fitted with sensors that measure tension changes as the ensemble members clamber over the structure. The instrument was inspired by Waisvisz’s much smaller instrument—The Web140. Bongers, who assisted with the design of the tension sensors, describes the Soundnet as— an architectural musical instrument of monumental proportions. It is a giant web, measuring 11 x 11 m, strung with 16-mm-thick shipping rope. At the end of the ropes, there are eleven sensors that detect stretching and movement (Bongers 1998:17).

138 The MIDIconductor was also built by Bongers at STEIM. 139 BioMuse is a biosignal musical interface developed by BioControl Systems (Hugh Lusted and Ben Knapp). http://www.biocontrol.com/biomuse.html viewed 1/6/2007. 140 http://crackle.org/Waisvisz'%20Small%20Web%20(Belly%20Web).htm viewed 1/6/2007.

Data from the tension sensors is used to transform digital recordings of natural sounds. A variety of sound-processing techniques are used including filtering, convolution, and waveshaping. Complex and often uncontrollable relationships are created between the sound, sensor interface and performers. Movement on the Soundnet requires considerable physical effort and all movements by all performers are interdependent and influence the sound in some way. Bongers describes the interdependencies as follows— with Soundnet, the notion of control is called completely into question: the instrument is too large for humans to thoroughly master. The ropes create a physical network of interdependent connections, so that no single sensor can be moved in a predictable way that is independent of the others. It is a multiuser instrument where each performer is at the mercy of the others’ actions. In this way, the conflict of control versus uncontrollability becomes a central conceptual focus of Soundnet (Bongers 1998:17).

3.1.5 Greg Schiemer–Spectral Dance

Australian composer and instrument builder Greg Schiemer has been working with interactive systems since the 1970s. Examples of his interactive works include Body Sonata (1974) for Terpistones141 with dancers Philippa Cullen and Jacqui Carroll; Tupperware Gamelan (1977); Monophonic Variations (1986) and Polyphonic Variations (1988) with percussionist Graeme Leak; Spectral Dance (1991–1992); Token Objects (1993); and Pocket Gamelan (2005) for mobile phones (Schiemer and Havryliv 2005:295). Schiemer describes his interactive instruments as “improvising machines”— a term I use to describe my own idiosyncratic approach to algorithmic composition … an instrument that reinterprets input from the real world and allows a performer to influence a musical outcome without entirely controlling it (Schiemer 1999:109).

141 The Terpistone was an adaptation by Léon Theremin of his famous instrument for use by dancers. The control antennae for the Terpistone were metal sheets on which the dancers stood. In this manner the dancers themselves became part of the antenna. In Body Sonata the Terpistone is used as an interface to a custom made peak detector circuit.

In Spectral Dance (1991–1992) an oscillator, housed in Tupperware142 (the UFO), is swung by a performer over a floor-mounted PZM microphone. The sound of the UFO as it is swung around the performer, Doppler shifting in pitch, is picked up by the microphone and sent to Schiemer’s MIDI Tool Box (MTB). The pitch and amplitude of the UFO is analysed by the system and used to determine FM synthesis parameters for rendering on a Yamaha FB-01143 synthesiser module (Figure 3.6).

Figure 3.6 Spectral Dance signal flow144

The MIDI Tool Box (MTB) is based on the Motorola 68HC11 microcontroller145. Programming in assembly code, Schiemer used the MTB to realise a number of interactive works during the 1990s, exploring his interest in cyclic code generators using digital feedback to produce repetitive patterns— The error-correcting property of the algorithm is used to aid and abet deviant musical machine behaviour rather than eliminate it. This behaviour produces a musical response from the machine that counters the playing of the live performer; musical interest on the part of the audience arises from the tension this creates (Schiemer 1999:108).

142 One of the many Tupperware instruments from Schiemer’s Tupperware Gamelan (1977). 143 The Yamaha FB-01 (1985) is an eight voice, 4 operator FM synthesiser design based on John Chowning’s digital frequency modulation (FM). 144 Diagram based on description provided in Schiemer (1999). 145 http://www.hc11.demon.nl/thrsim11/68hc11/ viewed 1/5/2007.

Spectral Dance is structured around three key algorithms—
• Play–continuously varies the timbre by changing FM parameters
• Map–translates the analogue signal into a stream of MIDI information
• Variation–causes the cyclic code generator to produce aberrations (Schiemer 1999:110)
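Schiemer’s assembly implementation is not reproduced here, but the general idea of a cyclic code generator whose occasional errors are cultivated rather than corrected can be sketched in Python. The register width, feedback taps, aberration rate and the mapping to an FM parameter are all assumptions made for illustration.

# Generic feedback shift-register sketch of a cyclic code generator whose
# deliberate "errors" are kept, in the spirit of the Variation algorithm.
import random

def cyclic_generator(seed=0b1011, taps=(3, 0), width=4, steps=24, aberration=0.1):
    state = seed
    for _ in range(steps):
        yield state
        feedback = 0
        for t in taps:                               # XOR of the tapped bits
            feedback ^= (state >> t) & 1
        if random.random() < aberration:             # Variation: occasionally
            feedback ^= 1                            # flip the feedback bit
        state = ((state << 1) | feedback) & ((1 << width) - 1)

# Map each register state to an FM parameter value, e.g. a modulation index
for state in cyclic_generator():
    print(state, "-> mod index", round(state / 15 * 8, 2))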

The result is an interactive work that sits somewhere between instrument, composition and improvisation. Schiemer writes of performing with the system— In performance, interaction with the system affects timbral changes in ways that the performer cannot always anticipate; the performer must work with the sound in its more malleable form - like working with soft metal or clay - rather than simply triggering predefined sound events (Schiemer 1999:109-110).

From the instrument perspective, the work is performed by swinging the UFO in arcs over the floor-mounted microphone (Figure 3.7). Controlling the speed of the UFO (Figure 3.8) alters the pitch via Doppler shift. The pitch of the UFO oscillator is also tuneable. Proximity to the microphone affects the amplitude of the signal. The gestures are reproducible and have deterministic results in the sonic output. Effective gestures can be learnt and reproduced from one performance to the next.


Figure 3.7 Spectral Dance in performance146
Figure 3.8 Spectral Dance–close-up of UFO147

As a composition the work consists of pre-composed decisions and structures, defined in the HC11 assembly code. While non-deterministic variation is inherent in the underlying structures of the work, each performance still maintains a strong sense of integrity and the concept of listening to the same composition again. While any one performance may not realise all the sonic potentials inherent in the work there is still the distinct sense that it is the same composition being performed.

As a performed improvisation the system is highly reactive. The underlying cyclic code generators and the idiosyncratic sensitivities of the UFO, microphone and physical space resonances can result in unpredictable outcomes, although still constrained by the compositional parameters defined in the software. As Schiemer describes— Spectral Dance spontaneously generates timbres beyond the control of my imagination. Its performances create an illusion of control, but never without surprise, both for myself as composer or performer and for the audience (Schiemer 1999:111).

This interplay between a highly reactive complex system, predetermined methods for interpreting inputs and a repertoire of learned physical performance gestures creates a system that is both a composition and an improvising instrument. The result is a sonic balancing act between the known and the unpredictable.

146 Still taken from unpublished video, Schiemer, G. (1997). Recorded at Australian Film Television & Radio School, Director Adam Sebize, Camera Chris Fraser. 147 Still taken from unpublished video, Schiemer, G. (1997). Recorded at Australian Film Television & Radio School, Director Adam Sebize, Camera Chris Fraser.


3.2 Installation and Audience Interaction

By combining spatial sound, multi-screen video projections and sensor technology, interactive rich media environments can be created in which the participants’ movements in the installation space affect both sound and video. Ultrasound, infrared, laser, floor pressure pads and video tracking are some of the more frequently used sensor technologies for tracking the public’s movements in and through such installation spaces. The term interactive is often applied freely in such contexts, encompassing a variety of possibilities from the direct triggering of video and audio playback to dynamic transformations of the video and sound material.

3.2.1 Gibson and Richards’ The Bystander Field

The Bystander Field148 (2006) is another example of an interactive installation in which the audience becomes an integral player in the real-time realisation of the work. The basis for the installation is an archive of over three-thousand Sydney crime scene photographs dating from 1945–1960, conserved by the Justice and Police Museum (Richards 2006b, a). The archive is incomplete in that it does not contain case histories for the photos, but instead fragmentary notes, photographer name, location, dates and the like are all that remain149. This degradation of the archive data gave Gibson and Richards the creative freedom to construct their own stories from the images through “poetic reinterpretation and fictive play” (Richards 2006b).

148 The Bystander Field team – Ross Gibson (director/writer), Kate Richards (producer), Aaron Seymour (graphic designer), Greg White (sound design), Jon Drummond (programmer), Daniel Heckenberg (programmer), Tim Gruchy (installation design), Keir Smith (database programmer). 149 Flood damage, losses due to relocation, and overwhelming Police workloads lead to the loss of the records explaining the cases.


Figure 3.9 The Bystander Field–model of the physical installation150

The Bystander Field is an immersive visual and sound environment. It is an interactive work that relies on measures of audience attentiveness to directly influence the generative and narrative flow of the sonic and visual system behaviour. As the developer of much of the software for the system, I have an insight into the interactive aspects of the work that is often not possible from the publicly available documentation of other, similar media-arts interactive installations.

The physical installation consists of five screens arranged in a pentagon, thus creating an enclosed space (Figure 3.9). Each screen has its own data projector. Five equally spaced speakers, supported by a subwoofer, provide an equivalent spatial acoustic canvas. The pentagonal space is approximately seven metres wide and can accommodate up to twelve participants151. An infrared camera placed above the pentagon is used to analyse participants’ movements in the space using the EyesWeb152 video analysis software. The space is lit with infrared light to enable effective video tracking of participants while not interfering with the immersive five-screen data projections.

150 From Richards, K. (2006a). ‘Bystander’ – a Responsive, Immersive ‘Spirit World’ Environment for Multiple Users, In Proceedings of the 2006 Responsive Architectures: Subtle Technologies. University of Toronto, Toronto Canada. 151 A similar though larger (using twelve projectors in six stereoscopic pairs) immersive multi-screen environment is used in the iCinema system http://www.icinema.unsw.edu.au viewed 1/10/2006. 152 http://www.eyesweb.org viewed 1/10/2006.


Figure 3.10 The Bystander Field example screen capture153

Visually the work consists of three distinct elements (Figure 3.10)—
• Animated particles behaving like a ‘flock’, constantly moving within the virtual space created by the pentagon
• Groupings of crime scene images, revealed in the flock’s wake
• Poetic texts written by Gibson, associated with the crime scene images.

In a similar fashion there are three distinct layers to the sonic aspects of the work—
• An evolving (background) layer of immersive sound
• An identifiable sound texture (middle ground), spatialised to the particle flock location, derived dynamically from grains and distortions of the background layer154
• Pre-rendered samples (foreground) linked to the system’s choice of crime scene image and text combination. These are also spatialised to the screen locations of the images and texts.

The Bystander Field is rendered in real-time by a network of computers155 utilising both narrative and generative processes. Functioning as a generative system, the images and texts are stored in a database, tagged extensively with descriptive metadata. Combinative and state-specific rules are used by the software to generate clusters of content, based on associated concepts such as meta-narrative threads, tone, or gravitational pull (Richards 2006b). The system also utilises narrative structures, with both sonic and visual material revealing a flow of linear content and structure, organised around an internal timeline common to all aspects of the system.

153 From Richards, K. (2006b). Report: Life after Wartime: A Suite of Multimedia Artworks, Canadian Journal of Communication, 31 (2). 154 Spatialisation is performed using Ville Pulkki’s Vector Base Amplitude Panning (VBAP). VBAP is a method for positioning virtual sources to arbitrary directions using a setup of multiple loudspeakers. http://www.acoustics.hut.fi/research/cat/vbap viewed 1/10/2006. A custom reverb and gain model is used to simulate distance effects.

Integral to the realisation of the work is the interactive model underlying the system. Richards describes the interactivity in the following way— The central premise of Bystander is that the more quiet and attentive the audience, the more aesthetically coherent and semantically divulgent the ‘world’. Ideally visitors can gain the ‘trust’ of the space and perform a dance of intimacy with the ‘world’ and its complex narrative matrix of image text and sound (Richards 2006b).

Measures of participants’ movement, mass—detected through surface area—and attentiveness inside the pentagon projection space are made by the infrared video analysis system. These are used to define a dynamically changing global system ‘state’ variable. Based on this measure of the behaviour of visitors in the space, a variable and volatile world of audiovisual narrative evolves endlessly but cogently (Richards 2006a:456). For example, an inattentive and restless audience results in a high system state value. This causes agitated, erratic and more compressed particle flock behaviour, less coherent image and text combination choices, more distorted sound processing and more dynamic range from the background textures.

The system responds to audience behaviour changes quickly and incrementally. As participants become more attentive and less jittery the particle flock moves closer to them, loosens up and slows down somewhat. Photos and texts are chosen to reveal more of the narrative, while sounds become less distorted and granulated.
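The logic of this global state variable can be sketched in simplified form. The weighting, smoothing and parameter ranges below are illustrative assumptions only; the installation’s actual implementation spans several languages and machines, as noted in the footnote that follows.

# Simplified sketch of the global 'state' logic: audience restlessness pushes
# the state up, attentiveness lets it fall, and the state is mapped to flock
# and audio parameters. Constants are illustrative only.

state = 0.5                                   # 0 = calm/attentive, 1 = restless

def update_state(movement, mass, smoothing=0.1):
    """movement and mass are normalised 0..1 measures from the video analysis."""
    global state
    agitation = 0.7 * movement + 0.3 * mass   # combined restlessness measure
    state += smoothing * (agitation - state)  # quick, incremental response
    return state

def render_parameters(s):
    return {
        "flock_spread":     1.0 - 0.6 * s,    # higher state -> more compressed flock
        "flock_speed":      0.2 + 0.8 * s,    # ... and more erratic movement
        "image_coherence":  1.0 - s,          # less coherent image/text choices
        "audio_distortion": s,                # more distorted, granulated sound
    }

print(render_parameters(update_state(movement=0.9, mass=0.4)))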

155 A number of software languages have been used to implement the system including OSC, Max/MSP, C++, Python, SQL, EyesWeb.


The mix of narrative and generative structures, combined with a highly responsive yet simple interactive implementation, results in an engaging and dynamic installation. As an interactive experience there is enough immediacy of response to reinforce the interactive paradigm, yet the complexity of the response supports the potential for multiple revisits to the work. For Gibson and Richards, the room becomes a kind of performative story-generator (Richards 2006a:456).

3.3 Collaborative Interactive Virtual Environments

Like the interactive installation previously discussed, the following systems are intended for use by the general public in a collaborative setting, as opposed to trained performers or the systems’ designers themselves. In these environments, however, the novice user interacts with the system in a much more controlled and directed way, discovering and exploring the interactive sonic potentials encoded in the work, with the intent of generating rich and engaging musical experiences. Sensors, video tracking and video and sound projection can be combined and integrated to create a coherent, virtual, interactive space. By projecting video onto the same physical space as the sensing space, participants can not only hear but also visualise the effects of their gestures in the interactive environment.

3.3.1 Sergi Jordà–reacTable

The reacTable (2005) is a collaborative interactive instrument developed by Sergi Jordà and his team of digital luthiers156 at the Music Technology Group within the Audiovisual Institute at the Universitat Pompeu Fabra in Barcelona, Spain. Clearly defined by the development team as an instrument (Kaltenbrunner et al. 2006), it consists of a translucent round table on which small custom made objects can be placed and moved (Figure 3.11). Video camera(s) under the table continuously track the placement, movement and orientation of the objects by identifying the pattern printed on the underside of each object. Performers of the instrument can place the objects on the table, move them to different positions and change their orientation. In so doing the performers control sound synthesis parameters. A data projector, also located under the table, displays graphic visual feedback of the synthesis patches, spatially aligned with the placement of the physical control objects on top of the table, rear-projected back onto the table’s translucent surface.

156 The reacTable has been developed by Sergi Jordà (director), Martin Kaltenbrunner, Günter Geiger, Marcos Alonso (graphics engine), with contributions by Ross Bencina (computer vision engine), Ignasi Casasnovas (intern), Gerda Strobl (intern), Hugo Solis. Project homepage at http://www.iua.upf.es/mtg/reactable/ viewed 22/3/2007.

Figure 3.11 reacTable157

The reacTable is designed to be a collaborative instrument, with all performers physically located either at the same table or at networked remote table locations. Use of the instrument is intended to be intuitive and explorative, producing sonic outcomes that are both engaging and sonically challenging. The instrument also aims to support the needs of both novices and professional electroacoustic artists.

In its current version the reacTable uses Pd and SuperCollider as the underlying synthesis engines (Jordà et al. 2005). Sound units supported by the system include generators, filters, controllers, control filters, mixers and clock synchronisers. These units are matched to the physical objects that can be chosen by the performers. Placement and movement of the objects on the table define the “topological structure” 158 and the parameters of the sound synthesis processes. External audio signals can also be routed into the system, enabling the input of acoustic instruments and other synthesisers for manipulation and transformation by the tabletop interface.

157 From http://www.iua.upf.es/mtg/reactable/?pic=linz06.jpg viewed 22/3/2007. 158 http://www.iua.upf.es/mtg/reactable/ viewed 22/3/2007.


Conceptually, the reacTable is a physical, collaborative interface to a patching synthesis environment159. One key departure point is the use of dynamic patching, in which connections between different sound modules are determined by the object type and the proximity between them. In this way, simply moving objects around on the table can dynamically both break existing connections and create new connections between processing units. An interactive exploration of the sonic potentials available on the table is achieved through the physical movement of objects in the space.
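A toy model of this proximity-based dynamic patching follows. The connection radius, object types and compatibility rules are invented for illustration; the reacTable’s actual rules are more elaborate than this sketch suggests.

# Sketch of proximity-based dynamic patching: the patch is recomputed as
# objects move, so links form and break with the physical layout.
import math

CONNECT_RADIUS = 0.25                          # normalised table units (assumed)

objects = [
    {"id": "osc1",    "kind": "generator",  "pos": (0.30, 0.50)},
    {"id": "filter1", "kind": "filter",     "pos": (0.45, 0.55)},
    {"id": "lfo1",    "kind": "controller", "pos": (0.90, 0.10)},
]

def can_connect(a, b):
    """Generators feed filters, controllers modulate anything (simplified)."""
    return (a["kind"], b["kind"]) in {("generator", "filter"),
                                      ("controller", "generator"),
                                      ("controller", "filter")}

def patch(objs):
    """Recompute the patch: moving an object makes or breaks links."""
    links = []
    for a in objs:
        for b in objs:
            if a is not b and can_connect(a, b):
                if math.dist(a["pos"], b["pos"]) < CONNECT_RADIUS:
                    links.append((a["id"], b["id"]))
    return links

print(patch(objects))      # [('osc1', 'filter1')]; lfo1 is too far away to connect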

Another defining feature of performances with the reacTable is that the construction of patches becomes part of the composition structure. Jordà et al. writes of the process— The reacTable has to be built and played at the same time. Each piece has to be constructed from scratch starting from an empty table (or from a single snapshot which has been reconstructed on the table before the actual performance). This is a fundamental characteristic of this instrument, which therefore always has to evolve and change its setup. Building the instrument is equivalent to playing it and vice-versa (Jordà et al. 2005).

There are many projects currently exploring similar models of interaction and interface to the reacTable. The reacTable development team maintains a listing of over forty similar projects related to tangible music interfaces and table-top controllers160.

3.3.2 James Patten and Ben Recht–Audiopad

Audiopad161 (2002), developed by James Patten and Ben Recht at MIT’s Media Lab, employs a similar collaborative interface to the reacTable. Multiple players interact with the system by positioning objects (described as pucks) around a table (Patten et al. 2002). Instead of the video tracking approach used in the reacTable, Radio Frequency (RF) tags are embedded into the pucks and their location and movement detected on the table using hidden antennas (Figure 3.12). Graphic feedback of the system’s state is projected back onto the table via a data projector located above. Departing from the reacTable’s model of dynamic sound synthesis patching, Audiopad’s pucks are used to control and interact with pre-constructed sampled material in Ableton Live162. Objects (pucks) are provided to manipulate rhythm, melody, loops and textures. Blaine writes of Audiopad’s use of pre-constructed material— While the use of pre-determined musical events limits certain aspects of an individual’s creative control, it has the benefit of creating more cohesive sound spaces, particularly with multiple players (Blaine 2006).

159 For example Max/MSP or Pd. 160 http://www.iua.upf.es/mtg/reactable/?related viewed 22/11/2006. 161 Project homepage at http://www.jamespatten.com/audiopad viewed 22/11/2006.

Figure 3.12 Audiopad’s tabletop interface163

3.3.3 Toshio Iwai–Composition on the Table, Electroplankton and Tenori-on

Toshio Iwai is a Japanese multimedia artist whose interactive works integrate sonic and visual elements, both in gallery installation contexts and in computer games. Iwai’s Composition on the Table (1998) presents another interpretation of the table-top interface for collaborative interaction with sound. Many aspects of this work can be seen transferred into Iwai’s video game Electroplankton (2005)164, developed for the Nintendo DS165 handheld gaming system.

162 Ableton Live is a loop-based software music sequencer http://www.ableton.com/live viewed 22/11/2006. 163 From http://www.jamespatten.com/audiopad/ viewed 22/11/2006. 164 http://electroplankton.nintendods.com/flash.html viewed 22/11/2006. 165 http://www.nintendo.com/channel/ds viewed 22/11/2006.


The physical interface for Composition on the Table consists of switches, dials, turntables and sliding boards positioned on four white tables. Computer graphics are projected onto the tables via ceiling-mounted video projectors. As with the previous two examples (reacTable and Audiopad), the graphic projections create interfaces that are integrated with and responsive to the physical interface objects. Participants are able to move the interface objects on the table and in so doing affect both the sound and video projection. The installation is intended to support collaborative interactions (multiple users) by novice, non-expert participants.

Iwai’s Electroplankton is a direct extension of many of the ideas developed in Composition on the Table. However, in this instance interactions are not collaborative but instead take place between a solo user and the interactive game system. Electroplankton takes advantage of the dual screen and stylus touch screen interface of the Nintendo DS portable gaming system (Figure 3.13). By drawing with the stylus (click, draw, spin, or tap) on the animated digital ‘plankton’, users are able to interact with both the music and the behaviour of the digital creatures (Blaine 2006). There are ten different plankton (discrete game sections) users can choose to interact with, each with distinctive interactive possibilities and musical outcomes.

Figure 3.13 Electroplankton–the split interface takes advantage of the Nintendo DS dual screen format166

Users can sample their own voice using the game controller’s inbuilt microphone. The stylus can then be used to transform the plankton creatures and in so doing also affect the sample playback. The microphone can also be used to affect the animation of certain plankton. Other plankton can be interacted with to affect such things as beats, loops, layers, instrumentation, and tempo.

166 From http://www.nintendo.com/channel/ds viewed 22/11/2006.

The sound world created by interacting with Electroplankton is diatonic, melodic and gently rhythmic with identifiable acoustic instrument samples (pianos, bells) layered over a ‘bubbly’ water soundscape. Interactions via the stylus and microphone are strongly linked to identifiable sonic and visual results.

Still in development, Iwai’s Tenori-on167 (2006) is an interactive instrument being created in collaboration with Yamaha. With obvious connections to Iwai’s aesthetic as expressed in Electroplankton, the Tenori-on, in its current state of development, generates simple diatonic melodic patterns with strong connections between the interactive performance gesture, associated image transformations and the sonic result. The instrument consists of a hand-held frame holding a grid of sixteen by sixteen LED switches (Figure 3.14). Touching one of the LED switches causes ripples of light to trace over the surface of the LED grid accompanied by changes to the melodic patterns being produced. Its physical design is an attempt to re-connect instrument design with sonic outcome. Iwai writes—
In days gone by, a musical instrument had to have a beauty, of shape as well as of sound, and had to fit the player almost organically. [...] Modern electronic instruments don’t have this inevitable relationship between the shape, the sound, and the player168.

Once again the focus is on interaction providing the novice user with a sense of music making that would otherwise not be possible without significant training. The compromise is a system with a limited set of sonic possibilities to explore.

167 http://www.global.yamaha.com/design/tenori-on/swf/index.html viewed 1/12/2006. 168 from http://www.global.yamaha.com/design/tenori-on/swf/index.html viewed 1/12/2006.


Figure 3.14 Tenori-on169

3.3.4 Tina Blaine–Jam-O-Drum

Tina Blaine’s Jam-O-Drum (1998) is an interactive music system based on a drum circle for up to six players (Blaine and Perkis 2000). Designed initially together with Tim Perkis170, the main focus of the interactive design was to create a collaborative performance environment enabling untrained, non-musicians to experience ensemble music making (Figure 3.15).

Figure 3.15 Jam-O-Drum171

The physical installation of Jam-O-Drum consists of six drum pads installed in a large circular table. The performers are positioned around the table at the drum pad locations. This spatial arrangement is intended to create a sense of equality between the performers. The table also acts as a video projection surface. Graphics and

169 http://www.global.yamaha.com/design/tenori-on/swf/index.html viewed 1/12/2006. 170 Perkis is also a member of the computer network ensemble The Hub, see Chapter 2. 171 Jam-O-Drum at the Experience Music Project in Seattle, WA. Photo: Elaine Thompson. From http://www.jamodrum.net/exhibits.html viewed 1/12/2006.

computer animations are projected onto the table, dynamically responding to the participants’ drumming performances and also assisting in directing the interactions.

Participants collaborate both between themselves and with the interactive computer system. However, the interactive system functions more as a facilitator of interaction between the performers than as a player in its own right. By changing the underlying interactive model, participants—regardless of their skill level—can perform in the context of different collaborative musical structures.

A number of interactive models have been developed for the Jam-O-Drum. The first interactive designs quantised players’ drum beats to fit a specific rhythmic pattern and explored mapping hit velocity to various effects. These initial models were considered unsuccessful by Blaine as the performers found the system confusing and unresponsive (Blaine and Perkis 2000).

Simpler interactive designs were developed to achieve a more responsive system. In a call-and-response style model, players were directed to mimic and improvise on patterns delivered by the interactive table. Computer graphics were displayed on the table’s surface to direct the performance—when to play, when to listen and to create different ensemble sub-groupings. In this model more experienced players found they were able to improvise within the compositional form constructed by the system. In another model, performers bounced virtual balls across the table with each hit on their drum pad, in turn triggering Gamelan-style bell samples. In this computer game inspired interactive system Blaine felt it was difficult to generate specific rhythmic patterns. A more textural and ambient sound model was also developed, in which computer graphic animations were altered, with the performers affecting parameters such as colour, hue, saturation and brightness.

The Jam-O-Drum is an interactive system designed to give novice performers the experience of collaborative music making. For Blaine the measure of the success of the interactions is in terms of responsiveness, performers’ control of the system and

the extent to which participants feel they are in control of the system. Blaine writes—
It was difficult to find the right balance giving rich possibilities without leading to musical chaos (Blaine and Perkis 2000).

Jam-O-Drum participants are required to perform on six drum pads, the computer directing and facilitating their collaborative performance. In the following interactive systems the computer is intended to function as an independent virtual musician, listening to a human musician perform and responding with its own part.

3.4 Machine Listening–Human and Computer Interactive Performances

The concept of machine listening plays a central role in many interactive works. Machine listening in this context implies the ideal of achieving a model in which the computer is able to listen to and subsequently analyse a musical performance in much the same way as a trained human musician does. It is made possible through techniques such as real-time pitch tracking (Puckette et al. 1998) and beat detection (Scheirer 1998).
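Neither Puckette’s pitch tracker nor Scheirer’s beat detector is reproduced here. As a rough illustration of the kind of signal analysis involved, the following Python sketch estimates the fundamental frequency of a single audio frame using naive autocorrelation; the function name, frame length and threshold are my own assumptions, and the code is a simplified stand-in rather than either published algorithm.

import numpy as np

def estimate_pitch(frame, sample_rate=44100, fmin=60.0, fmax=1000.0):
    """Estimate the fundamental frequency (Hz) of a mono audio frame using a
    naive autocorrelation search. Returns None if no clear periodicity is found."""
    frame = frame - np.mean(frame)                  # remove any DC offset
    corr = np.correlate(frame, frame, mode="full")  # full autocorrelation
    corr = corr[len(corr) // 2:]                    # keep non-negative lags only
    lag_min = int(sample_rate / fmax)               # shortest period of interest
    lag_max = int(sample_rate / fmin)               # longest period of interest
    if lag_max >= len(corr):
        return None
    lag = lag_min + int(np.argmax(corr[lag_min:lag_max]))
    if corr[lag] < 0.3 * corr[0]:                   # weak periodicity: treat as unpitched
        return None
    return sample_rate / lag

# A 440 Hz sine tone should be tracked to roughly 440 Hz.
t = np.arange(2048) / 44100.0
print(estimate_pitch(np.sin(2 * np.pi * 440.0 * t)))

A real-time system would run such an estimator (or, more realistically, a robust tracker of the kind Puckette describes) on successive overlapping frames, passing the resulting pitch stream on to the interactive layer.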

As we shall see, this concept of a machine listener has been used in a number of different interactive contexts. For George Lewis (2000), real-time computer analysis of his performance provides the basis for the interactive system’s improvisatory responses. The computer becomes another performer in the improvisation. In an idealised sense, in this model the system should respond with the same musicality, responsiveness and inspiration as a human collaborator might provide.

Score following techniques are another example of the machine listening metaphor. In this model both the performer and the computer have access to the same score, with the computer using pitch-tracking to follow the performer’s progress through the piece. In contrast to the rigid electroacoustic tape and instrument paradigm, score following enables precise synchronisation between the performer and the

computer, allowing for the performer’s nuances to be precisely tracked. Score following will be discussed in more detail in Chapter Four.

Collaborative interactive performances can also be achieved through the use of machine listening. Many of the examples discussed so far incorporate aspects of this model. In Weinberg’s Haile we find the concept of the machine listener developed into a robot percussionist able to collaboratively improvise with other performers (Weinberg et al. 2005).

3.4.1 George Lewis–Voyager

George Lewis’ Voyager is an interactive improvising system in which the computer listens to an improviser’s performance and in parallel, creates its own independent improvisation (Lewis 2000). Inspired by the League’s172 computer networked improvisations and incorporating his own expertise as a trombonist and jazz improviser, Lewis developed Voyager in a dialect of the programming language Forth173 during 1986–1988 and continues to improve the programme. Precursors to Voyager include Rainbow Family (1984) and Chamber Music for Humans and Non-Humans (1985). Lewis describes Voyager as a—
nonhierarchical interactive musical environment which privileges improvised music … A computer program analyzes aspects of an improviser’s performance in real time, using that analysis to guide an automatic composing program that generates complex responses to the musician’s playing (Lewis 1999:103).

Voyager’s internal structure is outlined in Figure 3.16. Up to two improvisers can perform with the system. Pitch followers are used to convert acoustic instrument performance into MIDI pitch and velocity data streams174. If MIDI keyboards or similar interfaces are used, these can be connected directly into the system. The

172 See Section 2.4.1. 173 Forth is a stack-based programming language developed around 1970 by Charles Moore while working at the National Radio Astronomy Observatory. HMSL (Hierarchical Music Specification Language) by Phil Burk, Larry Polansky and David Rosenboom and FORMULA (FORth MUsic LAnguage) by David Anderson and Ron Kuivila are extensions to Forth developed for computer music programming. 174 Lewis notes that pitch followers are not perfect (Lewis 1999).

captured raw performance data is analysed by the ‘mid-level smoothing routine’ to provide averages of pitch, velocity, probability of note activity and spacing between notes (Lewis 2000:35). The results of this analysis are used by the ‘setresponse’ routine to affect the system’s response to the human improvisers, influencing parameters such as tempo, probability of playing a note, the spacing between notes, melodic interval width, choice of primary pitch material, octave range, microtonal transposition and volume (Lewis 2000:35-36).

Figure 3.16 Voyager–outline of internal structure175

Voyager performs its own improvisation via sixty-four independent MIDI controlled voices or players. The voices are rendered by a MIDI sampler using identifiable real world timbres in preference over synthesised non-referential timbres. Operating

175 Derived from a description of the Voyager programme (Lewis 2000).

aperiodically and asynchronously every five to seven seconds, the software recombines the voices into new groupings or ensembles. The ‘setphrasebehavior’ routine associates each new ensemble created with a new behaviour defining parameters such as—timbre, melody algorithm, microtonal pitchset, volume range, microtonal transposition, tactus (beat), tempo, note probability, note spacing, interval-width range, chorusing, reverb, portamento, tessitura (range) and tempo change over time (Lewis 2000:35). The number of players assigned to an ensemble is variable. Different ensemble configurations with different sonic behaviours (including metre) can be active at the same time. The system also decides how to interpret the human improvisers’ performances. Options include—listen to one, both or none; imitate, oppose or ignore.
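Lewis’s published descriptions do not include source code, so the following Python sketch only illustrates, under my own assumptions about data structures and parameter ranges, the aperiodic regrouping described above: every five to seven seconds a subset of the sixty-four voices is gathered into a new ensemble and assigned a freshly generated behaviour. None of the names or values are taken from Voyager itself.

import random

VOICE_COUNT = 64  # Voyager's pool of MIDI-controlled players

def set_phrase_behaviour():
    """Assign a new behaviour to an ensemble (illustrative parameters only)."""
    return {
        "tempo": random.uniform(40, 200),
        "note_probability": random.random(),
        "interval_width": random.randint(1, 12),
        "octave_range": random.randint(1, 5),
        "volume_range": (random.randint(20, 60), random.randint(70, 120)),
    }

def regroup(voices):
    """Recombine a variable number of voices into a new ensemble with its own behaviour."""
    ensemble = random.sample(voices, random.randint(1, len(voices)))
    return ensemble, set_phrase_behaviour()

def run(duration_seconds=60):
    voices = list(range(VOICE_COUNT))
    elapsed = 0.0
    while elapsed < duration_seconds:
        ensemble, behaviour = regroup(voices)
        print(f"t={elapsed:5.1f}s  ensemble of {len(ensemble):2d} voices  "
              f"tempo={behaviour['tempo']:.0f}")
        elapsed += random.uniform(5.0, 7.0)   # next regrouping in five to seven seconds

run()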

Lewis clearly considers Voyager as a performer (a multi-instrument player), not as an interactive instrument (Lewis 1999:103). When specified, the software can play independently of human performer input, using random (white noise) functions to determine system parameters. In this manner the computer’s independent performance can affect the human performer just as the human performer can affect the computer. The interactive performance does not fit a leader-follower paradigm but rather is the result of negotiation between computer and improviser (Figure 3.17). The only communication possible with the system is via performance. There are no pedals, sliders or dials to control the system with. Performers are required to change their improvisation to change the system.

Figure 3.17 Voyager system as two parallel streams176

176 Derived from text description of the Voyager system (Lewis 1999).


Voyager’s overall form is the combined outcome of its individual and independent players’ actions (heterarchical). Lewis compares this concept of an intelligent orchestra in Voyager to the Javanese gamelan orchestra (Lewis 1999:107). In this context Lewis describes four different modes of improvisation—
• Kembangan (“flowering”)—an improvisation that adds beauty
• Isen-isen (“filling”)—an improvisation that “pleasantly fills a vacuum”
• Ngambang (“floating”)—musicians who are improvising without clear knowledge of where the music is going
• Ngawur (“blunder”)—an out-of-style or irrelevant improvisation.

Lewis considers that Voyager has its own sound (Lewis 1999:105). Independent of the timbres used by the MIDI voices, the improvisation style and instrument choice of the human performers, Voyager’s internal software architectures, non-motivic approach to form and its aggregation of individual behaviours create an identifiable sound persistent across different performances. Lewis also considers that Voyager has led to a concept of a computer music that reflects an African-American cultural practice (Lewis 1999:109).

3.4.2 Robert Rowe–Cypher

Another composer-created interactive music system in which the computer functions as a virtual performer is Robert Rowe’s Cypher (Rowe 1993). Although intended as a generic programming environment for interactive music, the software has been designed within the context of Rowe’s models of interactive music systems177. The software provides a practical example of his machine listening paradigm, designed to respond to a performance as a trained musician would, albeit an artificial one. Rowe writes of the design criterion for Cypher—

177 See Chapter 4.


The most fundamental design criterion is that at any one time one should be able to play music to the system and have it do something reasonable; in other words, the programme should always be able to generate a plausible complement to what it is hearing (Rowe 1993:40).

Rowe (1993:6-7) describes Cypher as an example of a performance-driven system as opposed to score-driven, with its musical responses functioning as a player rather than an instrument178. In terms of internal data processing the software consists of two main components, a listener and a player. The listener analyses and classifies performance input while the player generates new musical output using various algorithmic techniques179. Both the listener and player function on two hierarchical levels, contributing to a sense of machine cognition. These functions of the listener and player can be summarised as follows—
Listener:
• Level 1: performs analysis of individual events, measuring parameters including density, register, speed, dynamic, duration and harmony.
• Level 2: analyses level 1 data to detect phrases and determine regular or irregular behaviour within phrases.
Player (Composition):
• Level 1: transformation and generation of individual sound events.
• Level 2: effects regularity, groupings, and the types and direction of change in response to the analysis results of the level 2 listener.

Two listeners are used, one to listen to the MIDI input from the human performer and the other functioning as an internal ‘critic’ listening to the output of the system’s internal player.
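The following Python sketch is a schematic, hypothetical rendering of the layered listener/player architecture and the second ‘critic’ listener described above. Cypher itself is a far richer program and none of these function names, features or thresholds come from Rowe’s code; the sketch simply shows level-1 event analysis feeding level-2 phrase analysis, with a critic filtering the player’s output before it is sent on.

def listener_level1(event):
    """Classify a single MIDI-like event given as (pitch, velocity, duration)."""
    pitch, velocity, duration = event
    return {
        "register": "high" if pitch > 72 else "low",
        "dynamic": "loud" if velocity > 90 else "soft",
        "speed": "fast" if duration < 0.2 else "slow",
    }

def listener_level2(features):
    """Judge whether a phrase of level-1 features behaves regularly."""
    dynamics = [f["dynamic"] for f in features]
    return {"regular": len(set(dynamics)) == 1, "length": len(features)}

def player_level1(event):
    """Transform one event: transpose it and echo it more quietly."""
    pitch, velocity, duration = event
    return (pitch + 7, max(velocity - 20, 1), duration)

def player_level2(phrase_report, responses):
    """Adjust the whole response phrase according to the phrase-level analysis."""
    if phrase_report["regular"]:
        return responses                      # leave a regular phrase untouched
    return list(reversed(responses))          # disturb an irregular one

def critic(responses):
    """Second listener: veto responses that fall outside a plausible range."""
    return [(p, v, d) for (p, v, d) in responses if 0 <= p <= 127]

phrase = [(60, 100, 0.1), (64, 60, 0.3), (67, 110, 0.1)]
features = [listener_level1(e) for e in phrase]
report = listener_level2(features)
output = critic(player_level2(report, [player_level1(e) for e in phrase]))
print(report, output)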

178 Rowe’s classification dimensions are described in Section 4.3.2. 179 Cypher uses MIDI to input performance data and output its responses.


The second listener represents a compositional ‘critic’, another level of observation and compositional change, designed to monitor the output of the Cypher player and apply modifications to the material generated by the composition methods before they are actually sent to the sound making devices (Rowe 1993:200).

Defining relationships between the various input and output functions via the software’s graphic interface creates different Cypher interactive performances. Rowe’s approach of generating sophisticated, intelligent systems by combining the behaviours of much simpler individual units (multiagent systems) is inspired by Marvin Minsky’s The Society of Mind (Minsky 1988).
The central idea of Minsky’s theory is that the performance of complex tasks, which require sophisticated forms of intelligence, is accomplished through the coordinated action and cross-connected communication of many small, relatively unintelligent, self-contained agents. Similarly, the basic architecture of Cypher, on both the listening and playing sides, combines the action of many small, relatively simple agents. Agencies devoted to higher-level, more sophisticated tasks are built up from collections of low-level agents handling smaller parts of the problem and communicating the progress of their work to the other agents involved (Rowe 1993:208-209).

A similar approach can be found in Lewis’ Voyager system, discussed in Section 3.4.1, in which complex, musically interesting behaviour is created from the combined effects of simpler individual components.

3.4.3 Gil Weinberg and Scott Driscoll–Haile

Giving anthropomorphic form to Rowe’s machine listener, Haile180 is a robotic percussionist created by Gil Weinberg and Scott Driscoll that interacts with human percussionists (Weinberg et al. 2005). Haile ‘listens’ to the performance of human players and responds with its own improvisation derived from real-time analysis of what it is hearing. In this interactive performance system the computer responds by

180 http://www-static.cc.gatech.edu/~gilwein/pow.htm viewed 1/12/2006.

performing on an acoustic Native American Pow-Wow drum via its mechanical interface, rather than using a synthesiser, sampler or other digital instrument. The conceptualisation of the computer part as an improviser of equal status to the human performer is reinforced by giving the robot a humanoid-looking shape, constructed out of a wooden frame (Figure 3.18). In performances realised to date the human percussionists either share the use of the Pow-Wow drum with the robot or perform on their own individual drums.

Figure 3.18 Haile in performance181

Haile has two mechanical arms, each controlled by an 18F452 PIC182 microprocessor (Weinberg and Driscoll 2006). The robot’s arms are able to control both the pitch and volume of their drum strikes. Striking the drumhead in different locations produces different pitches and the strike velocity determines the volume. The right arm has a maximum strike rate of fifteen hertz and the left arm eleven hertz. While lacking the performance subtleties and virtuosity of a human performer, Weinberg considers that his interactive robot percussionist has distinct advantages over a purely interactive electroacoustic system183. In contrast to interacting with sound produced from inanimate speakers, Haile’s robot arms provide clear and intuitive visual cues with each drum strike. Haile’s creators write—

181 From Weinberg, G. and Driscoll, S. (2006). The Perceptual Robotic Percussionist - New Developments in Form, Mechanics, Perception and Interaction Design. http://www-static.cc.gatech.edu/~gilwein/pow.htm viewed 1/12/2006. 182 The 18F452 PIC microprocessor is a 10 MIPS 8-bit microcontroller with an 8-channel 10-bit analog-to-digital converter http://www.microchip.com viewed 1/12/2006. 183 It is interesting to reflect on the use of acoustic sound in the live electronic works of some forty years previous. For example Alvin Lucier’s Music for Solo Performer (1965), Gordon Mumma’s Hornpipe (1967) and David Tudor’s Rainforest IV (1973) (see sections 2.1.4, 2.1.5 and 2.1.3 respectively).


Musical robots can bring together the unique capabilities of computational power with the expression, richness and visual interactivity of physically generated sound. A musical robot can combine algorithmic analysis and response that extend on human capabilities with rich sound and visual gestures that cannot be reproduced by speakers (Weinberg and Driscoll 2006).

Directional microphones are used to enable Haile to hear. The software to analyse the incoming audio signals and determine Haile’s responses has been developed in Max/MSP. Ethernet is used to communicate between the main computer and the robot’s microprocessors. The system analyses the real-time audio from the human percussionists to determine hit onset time, amplitude, pitch, beat and density measure, rhythmic stability and similarity184. The results of this analysis process are used to create Haile’s improvised responses.

Haile has been programmed with six interactive modes that can be organised and combined in different configurations as follows (Weinberg and Driscoll 2006)—
• Imitation mode uses analysis of a performer’s onset, pitch, and amplitude performance data to select recorded rhythmic sequences to play and repeat.
• Stochastic transformation mode improvises in a call-and-response style. The human performer’s phrases are altered via stochastic algorithms dividing, multiplying and skipping beats to create rhythmic variations, yet still maintaining their connection to the original material (a minimal sketch of this kind of transformation is given after this list).
• Simple accompaniment mode plays back pre-recorded MIDI files as an independent rhythmic layer.
• Perceptual transformation mode measures the stability of the human percussionists’ rhythms and selects a corresponding rhythm to play with a similar measure of stability.

184 Max/MSP objects used for the analysis include Tristan Jehan’s beat~ and pitch~ and Miller Puckette’s bonk~.


• In beat detection mode Haile attempts to follow the tempo and beat of the human percussionists.
• Perceptual accompaniment mode creates supporting rhythmic material based on density and amplitude analysis of the human players’ performances. Dense rhythmic passages are supported by sparse reinforcement of the main beats, while sparse passages from the human percussionists create dense solos from Haile, based on stochastic transformation of the original material. In this mode the robot interprets amplitude literally, responding to loud with loud and soft with soft.
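As a minimal sketch of the stochastic transformation idea referred to above (not Weinberg and Driscoll’s Max/MSP implementation), the following Python fragment represents a drummed phrase as inter-onset intervals and randomly divides, merges or skips beats. The probabilities and the phrase representation are assumptions made purely for illustration.

import random

def stochastic_transform(intervals, p_divide=0.3, p_merge=0.2, p_skip=0.1):
    """Vary a rhythmic phrase, given as inter-onset intervals in seconds,
    by stochastically dividing, merging or skipping beats."""
    out = []
    i = 0
    while i < len(intervals):
        beat = intervals[i]
        roll = random.random()
        if roll < p_skip:
            i += 1                                   # drop this beat entirely
            continue
        if roll < p_skip + p_divide:
            out.extend([beat / 2, beat / 2])         # divide one beat into two
        elif roll < p_skip + p_divide + p_merge and i + 1 < len(intervals):
            out.append(beat + intervals[i + 1])      # merge two beats into one
            i += 1
        else:
            out.append(beat)                         # keep the beat as played
        i += 1
    return out

phrase = [0.5, 0.25, 0.25, 0.5]          # a simple call from the human player
print(stochastic_transform(phrase))      # one possible Haile-style response

Because the variations are derived directly from the intervals of the original phrase, the response remains recognisably connected to the material it answers.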

Haile differs from most of the current research into sound making robots in that the project’s focus is on ways in which the percussionist robot can listen to and interact with its human counterparts. Robotic musical instruments such as LEMUR’s185 collection of robotic instruments (Singer et al. 2004) and the robotic bagpipe player McBlare (Dannenberg et al. 2005) are automated mechanical devices that can be performed live or programmed to play themselves. Sony’s QRIO Conductor Robot (Sony 2003), by contrast, provides an example of an area of research that is focused on imitating the actions of humans. Closer to Haile’s focus on perception and interaction is Baginsky’s The Three Sirens (Baginsky 2005), a self-learning robotic rock band controlled by neural networks.

3.5 Summary

This chapter concludes the survey of interactive art practice. Timelines of the works discussed in Chapters Two and Three can be found in Figure 3.19 and Figure 3.20 respectively. The timeline entries have been colour coded and arranged to reflect the organisation of the works, as presented in the chapters, into the following broad classifications—Live-Electronic Music of the 1950s and 1960s (2.1), First Programmable Interactive Computer Music (2.2), First Commercial Interactive Music Software (2.3), Computer Networked Performance Ensembles (2.4),

185 LEMUR: League of Electronic Musical Urban Robots.


Interfaces Mapping Gesture To Sound (3.1), Installation and Audience Interaction (3.2), Collaborative Interactive Virtual Environments (3.3), Machine Listening and Improvising Systems (3.4).

Figure 3.19 Timeline of works discussed in Chapter Two


Figure 3.20 Timeline of works discussed in Chapter Three

While far from exhaustive—there are literally many hundreds of works that could have been included—the survey presents a comprehensive overview of the major trends, concepts and models explored in interactive creative practice over the last sixty years. The survey reveals many similarities in the approaches taken by artists to designing and working with interactive systems and a number of significant themes emerge.


3.5.1 Instrument Building, Composing or Performing

Many of the systems described blur the traditional distinctions between composing, instrument building, systems design and performance. The score for Tudor’s Rainforest IV (2.1.3) is in effect the instructions and circuit diagrams for building the installation. The work is realised through creating the system itself. Mumma (2.1.5) considered both composing and instrument building as part of the same creative process. For Mumma, designing circuits for his cybersonics was analogous to composing. Similarly, the design of system architectures for networked ensembles such as the Hub (2.4.2) and HyperSense Complex (2.4.4) is integrally linked to the process of creating new compositions and performances.

3.5.2 Shared Control

A different notion of instrument control is presented from that usually associated with acoustic instrument performance. Martirano (2.2.1) wrote of guiding the SalMar Construction through a performance, referring to an illusion of control. Similarly Chadabe (2.2.2) described sharing the control of the music with the interactive system. Schiemer (3.1.5) refers to an illusion of control, describing his interactive instruments as improvising machines and compares working with an interactive system to sculpting with soft metal or clay. Sensorband (3.1.4) performers working with the Soundnet also set up systems that exist at the edge of control, due in no small part to the extreme physical nature of their interfaces.

3.5.3 Collaboration

Interactive music systems have recently had wide application in the creation of collaborative musical spaces, often with a specific focus on non-expert musicians. Blaine’s Jam-O-Drum (3.3.4) was specifically designed to create such a collaborative performance environment for non-expert participants to experience ensemble-based music making. The tabletop as a shared collaborative space has proved to be a powerful metaphor as revealed by projects such as the reacTable (3.3.1), Audiopad (3.3.2) and Composition on the Table (3.3.3). Interactive systems have also found application providing musical creative experiences for non-expert musicians in computer games such as Iwai’s Electroplankton (3.3.3).


3.5.4 Interactive Conversations

Performing with an interactive system is often compared to a conversation, with elements of predictability but also the potential for surprise and mutual influence. Both Chadabe (2.2.2.1) and Perkis (2.4.2) from the Hub describe performing with their systems as like having a conversation. A significant element of a conversation is that you can’t always predict what the other person will say in response to you. The potential for interactive systems to respond with something new, inspiring and possibly challenging to the human performer is a concept identified frequently throughout the survey.

The following chapter will examine in detail the definitions, classification, and models that have been proposed for interactive music systems.


Chapter Four: Definitions, Classification and Models

4.1 Introduction

In this chapter we examine differing approaches to the definition, classification and modelling of interactive music systems. It should be noted that some of these research issues could be considered to intersect with the wider study of Human Computer Interaction (HCI) design. However, the more general user interface requirements addressed by HCI are invariably of limited value when addressing the artistic and creative needs of interactive music systems, thus justifying the specific research focus undertaken herein.

The development of a coherent conceptual framework for interactive music systems presents a number of challenges. As we revealed in Chapters Two and Three, interactive music systems are used in many different contexts including installations, networked music ensembles, new instrument designs and collaborations with robotic performers. Interactive music systems are stylistically agnostic, meaning the same interactive model can be applied to very different musical contexts. Even though an interactive system may use the same underlying technical processes—for example machine listening or stochastic algorithms—this does not necessarily correlate to sonic outcomes in the same musical style.

Critical investigation of interactive works requires extensive cross-disciplinary knowledge in a diverse range of fields including software programming, hardware design, instrument design, composition techniques, sound synthesis and music theory. Furthermore, the structural or formal musical outcomes of interactive systems are invariably not static, i.e. not the same every performance, thus traditional music analysis techniques derived for notated western art music are inappropriate and unhelpful. Not surprisingly then, the practitioners themselves are the primary source of writing about interactive music systems, typically creating definitions and classifications derived from their own creative practice. The following practitioners are regarded as significant in this area and their work is therefore presented here as a foundation for discussions pertaining to the definition, classification and modelling of interactive music systems.

4.2 Definitions

4.2.1 Joel Chadabe–Interactive Composing

Joel Chadabe has been developing his own interactive music systems since the late 1960s and has written extensively on the subject of composing with interactive computer music systems186. In 1981 he proposed the term interactive composing to describe “a performance process wherein a performer shares control of the music by interacting with a musical instrument” (Chadabe 1997:293)187. As mentioned in Chapter Two, Chadabe (1997:291) considers Martirano’s SalMar Construction188 and his own CEMS System189, both developed some twelve years earlier, as the first examples of interactive composing instruments. Notably, these interactive instruments were programmable and could be performed in real-time.
These instruments were interactive in the same sense that performer and instrument were mutually influential. The performer was influenced by the music produced by the instrument, and the instrument was influenced by the performer’s controls (Chadabe 1997:291).

Chadabe highlights that the musical outcome from these interactive composing instruments was a result of the shared control of both the performer and the instrument’s programming, the interaction between the two creating the final musical response.
These instruments introduced the concept of shared, symbiotic control of a musical process wherein the instrument’s generation of ideas and the performer’s musical judgment worked together to shape the overall flow of the music (Chadabe 1997:291).

186 See Section 2.2.2 for a discussion of Chadabe’s interactive works. 187 Chadabe first proposed the term interactive composing at the International Music and Technology Conference, University of Melbourne, Australia, 1981. From http://www.chadabe.com/bio.html viewed 3/4/2007. 188 Martirano’s SalMar Construction is discussed in Section 2.2.1 189 Chadabe’s CEMS System is discussed in Section 2.2.2.1


Further emphasising that the musical outcomes of these interactive systems are a result of and dependent upon the interactive transaction, Chadabe (1997:288) describes his process of creating interactive compositions as a ‘design-then-do’ process. The system is first designed, constructed and programmed. Interactive input is then required for the system to realise the composition defined and encoded in the previous design stage.

Programmable interactive computer music systems such as these challenge the traditional clearly delineated western art music roles of instrument, composer and performer. In interactive music systems the performer can influence, effect and alter the underlying compositional structures, the instrument can take on performer-like qualities, and the evolution of the instrument itself may form the basis of a composition. In all cases the composition itself is realised through the process of interaction between performer and instrument, or machine and machine. In developing interactive works the composer may also need to take on the roles of, for example, instrument designer, programmer and performer. Chadabe writes of this blurring of traditional roles in interactive composition—
When an instrument is configured or built to play one composition, however the details of that composition might change from performance to performance, and when that music is interactively composed while it is being performed, distinctions fade between instrument and music, composer and performer. The instrument is the music. The composer is the performer (Chadabe 1997:291).

Chadabe provides a perspective of interactive music systems that focuses on the shared creative aspect of the process in which the computer influences the performer as much as the performer influences the computer. The musical output is created as a direct result of this shared interaction, the results of which are often surprising and not predicted.


4.2.2 Robert Rowe–Interactive Music Systems

Robert Rowe in his book Interactive Music Systems (Rowe 1993) presents an image of an interactive music system behaving just as a trained human musician would, listening to ‘musical’ input and responding ‘musically’. He provides the following definition—
Interactive computer music systems are those whose behaviour changes in response to musical input. Such responsiveness allows these systems to participate in live performances, of both notated and improvised music (Rowe 1993:1).

In contrast to Chadabe’s perspective of a composer/performer interacting with a computer music system, the combined results of which realise the compositional structures from potentials encoded in the system, Rowe presents an image of a computer music system listening to, and in turn responding to, a performer. The emphasis in Rowe’s definition is on the response of the system; the effect the system has on the human performer is secondary. Furthermore the definition is constrained, placed explicitly within the framework of musical input, improvisation, notated score and performance.

Garth Paine (2002b) is also critical of Rowe’s definition with its implicit limits within the language of notated western art music, both improvised and performed—
The Rowe definition is founded on pre-existing musical practice, i.e. it takes chromatic music practice, focusing on notes, time signatures, rhythms and the like as its foundation; it does not derive from the inherent qualities of the nature of engagement such an ‘interactive’ system may offer (Paine 2002b:296).

Sergi Jordà (2005) questions if there is in fact a general understanding of what is meant by Rowe’s concept of ‘musical input’—
How should an input be, in order to be ‘musical’ enough? The trick is that Rowe is implicitly restraining ‘interactive music systems’ to systems which posses the ability to ‘listen’, a point that becomes clearer in the subsequent pages of his book. Therefore, in his definition, ‘musical input’ means simply ‘music input’; as trivial and as restrictive as that! (Jordà 2005:79)


Paine further identifies the definition’s inability to encompass interactive music systems that are not driven by instrumental performance as input—
Clearly Rowe’s position only remains true while the input is of an instrumental nature. If the system input is a human gesture, be it a dance troupe or a solo music performer, or for that matter, a member of the public (as is true of most interactive, responsive sound installation works), the definition becomes problematic (Paine 2002b:296).

However, Rowe’s definition should also be understood in the context of the music technology landscape of the early 1990s. At this time most of the music software programming environments were MIDI based, with the sonic outcomes typically rendered through the use of external MIDI synthesisers and samplers. Real-time synthesis, although possible, was significantly restricted by processor speed and the cost of computing hardware. Similarly, sensing solutions (both hardware and software) for capturing performance gestures were far less accessible and developed in terms of cost, speed and resolution, than are currently available. The morphology of the sound in a MIDI system is largely fixed and so the musical constraints are inherited from instrumental music—i.e. pitch, velocity, duration. Thus the notions of an evolving morphology of sound explored through gestural interpretation and interaction are not intrinsic to the system.

4.2.3 Todd Winkler–Composing Interactive Music

Todd Winkler in his book Composing Interactive Music (Winkler 1998) presents a definition of interactive music systems closely aligned with Rowe’s, in which the computer listens to, interprets and then responds to a live human performance. Winkler’s approach is also MIDI based with all the constraints mentioned above. Winkler describes interactive music as—
a music composition or improvisation where software interprets a live performance to affect music generated or modified by computers. Usually this involves a performer playing an instrument while a computer creates music that is in some way shaped by the performance (Winkler 1998:4).


Winkler considers that the design of interactive music systems can be informed by studying the interactions that take place in live performance contexts between human musicians performing acoustic instruments (Winkler 1998:5). He suggests that interactive music systems can be considered in terms of—
• Instruments
• Performers
• Composers
• Listeners.

As is the case with Rowe’s definition, there is little direct acknowledgment by Winkler of interactive music systems that are not driven by instrumental performance (i.e. acoustic instrument). Winkler (1998:7) does, however, identify that interactive music systems can enable even the audience to take on the roles of both composer and performer through other forms of interaction (dance for instance). In discussing the types of input that can be interpreted, the focus is again restricted to event-based parameters such as notes, dynamics, tempo, rhythm and orchestration. Where gesture is mentioned, the examples given are constrained to MIDI controllers (key pressure, foot pedals) and computer mouse input—
interactive music works by having a computer interpret a performer’s actions in order to alter musical parameters, such as tempo, rhythm, or orchestration. These parameters are controlled by computer music processes immediately responsive to musical data (such as individual notes or dynamics) or gestural information (such as key pressure, foot pedals, or computer mouse movements) (Winkler 1998:6).

However, Winkler does acknowledge the new potentials that interactive music systems present—
interactive techniques may suggest a new musical genre, one where the computer’s capabilities are used to create new musical relationships that may exist only between humans and computers in a digital world (Winkler 1998:4-5).


Interactive music systems are of course not ‘found objects’, but rather the creation of composers, performers, artists and the like (through a combination of software, hardware and musical design). For a system to respond musically implies a system design that meets the musical aesthetic of the system’s designer(s). For a system to respond conversationally, with both predictable and unpredictable responses, is likewise a process built into the system. Present in all of the definitions discussed, to some degree, is the notion that interactive systems require interaction to realise the compositional structures and potentials encoded in the system. To this extent interactive systems make possible a way of composing that at the same time is performing and improvising.

4.3 Classifications and Models

4.3.1 Empirical Classifications

One of the simplest approaches for classifying interactive music systems is with respect to the experience afforded by the work. For example, is the system an installation intended to be performed by the general public or is it intended for use by the creator of the system and/or other professional artists? Bert Bongers (2000:128) proposes just such an empirically based classification system, identifying the following three categories190—
• Performer with System
• Audience with System
• Performer with System with Audience.

These three categories capture the broad form and function of an interactive system but do not take into account the underlying algorithms, processes and qualities of the interactions taking place. The performer and system category encompasses works such as Lewis’ Voyager (Section 3.4.1), Waisvisz’s The Hands (Section 3.1.1), Sonami’s Lady’s Glove (Section 3.1.2) and Schiemer’s Spectral Dance (Section 3.1.5). The audience with system category includes interactive works designed for

190 Of course, as Bongers acknowledges, there is always some form of interaction between the performer and audience; however, in this instance Bongers’ focus is on the interactions mediated by an electronic system only.

gallery installation such as Paine’s Gestation (Paine 2007), Gibson and Richards’ Bystander (Section 3.2.1) and Tanaka and Toeplitz’s The Global String191. Bongers’ third category, performer with system with audience, places the interactive system at the centre, with both performer and audience interacting with the system. Examples of this paradigm are less common; however, Bongers puts forward his own—The Interactorium (Bongers 1999)—developed together with Walter Fabeck and Yolande Harris as an illustration. The Interactorium includes both performers and audience members in the interaction, with the audience seated on chairs equipped with “active cushions” providing “tactual” feedback experiences and sensors so that audience members can interact with the projected sound and visuals and the performers.

To this list of classifications I would add the following two extensions—
• Multiple performers with a single interactive system
• Multiple systems interacting with each other and/or multiple performers.
Computer interactive networked ensembles such as the Hub (Section 2.4.2), austraLYSIS electroband (Section 2.4.3) and HyperSense Complex (Section 2.4.4) are examples of multiple performers with a single interactive system, exploring interactive possibilities quite distinct from the single performer and system paradigm. In a similar manner the separate category for multiple systems interacting encompasses works such as Felix Hess’s Moving Sound Creatures (Chadabe 1997) for twenty-four independent moving sound robots, which is predicated on evolving inter-robot communication, leading to Artificial Life (AL)-like development of sonic outcomes.

4.3.2 Rowe’s Classification Dimensions

Developing a framework further than simply categorising the physical manifestations of interactive systems, Rowe (1993:6-7) proposes a “rough classification system” for interactive music systems consisting of a combination of three dimensions—
1. Score-driven v. performance-driven systems

191 http://www.sensorband.com/atau/globalstring/ viewed 1/6/2007.


2. Transformative, generative or sequenced response methods
3. Instrument v. player paradigms.

For Rowe, these classification dimensions do not represent distinct classes; instead a specific interactive system would more than likely encompass some combination of the classification attributes. Furthermore, the dimensions described should be considered as points near the extremes of a continuum of possibilities (Rowe 1993:6).

4.3.2.1 Score-Driven v. Performance-Driven

Score-driven systems have embedded knowledge of the overall predefined compositional structure. A performer’s progress through the composition can be tracked by the system in real-time, accommodating subtle performance variations such as a variation in tempo. Precise, temporally defined events can be triggered and played by the system in synchronisation with the performer, accommodating their performance nuances, interpretations and potential inaccuracies.

A clear example of a score-driven system is demonstrated by score following (Dannenberg 1984; Dannenberg and Mukaino 1988; Vercoe 1984)192 in which a computer follows a live performer’s progress through a pre-determined score, responding accordingly (Figure 4.1). Examples of score following works include Lippe’s Music for Clarinet and ISPW (Lippe 1993) and Manoury’s Pluton for piano and triggered signal processing events (Puckette and Lippe 1992)193.
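To make the score-following model concrete, the following Python sketch advances a pointer through a stored list of expected pitches as tracked notes arrive, triggering electronic cues attached to particular score positions. It is a deliberately naive illustration built on my own assumptions; the score followers of Vercoe and Dannenberg cited above use considerably more robust matching strategies, and the class and cue names here are invented for the example.

SCORE = [60, 62, 64, 65, 67]                            # expected MIDI pitches, in order
EVENTS = {2: "start granular delay", 4: "open reverb"}  # cues keyed to score positions

class ScoreFollower:
    def __init__(self, score, events, tolerance=1):
        self.score = score
        self.events = events
        self.position = 0
        self.tolerance = tolerance        # allow slightly mis-tracked pitches

    def hear(self, pitch):
        """Feed one tracked pitch from the live performer."""
        if self.position >= len(self.score):
            return                        # end of score reached
        if abs(pitch - self.score[self.position]) <= self.tolerance:
            cue = self.events.get(self.position)
            if cue:
                print(f"position {self.position}: trigger '{cue}'")
            self.position += 1            # follow the performer forward

follower = ScoreFollower(SCORE, EVENTS)
for tracked in [60, 61, 59, 62, 64, 65, 67]:   # a performance with tracking noise
    follower.hear(tracked)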

192 Score following was first presented at the 1984 International Computer Music Conference independently by Barry Vercoe and Roger Dannenberg (Puckette and Lippe 1992). 193 Other examples of works using score following techniques include Manoury’s Sonus ex Machina, En echo, and Jupiter and Boulez’s Anthèmes II and Explosante-fixe.


Figure 4.1 Model of a score following system194

Score following is, however, more reactive than interactive, with the computer system typically programmed to faithfully follow the performer. Score following can be considered as an intelligent version of the instrument and tape model, in which the performer follows and plays along with a pre-constructed tape (or audio CD) part. Computer based score-following reverses the paradigm, with the computer following the performer. Although such systems extend the possibilities of the tape model, enabling real-time signal processing of the performer’s instrument, algorithmic transformation and generation of new material, the result from an interactive perspective is much the same, perhaps just easier for the performer to play along with. As Jordà observes—
score-followers constitute a perfect example for intelligent but zero interactive music systems (Jordà 2005:85).

A performance-driven system, conversely, has no pre-constructed knowledge of the compositional structure or score and can only respond based on the analysis of what the system hears. Lewis’ Voyager (Section 3.4.1) can be considered an example of a performance-driven system, listening to the performer’s improvisation and responding dynamically, both transforming what it hears and responding with its own independent material.

4.3.2.2 Response Type

Rowe’s three response types—transformative, generative or sequenced—classify the way an interactive system responds to its input. Rowe (1993:163), moreover,

194 Derived from Orio, N., Lemouton, S., and Schwarz, D. (2003). Score Following: State of the Art and New Developments, In Proceedings of the 2003 Conference on New Interfaces for Musical Expression (NIME-03). Montreal.

Chapter Four: Definitions, Classification and Models 115 considers that all composition methods can be classified by these three broad classes. The transformative and generative classifications imply an underlying model of algorithmic processing and generation. Transformations can include techniques such as inversion, retrograde, filtering, transposing, filtering, delay, re-synthesis, distortion and granulating. Generative implies the system’s self-creation of responses, either independent of, or influenced by the input. Generative processes can include functions such as random and stochastic selection, chaotic oscillators, chaos based models and rule based processes. Artificial life (AL) algorithms offer further possibilities for generative processes, for example flocking algorithms, biology population models and genetic algorithms. Sequenced response is the playback of pre-construct and stored materials. Sequence playback often incorporates some transformation of the stored material, typically in response to the performance input.

4.3.2.3 Instrument v. Player

Rowe’s third classification dimension reflects how much like an instrument or another player the interactive system behaves. The instrument paradigm describes interactive systems that function in the same way that a traditional acoustic instrument would, albeit an extended or enhanced instrument. The response of this type of system is predictable, direct and controlled, with a sense that the same performance gestures or musical input would result in the same, or at least similar, replicable responses.

The player paradigm describes systems that behave as an independent, virtual performer or improviser, interacting with the human musician, responding with some sense of connection to the human’s performance, but also with a sense of independence and autonomy. Lewis (2000:34) defines his Voyager system as an example of a player paradigm, with the system capable of both transformative responses and also able to generate its own independent material. For Lewis, an essential aspect of Voyager’s system design was to create the sense of playing interactively with another performer.

Chapter Four: Definitions, Classification and Models 116

4.3.3 Winkler–Models based on Acoustic Instrument Ensembles

Winkler (1998:21), building on his view that the relationships between human performers and interactive computer systems can be informed by the interactive relationships inherent in traditional acoustic instrument performance practice and composition, proposes that acoustic instrumental ensembles can be used as models for interactive systems. Performers need to interpret music notation, which in turn leads to interaction between ensemble players as they collectively negotiate a score, integrating their own individual interpretations and performances into a coherent musical whole. Winkler (1998:23-27) proposes the following four models derived from acoustic instrument ensemble practice to inform interactive system design—
• Conductor and orchestra
• Chamber music
• Improvising jazz combo
• Free improvisation.

The conductor model, as demonstrated by the interactions in a symphony orchestra, requires all the performers to follow a master controller—the conductor. In this model the conductor interprets the score and directs the performers accordingly. The chamber music model, for example a string quartet, presents an example of shared control in which the role of leader can be passed from player to player during a performance. Performers influence each other and the interpretation of the score is shared. In the improvisation model, or jazz combo, performers not only interact with each other sharing control of the performance, but also influence and develop the musical material being performed. The performers play within a shared musical template or framework, according to a predefined set of rules. The free improvisation model is an extension of the improvisation model in which the performers have much greater and in some cases total control of the musical material, with little or no predefined formal structures.

The ordering of these models reveals a hierarchy of control structures, moving from the centralised master control of the conductor, to the distributed, decentralised control of the chamber ensemble, and ending with the progressive freeing of control

represented by the jazz combo and free improvisation models. This relates not only to the interactions between the players themselves, but also to the nature and amount of control the performers have over the predetermined musical structures (e.g. the score). The orchestra follows the interpretation of the score made by the conductor with limited room for personal interpretation, while in the chamber ensemble it is the performers themselves interpreting the score. In both the orchestra and the chamber ensemble the score is typically predetermined. By comparison, the members of an improvising jazz combo have considerable freedom, being able to both modify the underlying musical framework and share control of the performance between the individual performers. The free improvisation model increases the individual performers’ freedoms through the ability to dynamically create the musical material at the time of performance in response to the other players’ performances.

4.3.4 Laurie Spiegel’s Multi-dimensional Model

Laurie Spiegel (1992)195 proposes an open-ended list of some sixteen categories intended to define, as axes or continua, a highly multi-dimensional space in which to model and represent interactive musical generation. Spiegel considers the representation model an alternative to an Aristotelian taxonomy of interactive computer-based musical creation consisting of “finite categories with defined boundaries, usually hierarchical in structure”. The categories put forward are—
1. degree of human participation (completely passive listening versus total creative responsibility)

2. amount of physical coordination, practice, and/or prior musical knowledge required for human interaction

3. number of variables manipulable in realtime

4. number of variables manipulable by the user (not the coder) prior to realtime output (“configurability”)

195 The article is also available from Spiegel’s personal website – http://retiary.org/ls/index.html viewed 1/12/2006.


5. amount of time typically needed to learn to use a system

6. balance of task allocation (human versus computer) in decision-making in the compositional realms of pitch, timbre, articulation, macrostructure, et cetera, and/or in labor-intensive tasks such as notation or transcription

7. extensiveness of software-encoded representation of musical knowledge (materials, structures, procedures)

8. predictability and repeatability (versus randomness from the user's viewpoint) of musical result from a specific repeatable human interaction

9. inherent potential for variety (output as variations of a recognizable piece or style, versus source not being recognizable by listeners)

10. ratio of user’s non-realtime preparation time to realtime musical output

11. degree of parallelization of human participation

12. degree of parallelization of automated processing

13. number of discrete non-intercommunicative stages of sequential processing (e.g. composer-performer-listener or coder-user-listener versus integrated development and use by single person or small community)

14. degree of multi-directionality of information flow

15. degree of parallelization of information flow

16. openness versus finiteness of form, duration and logic system

17. et cetera.

Spiegel’s list is certainly long, perhaps too long to create an effective and practical classification space for interactive music systems. However, the list does broaden the categories and models discussed previously in this chapter, as proposed by Rowe and Winkler. A number of Spiegel’s categories still resonate strongly with contemporary

concerns of interactive music system design. These include issues such as mapping, the nature of the interactions and expertise required, how formal musical structure is defined and engaged with, and system responsiveness.

4.3.5 System Responsiveness

The way an interactive music system responds to its input directly affects the perception and the quality of the interaction with the system. A system consistently providing precise and predictable interpretation of gesture to sound would most likely be perceived as reactive rather than interactive, although such a system would function well as an instrument in the traditional sense. Conversely, where there is no perceptible correlation between the input gesture and the resulting sonic outcome, the feel of the system being interactive can be lost, as the relationship between input and response is unclear. It is a balancing act to maintain both a sense of connectedness between input and response while also maintaining a sense of independence and freedom; that the system is in fact interacting not just reacting. Winkler (1998) writes of the relationship—
Interactivity comes from a feeling of participation, where the range of possible actions is known or intuited, and the results have significant and obvious effects, yet there is enough mystery maintained to spark curiosity and exploration (Winkler 1998:3).

A sense of participation and intuition is difficult to achieve in designing interactive systems and each artist and participant will bring their own interpretation of just how connected input and response should be for the system to be considered interactive. Winkler, although warning against simplistic relationships, clearly places the emphasis on the connectedness between the input and the response— Live interactive music contains an element of magic, since the computer music responds “invisibly” to a performer. The drama is heightened when the roles of the computer and performer are clearly defined, and when the actions of one has an observable impact on the actions of another, although an overly simplistic approach will quickly wear thin. On the other hand, complex responses that are more indirectly influenced by a performer may produce highly successful musical


results, but without some observable connection the dramatic relationship will be lost to the audience (Winkler 1998:9).

4.3.6 Other Metaphors

Chadabe offers the following three metaphors to describe different approaches to creating real-time interactive computer music (Jordà 2005:75)196—
1. Sailing a boat on a windy day and through stormy seas
2. The net complexity or the conversational model
3. The powerful gesture expander.

The first of these poetic images describes an interactive model in which control of the system is not assured—sailing a boat through stormy seas. In this scenario interactions with the system are not always controlled and precise but instead are subject to internal and/or external disturbances. This effect can be seen in Lewis’s (2000) use of randomness and probability in his Voyager system— the system is designed to avoid the kind of uniformity where the same kind of input routinely leads to the same result (Lewis 2000:36).

The second metaphor depicts an interactive system in which the overall complexity of the system is a result of the combined behaviour of the individual components. Just as in a conversation, no one individual is necessarily in control and the combined outcome is greater than the sum of its parts. Examples of this type of system include the work of networked ensembles such as The League of Automatic Music Composers and The Hub.

A number of artists have drawn comparisons between this model of information exchange presented by a conversation and interactive music systems. Chadabe has used the conversation metaphor previously, describing interacting with his works

196 Also from Chadabe, J. (2005) The Meaning of Interaction, a public talk given at the Workshop in Interactive Systems in Performance (WISP), HCSNet Conference, Macquarie University, Sydney Australia. http://www.hcsnet.edu.au/files2/summerfest/HCSNetSummerFestFinalSchedule.pdf viewed 1/12/2006.


Solo (Chadabe 1997:292) and Ideas of Movement at Bolton Landing (Chadabe 1997:287), in both instances as “like conversing with a clever friend”. Perkis compares the unknown outcomes of a Hub performance with the surprises inherent in daily conversation197. Winkler likewise makes use of the comparison, noting that conversation, like interaction, is a “two-way street … two people sharing words and thoughts, both parties engaged. Ideas seem to fly. One thought spontaneously affects the next” (Winkler 1998:3).

Paine (2002b:297) also considers human conversation a useful model for understanding interactive systems, identifying that a conversation is— • unique and personal to those individuals

• unique to that moment of interaction, varying in accordance with the unfolding dialogue

• maintained within a common understood paradigm (both parties speak the same language, and address the same topic) (Paine 2002b:297).

Paine observes that the conversation model is a journey from the known to the unknown, undertaken through the exchange of ideas— Within such an interaction the starting point is known by one of the parties, and whilst in some discussions there is a pre-existing agenda, in general the terrain of the conversation is not know in advance. It is a process of exchange, of the sharing of ideas. It is a cybernetic-like product of the whole, a product that is unique to the journey they share during their discourse (Paine 2002b:297).

Chadabe’s third metaphor, the powerful gesture expander, defines a deterministic rather than interactive system in which input gestures are re-interpreted into complex musical outputs. This category includes instrument oriented models such as Tod Machover’s hyperinstruments198, Leonello Tarabella’s exploded instruments199 and

197 The HUB, an article written for Electronic Musician Magazine, 1999 http://www.perkis.com/wpc/w_hubem.html viewed 1/11/2006. Also see section 2.4.2. 198 http://www.media.mit.edu/hyperins viewed 18/4/2007.


Spiegel and Mathews’ intelligent instruments200. To this extent Spiegel’s Music Mouse software presents a specific example of such a system.

4.4 System Anatomy

4.4.1 Rowe’s Three Stage System Model

Rowe (1993:9) separates the functionality of an interactive system into three consecutive stages—sensing, processing and response (Figure 4.2).

Figure 4.2 Rowe’s three stage system model

In this model the sensing stage collects real-time performance data from the human performer. Input and sensing possibilities include MIDI instruments, pitch and beat detection, custom hardware controllers and sensors to capture the performer’s physical gestures. The processing stage reads and interprets the information sent from the sensing stage. For Rowe, this central processing stage is the heart of the system, executing the underlying algorithms and determining the system’s outputs. The outputs of the processing stage are then sent to the final stage in the processing chain, the response stage. Here the system renders or performs the musical outputs. Possibilities for this final response stage include—real-time computer based software synthesis and sound processing, rendering via external instruments such as synthesisers and samplers or performance via robotic players.
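The division of labour Rowe describes can be made concrete with a minimal sketch. The following Python fragment is purely illustrative (the class and method names are my own and are not drawn from Rowe); it simply shows sensing, processing and response as three consecutive stages in a chain.

import random

class SensingStage:
    def read(self):
        # stands in for MIDI input, pitch/beat detection or sensor capture
        return {"pitch": random.randint(48, 72), "velocity": 90}

class ProcessingStage:
    def interpret(self, event):
        # the "heart" of the system: algorithms that decide how to respond
        return {"pitch": event["pitch"] + 7, "duration": 0.5}

class ResponseStage:
    def render(self, decision):
        # stands in for software synthesis, external synthesisers or robotic players
        print("play", decision)

sensing, processing, response = SensingStage(), ProcessingStage(), ResponseStage()
response.render(processing.interpret(sensing.read()))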

This three-stage model is certainly concise and conceptually simple. However, Rowe’s distinction between the sensing and processing stages is somewhat blurred.

199 http://tarabella.isti.cnr.it/research.html viewed 18/4/2007. 200 http://retiary.org/ls/writings/cmj_intelligt_instr_hist.html viewed 18/4/2007.


Some degree of processing is needed to perform pitch and beat detection, i.e. it is not simply a passive sensing process. Furthermore the central processing stage encapsulates a significant component of the model and reveals little about the possible internal signal flows and processing possibilities in the system.

4.4.2 Winkler’s Five Stage System Model

Winkler (1998:6) expands Rowe’s three stage model (Figure 4.2) of sensing, processing and response into five stages—
1. Human input, instruments
2. Computer listening, performance analysis
3. Interpretation
4. Computer composition
5. Sound generation and output, performance.

Figure 4.3 Winkler’s five stage system model compared to Rowe’s three stage model

Figure 4.3 reveals the similarities between the two models. Winkler’s human input stage is equivalent to Rowe’s sensing stage. This is where the performer’s gestures, instrumental performance, or the actions of other participants are detected and digitised.

Winkler separates Rowe’s central processing stage into three parts—computer listening, interpretation, and computer composition. The computer listening stage analyses the data received by the sensing stage. Winkler (1998:6) defines this computer listening stage as the analysis of “musical characteristics”, e.g. timing, pitch and dynamics. The interpretation stage interprets the data from the previous computer listening process. The results of the interpretation process are then used by the computer composition stage to determine all aspects of the computer’s musical performance. Winkler’s final sound generation or performance stage corresponds to Rowe’s third and final response stage, in which the system synthesises, renders or performs the results of the composition process, either internally or externally.

Winkler’s model clarifies the initial sensing stage by separating the process of capturing input data (musical performance, physical gesture etc.) via hardware sensors from the process of analysing the data. However, the separation of the processing stage into computer listening, interpretation, and computer composition is somewhat arbitrary. The exact difference between computer listening and interpretation is unclear. The computer composition stage can conceivably encompass any algorithmic process while providing little insight into the underlying models of the system. Furthermore, Winkler’s descriptions of the processing stages remain framed in specifically ‘musical’ terms.

4.4.3 Bongers–Control and Feedback

Focusing on the physical interaction between people and systems, Bongers (2000:128) identifies that interaction with a system involves both control and feedback. In both the aforementioned Rowe and Winkler interactive models there is little acknowledgment of the potential for feedback within the system itself or to the performers interacting with the system. Bongers outlines the flow of control in an interactive system, starting with the performance gesture, leading to the sonic response from the system and completing the cycle with the system’s feedback to the performer. Interaction between a human and a system is a two way process: control and feedback. The interaction takes place through an interface (or instrument) which translates real world actions into signals in the virtual domain of the system. These are usually electric signals, often digital as in the case of a computer. The system is controlled by the user, and the system gives


feedback to help the user to articulate the control, or feed-forward to actively guide the user. Feed forward is generated by the system to reveal information about its internal state (Bongers 2000:128).

System-performer feedback is not only provided by the sonic outcome of the interaction, but can include information such as the status of the input sensors and the overall system (via lights, sounds etc.) and tactile feedback from the controller itself (haptic). Acoustic instruments typically provide such feedback inherently; for example, the vibrations of a violin’s strings provide feedback to the performer, via their fingers, about the instrument’s current performance state, separate to the pitch and timbral feedback the performer receives acoustically. With interactive computer music systems, the strong link between controller and sound generation, typical of acoustic instruments, is no longer constrained by the physics of the instrument. Virtually any sensor input can be mapped to any aspect of computer-based sound generation. This decoupling of the sound source from the controller can result in a loss of feedback from the system to the performer that would otherwise be intrinsic to an acoustic instrument and as a result can contribute to a sense of restricted control of an interactive system (Bongers 2000:127).

Figure 4.4 presents a model of a typical instance of solo performer and interactive music system, focusing on the interactive loop between human and computer. The computer system senses the performance gestures via its sensors, converting physical energy into electrical energy. Different sensors are used to capture different types of information—for example kinetic energy (movement), light, sound or electromagnetic fields to name a few. The actuators provide the system’s output— loudspeakers produce sound, video displays output images, motors and servos provide physical feedback. The sensors and actuators are defined as the system’s transducers enabling the system to communicate with the outside world.


Figure 4.4 Solo performer and interactive system–control and feedback 201

Similarly, the human participant in the interaction can be defined as having corresponding senses and effectors. The performer’s senses (inputs) are their ability to see, hear, feel and smell, while the performer’s effectors (outputs) are represented by muscle action, breath, speech and bio-electricity. For artists such as Stelarc202, the separation between human and machine interface becomes extremely minimal with both machine actuators and sensors connected to their own body, leading to the concept of Cybernetic Organisms or Cyborgs. For example, Ping Body203 (1996) allowed participants using a website to remotely access, view and actuate Stelarc’s body via a computer-interfaced muscle-stimulation system.

In Bongers’ model (Figure 4.4), the computer’s memory and cognition is comparable to Rowe’s processing stage and Winkler’s combined computer listening, interpretation, and computer composition stages. Bongers (2000:129) considers that when cognition is left out of the model a system becomes reactive rather than interactive. Furthermore, Bongers suggests that most interactive systems in new media arts can be considered as reactive systems of this type.

201 Adapted from Bongers (2000). Physical Interfaces in the Electronic Arts – Interaction Theory and Interfacing Techniques for Real-Time Performance. In M. M. Wanderley and M. Battier (eds.), Trends in Gestural Control of Music. Paris: IRCAM-Centre Pompidou, p.129. 202 http://www.stelarc.va.com.au viewed 18/4/2006. 203 http://www.stelarc.va.com.au/pingbody/index.html viewed 1/5/2007.


4.4.4 Mapping

Bongers’ model of interactive music systems incorporates the concept of feedback and directly accommodates gesture-based input into the system. Connecting gestures to processing and processing to response are the mappings of the system. In the specific context of a digital musical instrument (Miranda and Wanderley 2006:3), mapping defines the connections between the outputs of a gestural controller and the inputs of a sound generator. Figure 4.5 depicts a typical and often cited example of such a system (Wanderley 2001).

Figure 4.5 Mapping in the context of a digital musical instrument204

In this example a performer interacts with a gestural controller’s interface, their input gestures mapped from the gestural controller’s outputs to various sound generating control parameters. Digital musical instruments such as Waisvisz’s The Hands, Sonami’s Lady’s Glove and Hewitt’s Emic can be considered as examples of this model. While a performer may be described as interacting with the gestural controller in such a system, the digital musical instruments represented by the model are intended to be performed (and thus controlled) as an instrument and consequently function as reactive, rather than interactive systems.

204 From Miranda, E. R. and Wanderley, M. (2006). New Digital Musical Instruments: Control and Interaction Beyond the Keyboard. Middleton, Wis: A-R Editions, p3.


In the context of an interactive music system, mappings are made between all stages of the system, connecting sensing outputs with processing inputs and likewise processing outputs with response inputs. Furthermore the connections made between the different internal processing functions can also be considered as part of the mapping schema.

Mappings can be described with respect to the way in which connections are routed, interconnected and interrelated. Mapping relationships commonly employed in the context of digital musical instruments and interactive music systems are (Hunt and Kirk 2000; Miranda and Wanderley 2006:17)— • One-to-one • One-to-many • Many-to-one • Many-to-many.

One-to-one is the direct connection of an output to an input, for example a slider mapped to control the pitch of an oscillator. Many inputs can be mapped individually to control many separate synthesis parameters; however, as the number of multiple one-to-one mappings increases, systems become more difficult to perform effectively. One-to-many connects a single output to multiple inputs; for example, a single gestural input can be made to control multiple synthesis parameters at the same time. One-to-many mappings can solve many of the performance interface problems created by multiple one-to-one mappings. Many-to-one mappings, also referred to as convergent mapping (Hunt and Kirk 2000:7), combine two or more outputs to control one input, for example a single synthesis parameter under the control of multiple inputs. Many-to-many is a combination of the different mapping types (Iazzetta 2000). In Figure 4.6 three different mappings for a reed instrument are depicted, each model increasing in complexity and sophistication of the potential sonic response (Wanderley 2001).


Figure 4.6 Examples of different mapping strategies for a reed instrument205
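These four mapping relationships can be thought of as routing functions from controller outputs to synthesis inputs. The following Python fragment is an illustrative sketch only (the parameter names are my own, not drawn from Hunt and Kirk or Miranda and Wanderley); it shows one controller output driving one parameter, one output fanned out to several parameters, and several outputs converging on one.

def one_to_one(slider):
    # one controller output drives one synthesis parameter
    return {"pitch": slider}

def one_to_many(slider):
    # one controller output drives several parameters at once
    return {"pitch": slider,
            "filter_cutoff": 200 + slider * 40,
            "amplitude": slider / 127.0}

def many_to_one(breath, lip_pressure):
    # convergent mapping: two outputs combined to control a single parameter
    return {"amplitude": 0.7 * breath + 0.3 * lip_pressure}

def many_to_many(breath, lip_pressure, finger_position):
    # a combination of the relationships above
    return {"pitch": finger_position,
            "amplitude": 0.7 * breath + 0.3 * lip_pressure,
            "brightness": breath * lip_pressure}

print(one_to_many(64))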

4.4.5 Separating the Interface from the Sound Generator

Mapping arbitrary interfaces to likewise arbitrarily chosen sound generating devices creates the potential for the interrelated physical and acoustical connections between an instrument’s interface and its sound output—that are typically inherent in traditional acoustic instruments—to be lost. For traditional acoustic instruments the sound generating process dictates the instrument’s design. The method of performing the instrument—blowing, bowing, striking—is inseparably linked to the sound generating process—wind, string, membrane. In the case of electronic instruments this relationship between performance interface and sound production is no longer constrained in this manner (Bongers 2000:126). Sensing technology and networked communication methods such as OSC allow virtually any input from the real world to be used as a control signal for use with digital media. This disconnect between control interface and sound generation can result in a loss of the feedback information that traditional acoustic instruments typically provide their performers— the feel of a string’s vibration, lip pressure, the feel of a bow, the subtleties of touch

205 From Wanderley, M. M. (2001). Gestural Control of Music, In Proceedings of the 2001 International Workshop - Human Supervision and Control in Engineering and Music. Kassel, .

from a piano keyboard. However, the ability to freely map interfaces to sound generating devices enables virtually any type of performance gesture to be used as an input to an interactive system.

4.4.6 Gesture

The term gesture can imply different meanings, depending on the field of research. In the context of interactive music systems and digital musical instruments, gesture is generally used in a fairly broad sense, encompassing any human action that could potentially be used to generate sounds including grasping, manipulation, non-contact movements and general body movements (Miranda and Wanderley 2006:5-14). Many different ways have been proposed to classify and describe gesture, with much of the literature focusing on hand movements.

Gestures that involve physical contact with an object can be described as manipulative, ergotic, haptic, or instrumental gestures. Gestures that do not involve physical contact with an object can be described as empty-handed, free, semiotic or naked gestures (Miranda and Wanderley 2006:6). Shaw’s joystick interface used in Points of View (1.6.1.2) and Richards and Waterson’s periscope interface used in sub_scapeBALTIC (1.6.1.5) are examples of gestural controllers involving physical contact with the interface. The theremin antenna interfaces used in Chadabe’s Solo (1.6.2.1.1) and the video mapped reactive space created in Krueger’s Videoplace (1.6.1.1) are examples of gestural input involving no physical contact with an object.

With respect to music, the term gesture can be used in the context of musical gesture, performance gesture, or instrumental gesture. Based on a study of instrumental gesture and the playing technique of the pianist Glenn Gould, Delalande (1988) suggested that gestures used in instrumental performance can be considered with respect to at least three levels—
• Effective gestures–the gestures that produce the sound
• Accompanist gestures–movements not directly producing sound
• Figurative gestures–gestures perceived by a listener that do not necessarily directly correspond to a movement by the performer (Miranda and Wanderley 2006:9).
The accompanist gestures are perceived as part of the performance but do not directly produce the sound, for example a pianist’s shoulder and head movements. These ancillary or expressive movements (Davidson 1993) can convey expressive information about the performance. In this manner they have been used as control parameters in Hyperinstruments (1.6.2.2.2), using the expressiveness conveyed by these gestures as control inputs to a digital instrument.

Delalande (1988) considers the notion of gesture crucial in music, combining the observable performance actions with mental representations and linking performers with listeners through the creation of these mental representations. Delalande writes that gesture— lies in the intersection of two axes: one that binds together an observable action (the gesture of the instrumentalist) and a mental representation (the fictive movements evoked by sound forms), and another one that establishes a link between the interpreter (that produces the gestures) and the listener (who guesses, symbolises and transforms them on an imaginary plane) (translated in Miranda and Wanderley 2006:8).

The listener’s mental model of an interactive system contributes significantly to how interactive or reactive that system will be perceived (4.3.5). If the system does not allow a listener to learn and guess the sonic response to a given input, then the system will most likely be perceived as random and chaotic (or, conversely, as static, like a pre-rendered composition). Likewise, a system that is too predictable—in that the listener knows, rather than guesses, the connection between input and response—will most likely be perceived as reactive and instrument-like, rather than interactive.


Cadoz (1988) classified performers’ hand gestures used in instrumental performance according to their function—
• Semiotic–to communicate meaningful information and results from shared cultural experience
• Epistemic–learning from the environment through tactile experience or haptic exploration
• Ergotic–the notion of work, manipulating the physical world, an energy transfer between hand and object (Mulder 1996).

While many interactive systems utilise remote sensing interfaces (1.6.2.1) such as ultrasonic, infrared, and video tracking, equally, many interactive systems and digital instruments require ergotic movements—the manipulation of physical interfaces (1.6.2.2)—typically via the hands, to perform and interact with them. Pressing (1991) classifies ergotic hand movements into three subcategories— Use of control effect: modulation (parametric change), selection (discrete change), or excitation (input energy)

Use of kinetic images: scrape, slide, ruffle, crunch, glide, caress etc.

Use of spatial trajectory: up, down, left, right, in, out, circular, sinusoidal, spiral, etc.

It is interesting to reflect on just how many words there are to describe hand gestures. Table 4.1 shows a list of words describing hand gestures identified by Mulder (1996). The list is organised into three broad groupings—goal directed manipulation, empty-handed gestures, and haptic exploration (Lederman and Klatzky 1987). These are loosely related to Cadoz’s classifications of ergotic, semiotic, and epistemic respectively.


Goal directed manipulation
Changing position: lift, move, heave, raise, move, translate, push, pull, draw, tug, haul, jerk, toss, throw, cast, fling, hurl, pitch, depress, jam, thrust, shake, shove, shift, shuffle, jumble, crank, drag, drop, pick up, slip
Changing orientation: turn, spin, rotate, revolve, twist
Changing shape: mold, squeeze, pinch, wrench, stretch, extend, twitch, smash, thrash, break, crack, bend, bow, curve, deflect, tweak, cut, spread, stab, crumble, rumple, crumple up, smooth, fold, wrinkle, wave, fracture, rupture
Contact with the object: grasp, seize, grab, catch, embrace, grip, lay hold of, hold, snatch, clutch, take, hug, cuddle, hold, cling, support, uphold
Joining objects: tie, pinion, nail, sew, button up, shackle, buckle, hook, rivet, fasten, chain up, bind, attach, stick, fit, tighten, wriggle, pin, wrap, envelop
Indirect manipulation: whet, set, strop

Empty-handed gestures
twiddle, wave, snap, point, hand over, give, take, urge, show, size, count, wring, draw, tickle, fondle, nod, wriggle, shrug

Haptic exploration
touch, stroke, strum, thrum, twang, knock, throb, tickle, strike, beat, hit, slam, tap, nudge, jog, clink, bump, brush, kick, prick, poke, pat, flick, rap, whip, hit, slap, struck, caress, pluck, drub, wallop, whop, thwack, rub, swathe

Table 4.1 Mulder’s list of words describing hand gestures206

McNeill (1992) proposes that gestures should not be considered equivalent to speech, but instead that gesture and speech complement each other. To this extent there may well not be equivalent language for every gesture (Mulder 1996). Paine (2007) considers that each person has a unique gesture vocabulary or body language— I also suggest that each person has a unique vocabulary of gesture, a ‘body language’ that if read with sufficient detail brings into being an individual sonic signature.

206 From Mulder, A. (1996), Hand Gestures for Hci. Hand Centered Studies of Human Movement Project, Technical Report 96-1. http://xspasm.com/x/sfu/vmi/HCI-gestures.htm, viewed 1/5/2007.


Paine (2007) further considers that through physical gesture the nature of a space is revealed— we come to know space through body and movement, an examination in time - it takes time to move through space, to understand the relationships of and within space that through a myriad of complex sensual relationships provide us with a notion of place.

It is the feedback resulting from a gesture that enables a space to be sensed and understood in this way. In discussing the use of gesture in the context of interactive systems, it is useful to define the types of feedback the system provides to the performer. Feedback can be categorised as visual, auditory or tactile-kinaesthetic. Feedback can be further classified as primary or secondary and passive or active (Miranda and Wanderley 2006).

Tactile perception receives its information through the cutaneous sensitivity of the skin, without movement of the body, while the kinaesthetic sense is the awareness of movement, position and orientation of limbs and parts of the body. Together, tactile-kinaesthetic feedback, also referred to as haptic or tactual (Bongers 2000), contributes to the sense of touch.

Secondary feedback is the resulting sound produced by the system or instrument (Figure 4.5). The primary feedback is the visual, auditory and tactile-kinaesthetic feedback from the system or instrument itself, for example key clicks, lights, and feel of the interface. Similarly active feedback is the response of the system to a user action, for example the sound of the instrument. Passive feedback is provided from the physical characteristics of the system, for example the noise of an interface switch (Miranda and Wanderley 2006:11).

Remote sensing interfaces (1.6.2.1), while offering considerable freedom of movement and consequently the potential to sense a wide variety of gestural input, lack the strong primary and passive feedback naturally provided by the manipulation of physical interfaces (1.6.2.2). Remote sensing interfaces rely primarily on the secondary and active feedback provided by the sonic response of the system, a feature that for many artists is in fact desirable. Instruments and systems such as Waisvisz’s The Hands (3.1.1) and Jordà’s reacTable (3.3.1) can be considered as providing a hybrid of both interface types, supporting the primary and passive feedback provided by physical interfaces yet still maintaining some of the flexibility of gestural input offered by remote sensing, employing ultrasonic and video sensing technology respectively.


Chapter Five: Folio of Creative Works

This chapter presents the folio of creative works. Five works are discussed—Book of Changes, Plus Minus, Sonic Construction, Sounding the Winds and Six Degrees of Tension, each exploring a different interactive model and design. All the works are documented on the accompanying CD-ROM and audio CD (Appendix A).

5.1 Book of Changes

5.1.1 Overview

Book of Changes is an interactive work for two instrumentalists and interactive computer system. Conceptually the work was designed as a system in which performers would play with an interactive computer system functioning as a third performer (4.3.2.3). Rather than following a model in which the computer tracks the performers, interpreting and responding accordingly (4.3.2.1), the Book of Changes computer system directs the performers, using a simple technique of displaying graphic icons to indicate which sections to play.

Originally scored for bass clarinet, cello and computer, a second version was later arranged for piano, violin and computer. The title of the work, Book of Changes, is a direct reference to the ancient Chinese mystical divination text the I Ching (Wilhelm and Baynes 1967). The predictions of the I Ching are represented by a set of sixty-four abstract line arrangements called hexagrams. Each of the hexagrams can also be considered as a pair of three-line arrangements called trigrams. There are eight possible trigrams, each in turn derived from four basic symbols. Thus the sixty-four hexagrams of the I Ching are the result of different combinations of the eight trigrams. The hexagrams are cast through coin tossing or casting yarrow sticks, a process which is in essence a biased random number generator. Similarly, Book of Changes selects from eight basic sonic fragments or phrase structures through Markov chain decision processes, assigning them to the two performers and the computer part, realising the work through this process. While no two performances will ever be exactly the same, it is not a completely random or aleatoric work, but rather a piece that has a clearly identifiable compositional identity, reflecting different moods or characteristics with each performance.

5.1.2 System Architecture

Book of Changes can be considered as a reversal of the score following paradigm (1.6.2.3.2) in which the computer follows the performer. Instead, Book of Changes requires the performers to follow the computer. Each performer is provided with a notated score consisting of eight score fragments (Figure 5.1).

Figure 5.1 Example score fragment from Book of Changes

Each score fragment is associated with a graphic icon (Table 5.1). During the performance the computer displays the graphic icons in succession, directing the performers as to which score fragment to play. A blank image indicates the performer is to remain silent until the next icon is displayed. The graphic icons associated with the performer’s score fragments are projected onto a large screen via a data projector so that both the performers and the audience can see the projection


(Figure 5.2). Icons rather than numbers were used to avoid imposing any unintentional hierarchy onto the score fragments.

Figure 5.2 Book of Changes main projection screen–two icons are displayed referencing two score fragments for the piano and violin parts. Sliders on the left and right display the time remaining for the currently selected fragments.


Table 5.1 Book of Changes–the graphic icons associated with the eight score fragments. The blank icon indicates a silent section.

5.1.2.1 Markov Chains

The computer’s choice of which specific score fragment to choose is determined through the use of predetermined first order Markov chains. Formulated in 1906 by the Russian mathematician A. A. Markov, a Markov chain is a “probability system in which the likelihood of a future event is determined by the state of one or more events in the immediate past” (Roads 1996:878). Markov chains have a long history of use in the context of algorithmic music composition. Rather than encode the probabilities in a state transition table, I constructed a separate patch (Figure 5.3) for each state in the probability chain (Figure 5.4). This facilitated easy modification of the state transition values during rehearsal.


Figure 5.3 Book of Changes–example of an individual probability function for each step
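A first-order Markov selection of this kind can be sketched in a few lines of Python. The sketch is illustrative only: the work itself encodes each state’s probabilities in its own Max patch rather than a transition table, and the uniform weights used below are a placeholder for the values tuned in rehearsal.

import random

FRAGMENTS = list(range(9))  # fragments 0-7, with 8 representing the blank (silent) icon

# Hypothetical first-order transition weights: row = current fragment, column = next.
# Here every fragment other than the current one is equally likely.
TRANSITIONS = [[0 if i == j else 1 for j in FRAGMENTS] for i in FRAGMENTS]

def next_fragment(current):
    """Choose the next fragment given only the current one (first-order behaviour)."""
    return random.choices(FRAGMENTS, weights=TRANSITIONS[current])[0]

fragment = 8  # begin from silence
for _ in range(10):
    fragment = next_fragment(fragment)
    print("display icon for fragment", fragment)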

For each choice of score fragment made, a corresponding duration for that passage is also chosen from a table of preset duration values. Similarly the computer is assigned its own part, chosen from a set of eight predefined possibilities defining audio buffer selections for playback and processing functions to be applied to the performers’ acoustic inputs into the system. The computer’s choice of response material is likewise made through the use of Markov chains and assigned a duration time for the response from a table of values.

Figure 5.4 Book of Changes–the eight probability choices for each score fragment


5.1.2.2 Duration Choice

Separate durations are assigned by the system to each score fragment chosen for the two performers and to the computer’s response choice. In its default state the system assigns one of three possible durations to a score fragment. The specific default duration values are twenty, thirty, and forty seconds. The simple proportional relationships between these values ensure that there are both moments of synchronicity and overlapping between the start of new fragments as they are assigned to the two instrumental performers and the computer itself. The durations assigned to the performers’ parts are displayed on screen together with the graphic icons, both numerically and via graphic sliders showing the passage of time through the current section.

5.1.2.3 Ending a Performance

If a performer has not completed playing their score selection and a new icon is displayed for their part, they are required to immediately move on to the new section. If they complete their performance of the current score selection before the end of the currently assigned duration, they are then free to make a musically appropriate choice between remaining silent, repeating, or improvising with the currently chosen material. The piece ends when all three parts are assigned silence concurrently. In reaching this state it is more than likely that one of the parts, the computer or either of the two performers, will end the work solo. The performers can assign an overall duration for the work, thus forcing the system into this end state at a predefined time. A total duration of four to eight minutes is suggested for the piece. Independent of the overall piece length, the performers can also modify the overall duration lengths assigned to the score fragments, selecting between very fast, fast, medium and slow, while still maintaining the proportional relationships between the three duration options.
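The duration assignment and the end condition can be summarised in a short sketch (hypothetical Python, using the default values given above). The real piece reassigns each part on its own schedule as its current duration elapses; the sketch collapses that to one step per iteration purely to show the end condition.

import random

DEFAULT_DURATIONS = [20, 30, 40]   # seconds, scaled by the very fast/fast/medium/slow setting
SILENCE = 8                        # index of the blank icon

def assign(part):
    """Choose the next fragment and duration for one part (performer or computer)."""
    part["fragment"] = random.randrange(9)        # stands in for the Markov chain choice
    part["duration"] = random.choice(DEFAULT_DURATIONS)

parts = [{"fragment": None, "duration": 0} for _ in range(3)]   # two performers + computer
while True:
    for part in parts:
        assign(part)
    if all(part["fragment"] == SILENCE for part in parts):
        break   # the piece ends when all three parts are assigned silence concurrently
print("performance ended")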


5.1.3 Interacting with the System

Book of Changes was developed through a series of workshops in 2002 with bass clarinettist Ros Dunlop and cellist Julia Ryder. Performing together under the name Charisma207 they played the work extensively over 2002–2003 including presentations at Logos Foundation208, Belgium (2002) and the Darwin International Guitar Festival, Australia (2002). A version of the work arranged for piano and violin was performed at the 2003 International Computer Music Conference, Singapore (Kim-Boyle 2004)209.

The system has prior knowledge pre-encoded into the Markov chains. Using this knowledge the system directs the performers as to what score fragment to play and for how long, based on the last choice made. The system also directs itself, defining how the performers’ audio will be processed and what audio buffers to play. In this sense the system is autonomous, in total control of both the performers and itself.

5.1.4 Interaction Design

Book of Changes incorporates elements of Rowe’s computer-as-player paradigm (4.3.2.3) and includes transformative, generative and sequenced methods (4.3.2.2). Pre-sequenced elements are used as both score fragments for the performers to play and internally stored material to be played by the computer system. The computer system also transforms the performers’ material via digital signal processing and generates ever changing paths through the composition, defining for both the performers and itself which score fragments are to be played and for how long. Comparisons can also be made with Lewis’ Voyager (3.4.1) and score following paradigms such as Lippe’s Music for Clarinet and ISPW (1.6.2.3.2). However, in Book of Changes the computer does not follow the performers but instead directs what the performers are to play. So rather than the computer improvising or playing its own pre-composed part, it is the human performers who become an integral part of the interactive system, under the control of the computer

207 http://www.greatwhitenoise.net/charisma/ viewed 1/5/2007. 208 http://logosfoundation.org/ viewed 1/5/2007. 209 http://music.nus.edu.sg/events/icmc2003/concert_prog.php#fri_noon viewed 1/5/2007.

system and its algorithms. For the performers this proved to be somewhat disconcerting, as it undermined their classical training to perform pre-composed, consistent linear narratives and instead put them under the direction of the computer system.

While Book of Changes was certainly successful in creating a system in which the computer was able to influence the performers by directing them through different compositional paths, it relied heavily on pre-composed (sequenced) elements and a predefined compositional framework, despite the decision making abilities of the system. I felt that although the performers were incorporated into the interactive system, the interaction lacked a sense of conversation (4.3.6) between the performers and the system. What was needed was a system that could learn from the musical events taking place and respond accordingly. This became the focus of Plus Minus (+-).


5.2 Plus Minus (+-)

5.2.1 Overview

Plus Minus (+-) uses a similar Markov chain (5.1.2.1) model to Book of Changes (5.1), but rather than using predefined probability transition tables, Plus Minus dynamically creates the transition tables, thus learning from the performer’s musical choices and responding in kind, not with an exact repetition but with variation and with the possibility of change. Thus Plus Minus was designed to behave in a way more like a conversation between computer system and performer (4.3.6).

Plus Minus is an interactive system for performer and computer (typically laptop) exploring nineteen-tone equal temperament, intended for use in live performance contexts. Sonically, Plus Minus is based on pure sine tones and focuses on the intervallic relationships inherent in nineteen-tone equal temperament; specifically the beating effects created by sounding together adjacent notes in the tuning system and close approximations to just-tuned minor thirds and major sixths. Each time a note is played, or rather initiated, it is assigned a duration time from a range of five to forty-five seconds and panned at varying rates between the output channels. Initiating a note with the system is as if the note were dropped into a pool of sound, slowly rippling between the speakers, interacting with the other frequencies present and slowly decaying.

5.2.2 System Architecture

The system can be interacted with and performed in a number of different ways. Playing the system as an instrument, the performer has complete control over all six monophonic sine-tone voices, controlling pitch, duration and panning rate for each note initiated. The system can also learn pitch and rhythmic patterns played by the performer using second-order Markov chains (5.1.2.1). Using this technique, the system looks two steps back in the event list to determine the next choice. The system can use these learnt transition probabilities to either play with the performer as a parallel part or play itself as a solo. The Markov chain transition probabilities can be continually updated during a performance and can be stored for later use, for example to initiate a future performance. Stochastic control of the instrument is also

possible with the performer controlling note range distribution and density, rather than individual pitch relationships. Stochastic performance can also be routed to update the Markov chain transition probabilities. Reverberation and granular effects processors are inserted at the end of the signal paths and can be brought into the mix either manually or automated via pre-defined time-line based envelopes.
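A second-order version of the learning process, building transition probabilities from the notes the performer plays, might look like the following sketch. This is hypothetical Python for illustration only; the work itself uses Max’s prob and anal objects (5.2.2.4), and the example pitch values are arbitrary scale degrees.

import random
from collections import defaultdict

# Second-order transitions: the pair of the last two pitches played by the
# performer determines the weights over the next pitch.
transitions = defaultdict(lambda: defaultdict(int))

def learn(performed_pitches):
    for a, b, c in zip(performed_pitches, performed_pitches[1:], performed_pitches[2:]):
        transitions[(a, b)][c] += 1

def generate(seed_pair, length):
    a, b = seed_pair
    out = [a, b]
    for _ in range(length):
        options = transitions.get((a, b))
        if not options:                       # unseen context: fall back to repetition
            nxt = b
        else:
            pitches, counts = zip(*options.items())
            nxt = random.choices(pitches, weights=counts)[0]
        out.append(nxt)
        a, b = b, nxt
    return out

learn([0, 6, 12, 6, 0, 6, 15, 6, 0])          # hypothetical scale degrees in 19-tone equal temperament
print(generate((0, 6), 8))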

5.2.2.1 Nineteen-Tone Equal Temperament

One of the starting points for the work, from a sonic perspective, was an interest in exploring the beating effects that can occur between two or more closely pitched notes, sounded simultaneously. By empirically exploring different equal divisions of the octave, I discovered that nineteen-tone equal temperament provided both pitch beating effects and other interesting intervallic relationships while at the same time employing a division of the octave that was not too unwieldy in terms of numbers of notes per octave. Research into microtonal tuning systems (Darreg 1982; Yasser 1932) revealed that nineteen-tone equal temperament provides an almost just minor third and major sixth and close approximations to perfect fifths and major thirds (Table 5.2).


Note degree | Value in cents | Nearest just interval | Difference in cents from just interval | Interval name
1 | 0 | 1/1 | 0 | unison
2 | 63.158 | 36/35 | +14.388 | 1/4-tone, septimal diesis
3 | 126.316 | 15/14 | +6.873 | major diatonic semitone
4 | 189.474 | 10/9 | +7.070 | minor whole tone
5 | 252.632 | 7/6 | -14.239 | septimal minor third
6 | 315.789 | 6/5 | +0.148 | minor third
7 | 378.947 | 5/4 | -7.367 | major third
8 | 442.105 | 9/7 | +7.021 | septimal major third
9 | 505.263 | 4/3 | +7.376 | perfect fourth
10 | 568.421 | 7/5 | -14.092 | septimal tritone
11 | 631.579 | 10/7 | -14.091 | Euler’s tritone
12 | 694.737 | 3/2 | -7.218 | perfect fifth
13 | 757.895 | 14/9 | -7.021 | septimal minor sixth
14 | 821.053 | 8/5 | +7.366 | minor sixth
15 | 884.211 | 5/3 | -0.148 | major sixth
16 | 947.368 | 12/7 | +14.238 | septimal major sixth
17 | 1010.526 | 9/5 | -7.070 | just minor seventh
18 | 1073.684 | 15/8 | -14.585 | classic major seventh
19 | 1136.842 | 35/18 | -14.388 | octave - septimal diesis

Table 5.2 Table of nineteen-tone equal temperament intervallic relationships compared to equivalent just intonation intervals210
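The cents values in Table 5.2 follow directly from the equal division of the octave into nineteen steps. A short Python sketch, included only to make the calculation explicit, reproduces the minor third and major sixth deviations quoted above.

import math

def tet19_cents(steps):
    """Cents above the tonic after a given number of steps in 19-tone equal temperament."""
    return steps * 1200.0 / 19

def deviation_from_just(steps, ratio):
    """Difference in cents between a 19-TET interval and a just-intonation ratio."""
    return tet19_cents(steps) - 1200.0 * math.log2(ratio)

print(round(tet19_cents(5), 3))                   # 315.789, the 19-TET minor third
print(round(deviation_from_just(5, 6 / 5), 3))    # +0.148 cents from just 6/5
print(round(deviation_from_just(14, 5 / 3), 3))   # -0.148 cents from just 5/3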

5.2.2.2 Interface

The system is performed via six modified keyboard interfaces using the computer’s mouse (Figure 5.5). The keyboard interfaces have been designed to support the nineteen-tone equal temperament and are implemented using Max’s javascript user interface object—jsui.

210 Based on http://www.microtonal-synthesis.com/scale_19tet.html viewed 1/5/2007.


Figure 5.5 Customised keyboard interface for Plus Minus

Each keyboard represents one of the six independent voices of the system, each with its own individual automated panning function, selectable duration, envelope shape for amplitude and reverberation parameters. These aspects of the sound can be modified during a performance via graphic interfaces accessible from sub-patches associated with each voice (Figure 5.6).

5.2.2.3 Automated Panning Function

The automated panning function uses two oscillators (cycle~ objects), one phase shifted, to automate equal-power panning effects between the output channels. The speed of these two panning LFOs can itself be increased and decreased automatically, creating a panning effect that is in a continuous cycle of speeding up and slowing down. The automation can be turned off and the panning rate controlled manually.
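The behaviour can be approximated as a low-frequency pan oscillator whose rate is itself slowly modulated, feeding an equal-power pan law. The sketch below is hypothetical Python rather than the cycle~-based Max patch, and all rate and depth values are arbitrary.

import math

def pan_gains(duration=10.0, sample_rate=100, base_rate=0.2,
              rate_mod_rate=0.02, rate_mod_depth=0.15):
    """Yield equal-power (left, right) gains over time.

    The pan position is a low-frequency oscillator whose rate is itself slowly
    modulated, so the panning continually speeds up and slows down.
    """
    phase = 0.0
    dt = 1.0 / sample_rate
    for n in range(int(duration * sample_rate)):
        t = n * dt
        rate = base_rate + rate_mod_depth * math.sin(2 * math.pi * rate_mod_rate * t)
        phase += 2 * math.pi * rate * dt               # integrate the varying rate
        position = 0.5 + 0.5 * math.sin(phase)         # 0 = hard left, 1 = hard right
        yield math.cos(position * math.pi / 2), math.sin(position * math.pi / 2)

for left, right in list(pan_gains())[::100]:           # print roughly once per second
    print(round(left, 2), round(right, 2))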


Figure 5.6 Plus Minus–interface for pan and envelope controllers assigned to each voice

5.2.2.4 Performance Options

Max’s prob and anal objects are used to calculate and store the second-order Markov chain transition probabilities, derived from the performer’s choice of pitch intervallic relationships and note durations. The system can also be performed stochastically, generating clouds of notes, through the use of tables of pitches selected randomly. The process is controlled by graphically selecting probability ranges (high and low) and note densities. A granular signal processing effect created using Pluggo’s211 Fragulator plug-in can be applied to the outputs of the system and returned to the main mix, with or without the original source signals. The amplitude of the effect outputs returned to the main mix can be controlled manually or automated through the use of graphically defined envelope functions.

211 A collection of audio plug-ins that support Audio Unit, VST, and RTAS plug-in format. http://www.cycling74.com/products/pluggo viewed 1/6/2007.
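The stochastic performance mode described above can be thought of as drawing notes from a constrained pitch range at a chosen density. The following sketch is a hypothetical Python illustration only (the work itself uses Max tables and graphical controls), with the range and density values chosen arbitrarily.

import random

def note_cloud(low, high, density, duration):
    """Generate (onset, pitch) pairs: roughly 'density' notes per second drawn from low-high.

    low and high stand in for the performer's graphically selected probability range;
    density is the note-cloud density control.
    """
    events = []
    t = 0.0
    while t < duration:
        t += random.expovariate(density)      # Poisson-like spacing of note onsets
        events.append((round(t, 2), random.randint(low, high)))
    return events

for onset, degree in note_cloud(low=19, high=38, density=2.0, duration=5.0):
    print(onset, degree)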


5.2.3 Interacting with the System

The intention was to create a system in which the musical outcomes were the holistic result of interacting with the system, rather than conceptualising the system as a duet between human performer and computer, or as a soloist performing a computer based instrument. To this extent, the use of Markov chains and stochastic processes was intended as a means for the performer to guide, teach and interact with the system’s processes, rather than to create an accompaniment for the performer to play along with.

Constraining the performance interface to the computer’s mouse and QWERTY keyboard to control the system’s various graphic interfaces enabled me to focus on a reduced set of interactive possibilities and thereby gain a more thorough understanding of the system’s responses and interactive potentials. Although interaction with the system was primarily event based, with possible actions consisting of clicking notes on the nineteen-tone keyboards, selecting durations, defining probability ranges, drawing envelopes, selecting panning functions and fading effects in and out of the mix, interactions with the system also affected dynamic processes. For example, although I could ‘guesstimate’ the effect on the musical output of adding new pitch relationships into the Markov chains, I could not predict the exact effect on the musical response, or when in time the effect might occur.

Plus Minus has had a number of public presentations including performances at the first Xenharmonic Horizons212 concert, Sydney Conservatorium of Music, The University of Sydney (2002) and the Sonic Connections Festival213, University of Wollongong (2003). I found that I was able to gain expertise with the system relatively quickly and easily. Performing Plus Minus revealed that, although many aspects of the system could behave autonomously, the more expert I became, the more control I had over moulding, shaping and directing the musical output. While such a system has obvious advantages for creating pre-planned performances,

212 http://www.squelch.com.au/xenharmonic/ viewed 1/5/2007. 213 http://www.uow.edu.au/crearts/sonicconnections/SonicConnectionsPrograms.pdf viewed 1/5/2007.

improvisations or compositions, the more control that I perceived I had over the system, the less interactive it felt.

Another aspect of the work revealed through performance was that, for many audience members, the difference between a performer sitting in front of a laptop screen generating sound interactively or, by comparison, simply playing pre-rendered audio files was debatable. Issues of the laptop used in the context of live musical performance have been discussed extensively (Cascone 2000, 2003). However, it is of relevance in the current context to identify that in the absence of other clear non-musical cues, for example physical performance gestures or video displays, audience members could only rely on the sonic output of the work to understand its interactive nature. A number of listeners considered that seeing the interface displayed during the performance would have been of benefit in understanding the interactive nature of the performance.

In contrast to Book of Changes (5.1), Plus Minus can behave both as an instrument, controlled by the performer, and as an autonomous system. As an autonomous system Plus Minus can perform itself based on rules learnt through a performer’s interaction with the system, or through stochastic processes. A performer interacts with the system by updating the Markov transition tables through performing with the system, controlling stochastic parameters and modifying other aspects of the system such as panning functions and signal processing parameters.

5.2.4 Interaction Design

Influences for Plus Minus can be found in Spiegel and Mathews’ concept of “intelligent instruments” (1.6.2, 2.2.3, 2.3.1) with the system responding to the performer’s input in a multitude of ways. Influences also came from Chadabe’s notion of “interactive composing” (4.2.1), in which control of the music is shared between performer and computer system. But unlike Lewis’ Voyager (3.4.1), which also learns from the performer’s input to the system, Plus Minus is not a duet between performer and computer but rather it is the interaction between the performer and computer system that creates the musical outcomes. To this extent we

can see Plus Minus as a conversation model of interaction, as described by Chadabe and Paine (4.3.6).

The use of performance gesture (4.4.6) in Plus Minus is minimal, with similarities to the computer-based performance style of networked ensembles such as The League of Automatic Music Composers (2.4.1), The Hub (2.4.2), and austraLYSIS electroband (2.4.3). As a consequence a fair amount of trust is required between audience and performer for the audience to understand and recognise the interactive nature of the performance. Unlike Book of Changes (5.1), which occupied a constrained compositional space and set of potentials, Plus Minus’ response is determined by the performer’s inputs to the system. So although the timbre and tuning space is constrained, the compositional shape of the work is completely dependent upon the feedback loop of the performer’s choices, the system’s response and subsequent choices made by the performer. To this extent, with practice, I found I was able to guide the system quite expertly and with some degree of precision through a performance, the implication of which is that while the system engaged in a conversation of sorts, the performer had the final say on the direction of the conversation. The development of a system whereby control was a secondary consideration gave rise to Sonic Construction.


5.3 Sonic Construction

5.3.1 Overview

In creating Sonic Construction I wanted to further explore the idea of a conversation model as employed in Plus Minus (5.2). After some experience and practice with Plus Minus, the performer seems to have the upper hand in the conversation, able to guide the interactive system reasonably expertly through many different compositional spaces. I wanted to create a system over which I had less direct control; ideally being influenced by the system’s responses as much as I was influencing the system. I also sought to create a system that would give an audience or spectator a greater sense of connection with, and insight into, the interactive processes being employed than was reported for Plus Minus. I chose to do this by providing a clear and strong visualisation of the gestural inputs to the system.

Inspired by the way water and other fluids move, for example the swirls and vortices of a mountain stream or the lemniscate-like patterns created by Flowforms (Wilkes 2003), Sonic Construction uses the movement of coloured dyes in a semi-viscous liquid to generate and control sound. The system is performed by dropping different coloured dyes (red, green, yellow, blue) into a clear glass vessel filled with water, made slightly viscous through the addition of a sugar syrup (Figure 5.7). Using video tracking technology, the speed, colour and spatial location of the different coloured drops of dye are analysed as they are dropped into the glass vessel and subsequently expand, swirl, coil and entwine in the water. The control data derived from the video tracking of the dye drops is used to define both the shape and the way in which individual grains of sound are combined using FOF214 synthesis (Rodet 1984, 2000), to create a rich and varied timbral sound environment.

214 FOF: Fonction d’Onde Formatique translated as Formant Wave-Form or Formant Wave Function.


Figure 5.7 Performance interface for Sonic Construction

Timbres produced by the system include bass-rich pulse streams, vocal textures and a variety of bell-like sounds. The fluid movement of the coloured dye in the liquid is further used to spatialise the outputs of the FOF synthesis engine. To this extent the system is intended for use with a speaker array of configurable number and arrangement. The video capture of the dyes in the liquid, used for motion analysis and colour matching, is also projected back into the performance or installation space via a data projector. The projected video is slightly processed using contrast, saturation and hue effects (Figure 5.8).


Figure 5.8 Sonic Construction–re-projected image into the installation space215

5.3.2 System Architecture

5.3.2.1 Video Analysis

A FireWire216 iSight217 video camera is used to capture and digitise the performance interface, capturing the initial descent of the drops of dye in the liquid, the subsequent shapes that slowly unfold and the changing colours created by the dyes in the liquid (Figure 5.9). The video analysis and processing functions have been programmed in Jitter218. Motion analysis of the live video is performed using the Tap.Tools219 tap.jit.motion object while colour matching is performed using Eric Singer’s jit.cyclops220 object.

215 From performance at 1/4inch, University of Wollongong, Australia 2005. 216 Apple’s name for the IEEE 1394 High Speed Serial Bus. http://developer.apple.com/hardwaredrivers/firewire/index.html Viewed 1/5/2007. 217 Specifications for the external version of Apple’s iSight video camera: f-2.8 lens, focus from 50mm to infinity, 1/4-inch colour CCD image sensor, 640x480 VGA resolution, full motion video capture at up to 30 frames-per-second, 24-bit colour. http://www.apple.com/uk/isight/ Viewed 1/5/2007. 218 http://cycling74.com/products/jitter viewed 1/5/2007. 219 Timothy Place Tap.Tools http://electrotap.com/taptoolsmax/ Viewed 1/5/2007. 220 Eric Singer (Code Artistry LLC) Cyclops http://www.ericsinger.com/cyclopsmax.html Viewed 1/5/2007.


Figure 5.9 Sonic Construction–signal flow

The tap.jit.motion object provides both a normalised measure of the amount of motion and coordinates (x, y) for the centre of the motion, detected by comparing successive video frames to determine the difference and the weighted location of the difference. By slicing the video into four separate regions, the system is able to perform motion analysis on four distinct spatial areas as well as the whole video window. The jit.cyclops object is used to perform colour matching by comparing input colour values to a preset palette of colours. The test colour palette can be preset by graphically selecting regions of the video input window. This was typically done before a performance to calibrate the system to account for different lighting conditions.
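The motion measure and weighted centre of change can be approximated by simple frame differencing. The NumPy sketch below stands in for the tap.jit.motion analysis and is not that object’s actual implementation; frame sizes and values are illustrative.

import numpy as np

def motion_analysis(prev_frame, frame):
    """Return (amount, x, y): normalised motion and the weighted centre of change.

    Frames are greyscale arrays of shape (height, width) with values 0-255.
    """
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    total = diff.sum()
    if total == 0:
        return 0.0, 0.5, 0.5
    height, width = diff.shape
    ys, xs = np.mgrid[0:height, 0:width]
    x = (xs * diff).sum() / total / (width - 1)    # weighted centroid, normalised 0-1
    y = (ys * diff).sum() / total / (height - 1)
    amount = total / (diff.size * 255.0)           # normalised amount of motion
    return amount, x, y

prev = np.zeros((120, 160), dtype=np.uint8)
curr = prev.copy()
curr[40:60, 100:120] = 200                         # a patch of dye begins to move
print(motion_analysis(prev, curr))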

5.3.2.2 FOF Synthesis

The results of the motion analysis and colour matching functions are used to control FOF synthesis parameters. FOF synthesis uses fragments of exponentially decaying sinusoids, shaped by an amplitude envelope consisting of a cosine shape for the

attack and decay with a flat sustain (Eckel et al. 1995). It was devised by Xavier Rodet (1984) from research into speech synthesis and was first incorporated into IRCAM’s CHANT software. Sonic Construction uses Michael Clarke and Rodet’s fofb~ object, developed to enable real-time FOF synthesis in Max/MSP (Clarke and Rodet 2003). The fofb~ object can generate grains with multiple formants, with each formant defined by the following parameters—
• Formant number
• Amplitude (amp)
• Formant frequency (formantfrq)
• Bandwidth, exponential decay (bndwdth)
• Temps d’excitation – impulse attack time (tex)
• Début d'atténuation – starting time of the decay (debatt)
• Atténuation – duration (atten).
The fofb~ object is also defined by its fundamental frequency and an octaviation coefficient (Figure 5.10).

Figure 5.10 Sonic Construction–parameters used to control the FOF Synthesis
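The grain structure described above can be sketched in a few lines of Python (NumPy assumed). The sketch below illustrates only the basic principle of FOF synthesis, an exponentially decaying sinusoid with a raised-cosine attack, overlap-added at the fundamental rate; the parameter values are arbitrary and the debatt and atten controls of fofb~ are omitted.

import numpy as np

def fof_grain(formant_freq, bandwidth, tex, duration, sr=44100, amp=1.0):
    """A single FOF grain: an exponentially decaying sinusoid at the
    formant frequency, with a raised-cosine attack of length tex.
    bandwidth (Hz) sets the decay rate (wider bandwidth = faster decay)."""
    t = np.arange(int(duration * sr)) / sr
    envelope = np.exp(-np.pi * bandwidth * t)            # exponential decay
    attack = t < tex
    envelope[attack] *= 0.5 * (1 - np.cos(np.pi * t[attack] / tex))  # cosine attack
    return amp * envelope * np.sin(2 * np.pi * formant_freq * t)

def fof_stream(fundamental, formant_freq, bandwidth, tex, grain_dur,
               total_dur, sr=44100):
    """Overlap-add identical grains at the fundamental rate, producing a
    pitched tone with a formant around formant_freq."""
    out = np.zeros(int(total_dur * sr))
    grain = fof_grain(formant_freq, bandwidth, tex, grain_dur, sr)
    period = int(sr / fundamental)
    for start in range(0, len(out) - len(grain), period):
        out[start:start + len(grain)] += grain
    return out

# e.g. a 110 Hz tone with a formant around 600 Hz (illustrative values)
signal = fof_stream(110.0, 600.0, 80.0, 0.003, 0.04, 2.0)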

In Sonic Construction, I assign a separate control data envelope for each of these parameters, defined using Max/MSP’s graphical breakpoint function editor (Figure 5.11). The results of the motion analysis are used as inputs to these control data

envelopes. Specifically, the amount of motion of the coloured dyes, as measured by the video analysis functions, is used as x inputs into the breakpoint functions, returning the corresponding y values for use as FOF synthesis control data. The different colours detected by the video analysis functions are used to assign different preset control data envelopes. This method allowed me to define distinct timbral spaces that could be explored through the amount of movement of the coloured dyes. The graphical breakpoint functions provide a clear visualisation of the different parameter spaces, ranging from little or no movement to large amounts of turbulence and movement in the system.

Figure 5.11 Sonic Construction–control data envelopes for mapping movement to FOF synthesis parameters
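The mapping can be sketched as a simple breakpoint-function lookup, shown below in Python. The preset envelopes and their values here are hypothetical; in the actual system they are drawn graphically in Max/MSP’s breakpoint function editor and stored as presets per colour.

def breakpoint_lookup(breakpoints, x):
    """Linearly interpolate a breakpoint function at position x.
    breakpoints is a list of (x, y) pairs sorted by x."""
    xs, ys = zip(*breakpoints)
    if x <= xs[0]:
        return ys[0]
    for (x0, y0), (x1, y1) in zip(breakpoints, breakpoints[1:]):
        if x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return ys[-1]

# Hypothetical presets: one control envelope per FOF parameter,
# indexed by the colour the video analysis has matched.
presets = {
    "red": {
        "formantfrq": [(0.0, 200.0), (0.5, 800.0), (1.0, 2400.0)],
        "bndwdth":    [(0.0, 20.0),  (1.0, 120.0)],
        "tex":        [(0.0, 0.001), (1.0, 0.01)],
    },
    # ... further colours, each with its own set of envelopes
}

def map_motion_to_fof(colour, motion_amount):
    """Use the normalised motion amount (0..1) as the x input to each of
    the colour's control envelopes, returning FOF parameter values."""
    envs = presets[colour]
    return {name: breakpoint_lookup(bp, motion_amount)
            for name, bp in envs.items()}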

A second formant is defined sharing the same parameter values, except for the formant frequency, which is assigned its own control data envelope, defining a control ratio with respect to the first formant’s frequency. The effect of adding the second formant is to create a richer, warmer timbral space. Two separate fofb~ objects are used, effectively functioning as two separate voices, each with its own control data envelopes and presets. Each FOF voice has nine sets of preset control data envelopes associated with it, corresponding to a palette of nine colours recognised by the system.


5.3.2.3 Spatialisation

Spatialisation of the FOF synthesis outputs is controlled by the coordinates of detected motion (x, y) as determined by the video analysis functions. The spatialisation is performed using vbap221 for panning via configurable speaker arrays, with separate reverbs assigned to simulate distance. To date the work has been presented with a variety of speaker configurations, including a sixteen-speaker array arranged in a hemisphere, standard 5.1 surround and two-channel stereo. The outputs of the FOF synthesis can also be routed to long delay and feedback network effects and added to the final mix, generating background textures. The projected video is lightly processed using Jitter’s jit.brcosa object, controlling saturation, contrast and brightness.
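As a rough illustration of the spatialisation mapping, the sketch below pans a source over a square four-speaker layout from the normalised (x, y) motion centre, with a reverb send derived from distance to the centre of the image. This is a simplified equal-power stand-in, not the vbap algorithm used in the work, and the distance-to-reverb relationship is an assumption of the sketch.

import math

def quad_pan(x, y):
    """Map a normalised (x, y) position (0..1) to gains for a simple
    four-speaker square layout using equal-power panning on each axis."""
    right, left = math.sin(0.5 * math.pi * x), math.cos(0.5 * math.pi * x)
    front, rear = math.cos(0.5 * math.pi * y), math.sin(0.5 * math.pi * y)
    return {
        "front_left":  left * front,
        "front_right": right * front,
        "rear_left":   left * rear,
        "rear_right":  right * rear,
    }

def reverb_send(x, y):
    """Derive a reverb send level from distance to the centre of the
    image, as a crude proxy for distance simulation."""
    d = math.hypot(x - 0.5, y - 0.5) / math.hypot(0.5, 0.5)
    return min(1.0, d)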

5.3.3 Interacting with the System

Sonic Construction has a strong performative component, i.e. the theatre of dropping coloured dyes into water and hearing the sonic results that unfold. The video re-projected into the space, representing the camera’s viewpoint, is also engaging. In many performances I have found myself thinking from a visual perspective, ‘painting’ with the coloured dyes as much as ‘listening’ to the sonic environment being created. Interestingly though, this performative element does not rely on the archetypal physical performance gestures employed by Waisvisz’s The Hands, Sonami’s Lady’s Glove and traditional performance practice with acoustic instruments.

Many interactions with the system result in clearly connected sonic responses. For example, a performance typically begins with silence until the first drop enters the clear liquid. With the entry of the first drop, the first FOF grains are generated. Adding different colours will typically create a change in the timbral response of the system. A number of fast-moving drops added to the environment can often result in clearly connected descending frequencies or repeated bell-like sounds.

221 Ville Pulkki, Vector Base Amplitude Panning (vbap) http://www.acoustics.hut.fi/~ville/ viewed 1/5/2007.


Other interactions are less directly linked to the immediate sonic response yet still maintain a sense of interaction and connection. For example, an unfolding shape traced by the dye may create a complex sound that evolves over a much larger time frame. Another feature inherent in the system is that dye can only be added to the environment, not taken away, and similarly the timbral spaces explored by the system are created and evolve out of past events. Eventually the system becomes saturated, with new drops of dye no longer distinguishable from the murky solution.

Presentations of Sonic Construction have been well received, with the work being curated or programmed into a number of festivals, conferences and performance series including the Sonic Connections Festival, University of Wollongong 2004; the 1/4inch performance series, Sydney and Wollongong 2005; the Australasian Computer Music Conference, Brisbane 2005; and the Adelaide Festival 2006. Reviewing the 2006 Adelaide Festival performance in RealTime, Samara Mitchell writes of the work—

The result is a drug-free and neurologically intact synaesthesia. Experiencing the seamless choreography of movement, colour and macro-sound in Drummond’s work lends itself to that phenomenal sensation of universal is-ness or, as a friend of mine once described it, that feeling you get when you sense the “thingy-ness of things”222.

I had originally conceived of Sonic Construction as a performed installation, with the audience free to enter and leave as they wished, able to walk up to and around the interface. However, as revealed by the above list of public presentations, on the whole the work has been presented in the context of a more structured performance. The overall structure of Sonic Construction easily fits this paradigm, with a clear beginning—the first drop of dye; a natural end to the process—saturation of the liquid with dye; and a duration of approximately fifteen to twenty minutes.

222 http://www.realtimearts.net/article/issue72/8084, viewed 1/4/2005.


I found that a significant amount of familiarity and rehearsal with the system was required to present it effectively. Obviously this is true to some extent for any performance practice; however, Sonic Construction required significantly more expertise with the system to realise its sonic potentials than was the case with the previously discussed work Plus Minus (5.2). Furthermore, Sonic Construction proved difficult to control directly. The most successful performance strategy was to work with the material the system was creating at that moment in time, drawing on my own expertise and knowledge of the system’s behaviour, rather than trying to direct the system into specific timbral, pitch and rhythmic spaces based on a preconceived plan. Of course, as an interactive system, this lack of direct control over the system, and the sense of being directed by the system’s responses as much as directing the system itself, were among the aims I had set out to achieve.

5.3.4 Interaction Design

Sonic Construction achieves a balance in terms of a conversation model of interaction (4.3.6). The performer is not entirely in control, nor is the system entirely autonomous. If the performer does nothing, the system will come to a point of stasis or rest; conversely, the performer’s actions do not always produce entirely predictable sonic events, as they are mediated by the unpredictability of a natural system (water), yet the nature of the outcome is constrained by the design of the software and the mappings of gesture to sound synthesis parameters (4.4.4).

An appropriate metaphor for the system is Chadabe’s “sailing a boat on a windy day and through stormy seas” (4.3.6). A high degree of expertise with the system is required to perform Sonic Construction effectively. However, increasing expertise does not result in increased control of the system. Working with the system’s responses is a far more effective approach than trying to guide the system into a known state achieved in the past.


Similar to other video tracking interactive systems such as Krueger’s Videoplace (1.6.1.1), Rokeby’s Very Nervous System (1.6.1.3) and Paine’s Gestation223, Sonic Construction can be considered as a sonification of gesture, in this case not a performer’s physical movements but the movement of ink in liquid, the shapes or gestures clearly and dramatically visualised.

It is the association between specific visual events (e.g. the dropping of a single ink blob) and the resulting sonification–system responsiveness (4.3.5)–that enables the audience to build a relationship with the interactions at play in the work and comprehend the more complex mappings that exist over time. But unlike an instrument that is simply performed through a performer’s gestures, Sonic Construction has compositional potentials encoded in the system design. Through gesture the work unfolds and its moments of potentiality are discovered, through a conversation between performer and system.

The journey nature of Sonic Construction still challenged the audience, many of whom reported a desire for greater performer control and for more evidence of a clear or pre-defined musical structure. These responses are of interest because, although conditioned by the majority of prior musical experience, they also illuminate a principal challenge when developing an interactive system: the desire of an audience to be able to perceive virtuosic skill, and to understand the musical structure and form. An interactive system of this kind provides neither; rather, it presents the creative conversation of the making process, which, rather than taking place in private in advance of the performance, occurs in the moment of the performance.

223 http://www.activatedspace.com/Installations/Gestation/Gestation.html, viewed 1/4/2007.


5.4 Sounding the Winds

5.4.1 Overview

In developing Sounding the Winds I wanted to explore the concept of an interactive system that was less instrument-like. While Sonic Construction (5.3) exhibited conversation-like interactive qualities, it could also be considered in terms of a traditional instrument. Specific, repeatable gestures—colour, speed and location of the ink drops—could produce sonic timbral outcomes constrained within a broadly predictable set of possibilities. In addition, the system demanded a high level of expertise and familiarity to produce an effective ‘performance’, with increased practice time resulting in increased proficiency with the system.

After performing extensively with the complex system underlying Sonic Construction I wanted to develop an interactive system based on a more stable and manageable structural model, yet still maintaining in the system a sense of autonomous, complex and engaging behaviour. I was also interested in exploring less state-based structures, while still encoding in the system the possibility for change, growth and transformation. Furthermore, I wanted the new work to function more as a performed installation, rather than a performance for a seated audience. Taking inspiration from another type of fluid movement, the movement of air, Sounding the Winds endeavours to address these objectives.

Sounding the Winds uses the movement of a kite flying high in the sky, pulling against its flying line as it traces the eddies and currents created by the wind to generate and control electroacoustic sound. Sensors placed on the kite measure changes in its orientation, speed and tension, transmitting the data to a ground-based computer via Bluetooth224. Taking inspiration from the archetypal wind-played string instrument, the Aeolian harp, the underlying sound synthesis model used in Sounding the Winds is based on a physical model of a tensioned string, with the control data sent from the kite used to apply varying forces to the virtual string causing it to resonate and sound. This use of a virtual string to generate the sound also serves as

224 Bluetooth is a wireless short-range communications system. http://www.bluetooth.com viewed 1/5/2007.

an analogy to the very real string tethering the kite to the ground, itself under considerable tension and occasionally resonating audibly.

The work, by its nature, is presented outside the concert hall or gallery performance space, typically in an open parkland space, oval or headland (Figure 5.12). In this environment the audience is free to explore the installation and leave and return as they wish, with no fixed concept of a start or end to the work. The electroacoustic sound is intended to be projected in sympathy with the surrounding environment, working contextually with the existing ambient soundscape.

Figure 5.12 Sounding the Winds–Electrofringe 2005 performance King Edward Park, Newcastle

5.4.1.1 Other Wind Played Instruments

Instruments designed to be performed by the wind have an extensive history in both Western and other cultures. The Aeolian harp (Æolian harp or wind harp) is named after Aeolus, the ancient Greek god of the wind. In eighteenth century Europe, Aeolian harps were designed as household instruments, intended to be placed in an open window and thus ‘sing’ with the breezes225. Aeolian harps also form the basis of a number of more recent sound sculpture designs. Ros Bandt (2003:199) has

225 Poetically, in the eighteenth century the Aeolian harp was considered analogous to the optical prism, which split white light into its spectral colours. Similarly it was thought the Aeolian harp functioned as an ‘acoustic prism’ making audible the sounds normally hidden in the wind, a mediator between nature and ‘man’. Jones, W. (1781). Physiological Disquisitions; or, Discourses on the Natural Philosophy of the Elements. London: J. Rivington et al. pp.338–345. http://members.aol.com/woinem7/html/jones.htm viewed 1/5/2007.

installed Aeolian Harps at Red Cliffs (1988) and Lake Mungo (1992) in Australia. Alan Lamb, another Australian composer, has made recordings of long telegraph wires ‘singing’ in the wind226 and has created very long wire installations such as his Wogarno Wire Installation, 2001 (Bandt 2003:196).

Kites also have a history of being used as musical instruments and sound generating devices, a history that in some instances dates back thousands of years227. Chinese Nantong Kites228, 229 (Symphony on Air) lift into the air hundreds of whistles, large and small, made of gourds and bamboo. The Japanese Unari hums in the wind using narrow, ribbon-shaped strings230. The Cambodian Èk231, 232 uses a sounding bow and is traditionally flown in the evening and through the night.

5.4.2 System Architecture

5.4.2.1 Kite and Sensors

Sounding the Winds uses a single-line, nine-foot (2.7432 metres) Delta Conyne style kite. This is a hybrid kite design, consisting of a box-style Conyne kite with delta-shaped wings. The Delta Conyne design provides good lift, is stable and flies well in a large variety of winds, qualities well suited to the requirements of the system.

Installed on the kite are three different sensors233—
• Dual-axis accelerometer measuring acceleration of the kite in two orthogonal axes234
• Gyroscope measuring angular rotation of the kite235

226 Lamb, Alan. Primal Image, Archival Recordings 1981–1988. Dorobo Records 008, 1995.
227 Uli Wahl maintains a website documenting the use of kites as musical instruments – Kite Musical Instruments & Aeolian Musical Instruments http://members.aol.com/woinem1/index/index.htm viewed 1/5/2007.
228 http://www.kitelife.com/archives/issue51/daves1-51/index.htm viewed 1/5/2007.
229 http://www.windabove.com/pages/nantong%20whistle%20kite.html viewed 1/5/2007.
230 http://www.kitelife.com/archives/APRIL98/singkites.htm viewed 1/5/2007.
231 Also referred to as Khlèng-Phnorng or Khlèn-Èk.
232 http://subvision.net/sky/planetkite/asia/cambodia/camb_kite-museum.htm viewed 1/5/2007.
233 Sensors supplied by Electrotap – http://www.electrotap.com viewed 1/5/2007.
234 iMEMs (integrated Micro Electro Mechanical System) dual-axis accelerometer manufactured by Analog Devices, Inc. http://www.analog.com/en/cat/0,2878,764,00.html viewed 1/5/2007.
235 iMEMs (integrated Micro Electro Mechanical System) gyroscope manufactured by Analog Devices, Inc. http://www.analog.com/en/cat/0,2878,764,00.html viewed 1/5/2007.


• Flex sensor (bend sensor), measuring the amount of deflection of the kite material236.

A Micro CV Controller237 is used to convert the analogue control voltages output by the sensors to digital, transmitting the data to a ground-based computer via Bluetooth. The range of the Bluetooth signal is approximately one hundred metres238. Some experimentation was necessary to find the most effective way to attach the electronics to the kite. Through a process of trial and error, the sensors were mounted directly to the kite structure, while the Micro CV Controller and battery were suspended from the main flying line using a cross-shaped suspension and pulley system referred to as a Picavet239 (Figure 5.13).

Figure 5.13 Picavet detail

5.4.2.2 Virtual String with RMI/Modalys

The ground-based computer receives the sensor data as OSC messages (Figure 5.14). This data, sent from the kite, is used to generate sound by controlling a physical model of a tensioned string implemented using IRCAM’s240 RMI/Modalys241 objects for Max/MSP. RMI/Modalys provides a library of objects based on strings, plates,

236 Flexion Sensor manufactured by Abrams-Gentile Entertainment.
237 Angelo Fraietta http://www.smartcontroller.com.au/miniMidi/microCVController.html viewed 1/5/2007.
238 Using Class 1 Bluetooth (100mW, 20dBm).
239 In 1912 Pierre L. Picavet described this type of suspension system for lifting cameras for use in kite aerial photography (KAP). http://arch.ced.berkeley.edu/kap/equip/picavet.html viewed 1/5/2007.
240 IRCAM – Institut de Recherche et Coordination Acoustique/Musique (Institute for music/acoustic research and coordination) http://www.ircam.fr/ viewed 1/5/2007.
241 RMI/Modalys (Richard Dudas, Manuel Poletti, Carl Faia) is a library of Max/MSP objects derived from IRCAM’s Modalys (Jean-Marie Adrien, Joseph Morrison) http://forumnet.ircam.fr/355.html?&L=1 viewed 1/5/2007.

membranes and tubes for building virtual instruments. The instruments can be dynamically defined and continuously manipulated with respect to the type of material (metal, wood, diamond, etc.), size and method of interaction (strike, pluck, scrape). In creating a virtual instrument with RMI/Modalys, typically the virtual objects of the instrument are defined (tube, string, membrane, plate, plectrum, reed, hammer) together with access points on the objects where an action will take place. Connections between the objects are then defined, while controllers are used to describe how the parameters of the connection evolve over time.

Figure 5.14 Sounding the Winds (rehearsal)–Kite and ground based laptop receiving OSC data via Bluetooth

5.4.2.3 Mappings

Sounding the Winds uses the string model from the RMI/Modalys library. Data from the kite’s accelerometer is used to control an excitation signal, which is in turn applied as a force to the virtual string, causing it to sound (Figure 5.15). The

excitation signal is a noise source, filtered through the use of Max/MSP’s filtergraph~ and biquad~ objects. After smoothing and scaling, the two channels of accelerometer sensor data are mapped to control parameters of a lowpass filter applied to this excitation noise signal. The accelerometer data is used to control both the resonance and the cut-off frequency of the filter. Second-order analysis of the acceleration data is performed to determine the rate of change, and the results are applied to the gain control of the filter. Orientation data from the gyroscope is used to control a customised pan function. The tension of the kite sail (flex data) is averaged over time, scaled and applied to the playing position on the virtual string.

Figure 5.15 Sounding the Winds signal flow
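A sketch of the mapping logic just described is given below in Python. The class, the smoothing coefficients and the parameter ranges are illustrative assumptions standing in for the Max/MSP patch; only the overall shape of the mapping (accelerometer to cutoff, resonance and gain, gyroscope to pan, averaged flex to playing position) follows the text.

class KiteMapper:
    """Sketch of the sensor-to-synthesis mapping: smoothed acceleration
    drives the lowpass filter on the excitation noise, the rate of change
    of acceleration drives filter gain, the gyroscope drives panning, and
    time-averaged flex data sets the playing position on the string."""

    def __init__(self, smoothing=0.9):
        self.smoothing = smoothing
        self.prev_accel = 0.0
        self.smoothed = [0.0, 0.0]
        self.flex_avg = 0.0

    def smooth(self, index, value):
        # one-pole lowpass smoothing of raw sensor data (0..1 assumed)
        self.smoothed[index] += (1 - self.smoothing) * (value - self.smoothed[index])
        return self.smoothed[index]

    def update(self, accel_x, accel_y, gyro, flex):
        ax = self.smooth(0, accel_x)
        ay = self.smooth(1, accel_y)
        cutoff = 100.0 + ax * 4000.0                # Hz, illustrative range
        resonance = 0.1 + ay * 0.85
        rate_of_change = abs(ax - self.prev_accel)  # second-order analysis
        self.prev_accel = ax
        gain = min(1.0, rate_of_change * 20.0)
        pan = gyro                                  # orientation to pan position
        self.flex_avg += 0.01 * (flex - self.flex_avg)
        playing_position = 0.05 + self.flex_avg * 0.9  # along the virtual string
        return {"cutoff": cutoff, "resonance": resonance, "gain": gain,
                "pan": pan, "position": playing_position}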

Through these mappings the kite’s movements in response to the wind generate a performance on the virtual string. Large changes in the kite’s movement, for example

in strong gusty winds, result in strong resonances in the string. Similarly, gentler movements of the kite brush against the virtual string, causing it to sound softly. The different movements of the kite are also reflected in different timbral responses of the instrument, due to the mappings employed and the use of the underlying physical modelling synthesis system.

During a performance of Sounding the Winds the software facilitates the recording of sensor data from the kite. This recorded sensor data can be re-mapped to other control parameters of the virtual string at different temporal rates. In this way I am able to affect the system during a performance, applying envelope shapes created by the kite’s movement over an interval of a few minutes or less to control other parameters of the virtual string, such as tension, thickness and length, re-scaled and expanded to much longer time scales (for example, minutes to hours).
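The temporal re-scaling involved amounts to stretching a recorded control envelope over a longer duration, sketched below in Python; the sensor sampling rate and durations are illustrative assumptions only.

import numpy as np

def rescale_envelope(recorded, original_duration, new_duration, sr=20):
    """Stretch an envelope recorded from the kite's sensor data (sampled
    at sr Hz over original_duration seconds) so that it unfolds over
    new_duration seconds, e.g. expanding a few minutes of movement into
    an hour-long control curve for string tension or length."""
    n_out = int(new_duration * sr)
    old_t = np.linspace(0.0, original_duration, num=len(recorded))
    new_t = np.linspace(0.0, original_duration, num=n_out)
    return np.interp(new_t, old_t, recorded)

# e.g. two minutes of flex data expanded to a one-hour tension envelope
# tension_curve = rescale_envelope(flex_recording, 120.0, 3600.0)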

5.4.3 Interacting with the System

Sounding the Winds had its first public presentation at Electrofringe 2005242. The performance took place on a headland overlooking the ocean, using a four-channel speaker system and a lead-acid truck battery for power. A number of challenges were uncovered in presenting the work. The RMI/Modalys physical modelling library utilised a significant amount of processing on the 1.67GHz laptop used for the performance. As a consequence, there were restrictions on the potential for other processor-intensive sound processing functions or the realisation of polyphonic voices. The work is also subject to the vagaries of the weather, requiring no rain and suitable wind conditions for kite flying.

Sounding the Winds is based on a much simpler set of mappings than the previously discussed Sonic Construction (5.3) and consequently the behaviour of the system is considerably easier to predict. Unlike Sonic Construction, however, the system behaves more autonomously and generatively. In performance I interact with the software directly, with little control over the kite’s behaviour once it is in the air. In contrast, a large part of the performance aspect of Sonic Construction is the choice of

242 Electrofringe 2005 Newcastle, Australia.

dye colour and where and how fast the drops enter the system. The sound world of Sounding the Winds develops on a much slower and more gradual time frame, the often subtle changes in timbre evolving in response to the kite’s movement. Sounding the Winds continues to evolve, with future developments to include multiple kites, expanded sensor arrays incorporating wind speed (anemometer), pressure and microphones, and the possibility of projecting sound from the kite itself.

5.4.4 Interaction Design

Similar to Paine’s Reeds and MeteroSonics (1.6.1.7), Sounding the Winds translates environmental signals, mapping (4.4.4) the data sensed by the system to sound synthesis parameters exploiting one-to-one, one-to-many and many-to-one relationships. But as with Schiemer’s Spectral Dance (3.1.5), Sounding the Winds incorporates an interface offering some degree of input into the system, although a kite on the end of a string is somewhat less controllable than Schiemer’s swung Tupperware UFO. It’s a system in which I as the performer influence the musical outcome rather than control it directly.

Sounding the Winds can be considered as an example of Chadabe’s “gesture expander” (4.3.6) metaphor of interaction, with the movements of the kite—the gestures it describes—being interpreted as sound, the sonification created as the result of a complex mapping of the wind to synthesis parameters. But the system also exists within a definable range of probabilities and constraints, having been designed and programmed to respond to signals sensed from the kite, so that an identifiable compositional space is defined through interaction with both the environment and the performer.


5.5 Six Degrees of Tension

5.5.1 Overview

In developing this last work of the folio, I wanted to revisit many of the ideas explored in the first work presented—Book of Changes (5.1)—specifically including human performance on an acoustic instrument, in a way in which the system influences the performer as much as the performer influences the system. I also wanted to develop the idea of an interactive system as a conversation between computer and performer (4.3.6), explored throughout the works of the folio.

Six Degrees of Tension is an improvised work for two performers and interactive computer system, with one performer playing an acoustic instrument (guitar) and the other interacting with the computer directly (Figure 5.16). The computer system processes the acoustic signals from the acoustic guitar using granular signal processing and feedback effects. The second performer interacts with the computer system via interfaces mediated by a neural network. The neural network can be trained prior to and during a performance.

Figure 5.16 Six Degrees of Tension–system configuration


Instead of creating a work consisting of parallel parts—a duet for computer and acoustic guitar as per Rowe’s classifications of performer and interactive computer system (4.3.2)—the intention is for the work to be perceived as a unified sonic outcome, the result of interaction and collaboration between performers and computer system. The sonic outcomes of the work are sculpted as the result of a three-way conversation between instrumental performer, computer performer and the interactive system itself. Although improvised, the work exists within a defined compositional space: a structured improvisation or, perhaps more appropriately, Dean’s concept of a comprovisation (1.2, 2.4.3). Versions of the work have also been adapted for use with voice and flute.

5.5.2 System Architecture

5.5.2.1 Signal Flow

The signal flow of the system is outlined in Figure 5.17. Two microphones are used to capture the acoustic guitar signals, one specifically placed near the fretboard, allowing different timbres from the instrument to be captured. The acoustic guitar signals are sent separately to granular synthesis processors, feedback effects and a reverberation effect. A dry signal is also sent to the custom-made mixer interface (Figure 5.18).


Figure 5.17 Six Degrees of Tension–signal flow

Figure 5.18 Six Degrees of Tension–mixer interface


5.5.2.2 Effects Processors

The granular processing is implemented using the munger~ object from the PeRColate243 library (Figure 5.19). The munger~ parameters controlled by the system are grain separation, grain rate variation, grain size, grain size variation, grain pitch, stereo spread, grain direction (forwards, backwards, both) and number of voices.

Figure 5.19 Six Degrees of Tension–the munger~ patch

The feedback effects are generated using Pluggo’s244 Feedback Network plug-in.
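A minimal live-granulation sketch in the spirit of munger~ is given below; the parameter names echo the controls listed above, but the scheduling, the ranges and the crude resampling pitch shift are illustrative assumptions rather than the object’s actual implementation.

import random
import numpy as np

def granulate(live_input, sr=44100, grain_size=0.05, size_var=0.5,
              separation=0.02, pitch=1.0, n_voices=4, direction="forwards"):
    """Draw grains from recent input, window them, optionally reverse and
    pitch-shift them by resampling, then overlap-add the result."""
    out = np.zeros(len(live_input))
    hop = int(separation * sr)
    for voice in range(n_voices):
        pos = random.randint(0, hop)
        while pos < len(live_input):
            size = int(grain_size * sr * (1 + random.uniform(-size_var, size_var)))
            start = max(0, pos - size)               # grab from recent input
            grain = live_input[start:pos].copy()
            if len(grain) < 2:
                pos += hop
                continue
            if direction == "backwards" or (direction == "both" and random.random() < 0.5):
                grain = grain[::-1]                  # play grain in reverse
            idx = np.arange(0, len(grain) - 1, pitch)  # crude resampling pitch shift
            grain = np.interp(idx, np.arange(len(grain)), grain)
            grain *= np.hanning(len(grain))          # smooth grain envelope
            end = min(len(out), pos + len(grain))
            out[pos:end] += grain[:end - pos]
            pos += hop
    return out / n_voices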

5.5.2.3 Neural Network Interface

The parameters of the different effects processors and the software mixer are all controlled by a neural network implemented using the mlp245 object. Neural networks were developed as models of biological computation (Roads 1996:904). They attempt to mimic the processing model of an animal brain. A neural network consists of a large number of identical interconnected processing elements. Knowledge is represented by the connection strengths between the elements. In the context of Six Degrees of Tension the neural network has been trained to interpret four control values as system-wide parameter mappings (Figure 5.20).

243 PeRColate is a collection of synthesis, signal processing, and image processing objects for Max/MSP by Dan Trueman and R. Luke DuBois http://www.music.columbia.edu/PeRColate/ viewed 1/5/2007.
244 Pluggo is a collection of audio plug-ins supporting the Audio Unit and RTAS plug-in formats. http://www.cycling74.com/products/pluggo viewed 1/6/2007.
245 Matt Wright and Adrian Freed (2002) mlp (multi-layer perceptron) implements simple backpropagation learning and forward-pass. http://www.cnmat.berkeley.edu/MAX/neural-net.html viewed 1/6/2007.


Figure 5.20 Six Degrees of Tension–neural network interface

The four control values influence the density, frequency, texture and randomness of the system’s response. By using a neural network to achieve this high-level mapping control, I am able to influence the system’s response in both subtle and dramatic ways that would not otherwise be possible. The random parameter, as the name implies, introduces increasing amounts of randomness into the system’s responses.
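The role of the network can be sketched as a small multi-layer perceptron that maps the four high-level controls to one normalised value per controlled parameter. The layer sizes and the random weights below are arbitrary stand-ins for the trained mlp object; in the piece the weights are learned from example settings by backpropagation rather than set at random.

import numpy as np

class TinyMLP:
    """A minimal multi-layer perceptron: four high-level controls
    (density, frequency, texture, randomness) in, one value per
    effects/mixer parameter out."""

    def __init__(self, n_in=4, n_hidden=8, n_out=16, seed=1):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(scale=0.5, size=(n_hidden, n_out))
        self.b2 = np.zeros(n_out)

    def forward(self, controls):
        """Forward pass: controls is a length-4 array in 0..1; the output
        is one normalised value per controlled parameter."""
        h = np.tanh(controls @ self.w1 + self.b1)            # hidden layer
        return 1 / (1 + np.exp(-(h @ self.w2 + self.b2)))    # sigmoid outputs

net = TinyMLP()
density, frequency, texture, randomness = 0.7, 0.3, 0.5, 0.1
params = net.forward(np.array([density, frequency, texture, randomness]))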

5.5.3 Interacting with the System

Six Degrees of Tension was first performed at the 2006 Aurora Festival246. In contrast to the other works in the folio, Six Degrees of Tension is a system under the shared control of two performers and the interactive system itself. The system is equally influenced by both the acoustic performer’s choice of material to respond with and the computer performer’s manipulation of the interface controls. The complex signal processing chain under the high-level control of a neural network,

246 With John Encarnacao performing acoustic guitar. http://www.aurorafestival.com.au/2006archive/campbelltown.html viewed 1/5/2007.

itself trained with some degree of randomness, creates a system that is controllable while still able to respond unexpectedly. Through training the system in rehearsal, prior to performance, the system retains a sense of the same compositional identity while still maintaining variation between performances.

5.5.4 Interaction Design

Comparisons can be made to Cage’s Cartridge Music (2.1.1.2) and Stockhausen’s Mikrophonie I (2.1.2), in that the performers do not have specific control over how their inputs to the system will be interpreted and thus do not have direct control of the system as a whole. In Six Degrees of Tension, it is the combined efforts of instrumentalist and computer performer, in conjunction with the system’s responses, that sculpt the compositional shape of the work. The system defines a specific compositional space. Encoded in the software are predefined sonic potentials that define and constrain the possibilities of the piece. In a performance the performers navigate different paths through this set of potentials and in so doing create the sonic outcomes that define the work.

Six Degrees of Tension attempts to maintain a sense of equilibrium between predictability of response and serendipitous surprise. As revealed in Spiegel’s measure of predictability and repeatability versus randomness (4.3.4) and Winkler’s (4.3.5) concept of a sense of participation inspiring exploration, curiosity and mystery, it is the balance between the responsiveness of the system, a strong sense of connection between input and response, and the system’s ability to surprise and inspire that enables both audiences and performers to connect and engage with the interactive nature of the work.


Chapter Six: Conclusions

6.1 Overview

This thesis has explored the concept of creating music interactively with technology. As we have seen, the term interactive is applied widely in the field of new media arts, from systems exploiting relatively straightforward reactive mappings of input-to-sonification through to highly complex systems that are capable of learning and can behave in autonomous, organic and intuitive ways. Perhaps the most obvious usage of the term interactive is in the context of systems that are driven by audience participation. However, creating music interactively with technology can also be thought of in terms of interactive composition (4.2.1), intelligent instruments (2.3.1), collaborative environments (3.3) and conversational models (4.3.6).

In Chapter One I presented an overview of interactive art practice, briefly examining a large number of works, and classifying them in terms of either interactive installation (1.6.1) or interactive instruments and interactive performance systems (1.6.2). In Chapters Two and Three I presented a more in depth analysis of selected interactive works.

Chapter Two began with a discussion of key works of live electronic music from the 1950s and 1960s, with specific focus on works by Cage (2.1.1), Stockhausen (2.1.2), Tudor (2.1.3), Lucier (2.1.4) and Mumma (2.1.5). The first programmable instruments and interactive music software applications were then discussed, with specific reference to works by Martirano (2.2.1), Chadabe (2.2.2), Spiegel (2.3.1), Zicarelli (2.3.2) and Mathews (2.2.3). The chapter concluded with a discussion of interaction in the context of computer-networked performance ensembles, with specific reference to the work of the League of Automatic Music Composers (2.4.1), the Hub (2.4.2), austraLYSIS electroband (2.4.3) and HyperSense Complex (2.4.4).

Chapter Three continued this detailed investigation into interactive art practice starting with new instrument designs intended for use in performance contexts, analysing works and instruments by Waisvisz (3.1.1), Sonami (3.1.2), Hewitt (3.1.3), the trio Sensorband (3.1.4) and Schiemer (3.1.5). Interactive sound installations were then discussed, looking specifically at a recent collaborative project I programmed for Gibson and Richards (3.2.1). Interactive instruments designed for performance by non-musicians were then discussed in the context of works by Jordà (3.3.1), Patten and Recht (3.3.2), Iwai (3.3.3) and Blaine (3.3.4). The overview concluded with a discussion of interactive systems intended to behave like an independent, autonomous performer with reference to examples by Lewis (3.4.1), Rowe (3.4.2) and Weinberg (3.4.3).

In Chapter Four, I investigated the differing approaches that have been taken in attempting to define, classify and model interactive music systems. Specifically the ideas of Chadabe (4.2.1; 4.3.6), Rowe (4.2.2; 4.3.2; 4.4.1), Winkler (4.2.3; 4.3.3; 4.4.2), Bongers (4.4.3), and Spiegel (4.3.4) were presented and discussed. A general model of interactive systems was then defined with specific reference to system architecture (4.4), control and feedback (4.4.3), mapping (4.4.4), gestural input and processing (4.4.5; 4.4.6).

Chapter Five presented the folio of creative works. Five works were discussed—Book of Changes (5.1), Plus Minus (5.2), Sonic Construction (5.3), Sounding the Winds (5.4) and Six Degrees of Tension (5.5). The folio explored the concept of interactive music systems as a conversation between human and computer system, each able to influence the other equally, the sonic outcomes a result of the combined interactions rather than a performer controlling an interactive instrument, or playing with an intelligent computer performer. The folio also explored the notion of the responsiveness of an interactive system, the balance between cogent connection of input and sonic response, and the possibility of surprising, unexpected outcomes.

6.2 Definition of an Interactive System

There is surprisingly little consensus on a coherent definition for interactive systems. In the wider context of new media and digital arts, the term interaction is applied broadly, encompassing a range of possibilities from sensor-based triggering of pre-rendered sound files to complex generative systems responding to gestural input.


Focusing specifically on interactive music systems, it is the Rowe (1993) and Winkler (1998) definitions of an interactive system responding musically to a performer’s input that are extensively cited (4.2.2; 4.2.3). Chadabe (1984; 1997) presents one of the few alternate views, one in which an interactive system functions as a shared composition process in which the computer influences the composer as much as the composer influences the system (4.2.1). Paine (2002b; 2002a; 2007) and Jordà (2005) have presented similar definitions (4.3.6).

I consider that interactive systems enable compositional structures to be realised through performance and improvisation, with the composition encoded in the system as processes and algorithms, mappings and synthesis routines. In this way all aspects of the composition—pitch, rhythm, timbre, form—have the potential to be derived through an integrated and coherent process, realised through interacting with the system. The performance becomes an act of selecting potentials and responding to evolving relationships. The process of composition then becomes distributed between the decisions made during system development and those made in the moment of the performance. There is no pre-ordained work, simply a process of creation, shared with the public in performance.

6.3 Interactive System Models

Surprisingly, given the wide variety of aesthetic styles, interpretations and manifestations of interactive systems that exist, a coherent and widely applicable model can be defined consisting of three stages (4.4.1)—sensing, processing and response (Bongers 2000; Rowe 1993; Winkler 1998). Interconnecting these stages are the mappings of the system (4.4.4). The type, nature, flexibility and degrees of freedom of the sensing employed by the system have a considerable influence on its behaviour. Likewise, the nature and type of mappings (Cadoz and Wanderley 2000; Hunt and Kirk 2000) created from sensor to processing and from processing to sound generation also have considerable influence on a participant’s experience of the system’s interface and the perceived behaviour of the system.


The processing stage can incorporate any number and type of algorithms. Typical processing functions include—
• Sensor data signal conditioning–smoothing, scaling, interpolating, re-mapping/translating (5.3; 5.4)
• Second-order sensor data processing to calculate velocity, acceleration, energy, threshold detection (5.4).
Other, more specialised processing functions include—
• Stochastic processing (5.2.2.4)
• Neural Networks (5.5.2.3)
• Artificial Life-inspired algorithms–swarms, flocks, population models, Genetic Algorithms (1.6.1.4; 1.6.1.8; 3.2.1)
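A minimal sketch of the first of these functions, sensor data conditioning followed by a simple second-order rate-of-change estimate, is given below in Python; the ranges and coefficients are illustrative assumptions rather than values used in any particular work.

def condition(raw, history, smoothing=0.8, in_range=(0.0, 1023.0),
              out_range=(0.0, 1.0)):
    """Typical first-stage sensor conditioning: exponential smoothing
    followed by scaling from the sensor's raw range to a normalised
    control range. history holds the previous smoothed value."""
    smoothed = smoothing * history + (1 - smoothing) * raw
    lo, hi = in_range
    a, b = out_range
    scaled = a + (b - a) * (smoothed - lo) / (hi - lo)
    return scaled, smoothed

def rate_of_change(values, dt):
    """Second-order processing: estimate velocity from successive
    conditioned samples, useful for energy and threshold detection."""
    return [(values[i] - values[i - 1]) / dt for i in range(1, len(values))]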

6.4 Classification

While no single classification dimension can successfully or usefully encompass the breadth of artistic practice that can be identified as interactive, we can consider interactive systems with respect to a number of dimensional and interrelated parameters, specifically: process (3.1.5)—an unfolding dynamic morphology of sound—versus event-based (3.1.1); instrument (3.1.3) versus composition (2.2.2.2) model; evolving internal state (3.3.1) or pre-configured (1.6.2.2.2).

6.5 Challenges

The nature of interactive systems also presents a number of inherent performance challenges. Performance outcomes may not be the same given the same input conditions. The response of a system functioning with a significant degree of autonomy and independence may be difficult to control, and its results likewise difficult to consistently anticipate. Antecedent, consequent, cadence structures are no longer viable or relevant in such contexts. However, many aspects of instrumental performance work on the edge of failure with exciting results, and interactive systems share this condition. Another challenge facing the composer of interactive systems is that each new interactive composition is effectively a new instrument, requiring

significant exploration, exposure and experience with the system to fully understand and realise its sonic potentials.

A balance between the system, control of sonic outcomes and ‘musicality’ in terms of compositional decision making, timbre space, rhythm, pitch classes and form remains a critical question. An interactive system may produce a range of complex, sophisticated outcomes, which the composer/performer then constrains to meet broad musical goals. Such a dialogue is illustrated in the works Sonic Construction (5.3), Sounding the Winds (5.4) and Six Degrees of Tension (5.5), and forms a critical juncture in the research questions explored in this thesis. The interactive system becomes a source of material where control may fluctuate between system, input and interface, each of which has been explored herein.


Bibliography

Adams, J. (1999). Giant Oscillations. http://www.emf.org/tudor/Articles/jdsa_giant.html. Viewed 1/12/2006.

Aleksander, I. (2000). How to Build a Mind. London: Weidenfeld & Nicolson.

Baginsky, N. A. (2005). The Three Sirens: A Self-Learning Robotic Rock Band. http://www.the-three-sirens.info. Viewed 1/12/06.

Bandt, R. (1985). Sounds in Space, Wind Chimes and Sound Sculptures. Camberwell, Melbourne: Victorian Arts Council, Council of Adult Education.

——(2001). Sound Sculpture, Intersections in Sound and Sculpture in Australian Artworks. Sydney: Craftsman House.

——(2003). Taming the Wind, Aeolian Sound Practices in Australasia, Organised Sound, 8 (2), 195–204.

Battier, M. (1999). Aesthetics of Live Electronic Music. Reading, England: Harwood Academic Publishers.

Battier, M. and Wanderley, M. (2000). Trends in Gestural Control of Music. Paris: IRCAM-Centre Pompidou.

——(2005). The Metasurface–Applying Natural Neighbour Interpolation to Two-to-Many Mapping, In Proceedings of the 2005 International Conference on New Interfaces for Musical Expression (NIME–05). Vancouver, BC, Canada.

Berry, R., et al. (2004). Authoring Augmented Reality: A Code-Free Approach, In Proceedings of the 2004 SIGGRAPH 2004 (Posters). Los Angeles: ACM SIGGRAPH.

——(2006). Programming in the World, Digital Creativity: Special Issue on Programming and Creativity, 17 (1), 36–48.


——(2004). The Bush Telegraph: Networked Cooperative Music-Making. Lecture Notes in Computer Science, 120–123.

——(2006). Tunes on the Table, Multimedia Systems, 11 (3), 280–289.

——(2006). An Interface Test-Bed for ‘Kansei’ Filters Using the Touch Designer Visual Programming Environment, In Proceedings of the 2006 Australasian User Interface Conference (AUIC2005). Hobart, Australia: Conferences in Research and Practice in Information Technology (CRPIT), 173–176.

——(2001). Unfinished Symphonies–Songs of 3½ Worlds, In Proceedings of the 2001 European Conference on Artificial Life (ECAL 2001). Prague, 51–64.

Blaine, T. (2006). New Music for the Masses. Adobe Design Center, Think Tank Online. http://www.adobe.com/designcenter/thinktank/ttap_music. Viewed 6/12/2006.

Blaine, T. and Forlines, C. (2002). Jam-O-World: Evolution of the Jam-O-Drum Multi-Player Musical Controller into the Jam-O-Whirl Gaming Interface, In Proceedings of the 2002 New Instruments for Musical Expression (NIME–02). Dublin, Ireland.

Blaine, T. and Perkis, T. (2000). Jam-O-Drum, a Study in Interaction Design, In Proceedings of the 2000 Association for Computing Machinery Conference on Designing Interactive Systems (ACM DIS 2000). NY: ACM Press.

Bongers, B. (1998). An Interview with Sensorband, Computer Music Journal, 22 (1), 13–24.

——(1999). Exploring Novel Ways of Interaction in Musical Performance, In Proceedings of the 1999 Creativity & Cognition Conference. Loughborough, UK, 76–81.

——(2000). Physical Interfaces in the Electronic Arts–Interaction Theory and Interfacing Techniques for Real-Time Performance. In M. M. Wanderley and M. Battier (eds.), Trends in Gestural Control of Music. Paris: IRCAM-Centre Pompidou.


Boulanger, R. C. (2000). The Csound Book: Perspectives in Software Synthesis, Sound Design, Signal Processing, and Programming. Cambridge, MA: The MIT Press.

Brazier, M. A. B. (1970). The Electrical Activity of the Nervous System. London: Pitman.

Brown, C. and Bischoff, J. (2002). Indigenous to the Net: Early Network Music Bands in the San Francisco Bay Area. http://crossfade.walkerart.org/brownbischoff/IndigenoustotheNetPrint.html. Viewed 1/12/2006.

Burns, C. (2001). Realizing Lucier and Stockhausen: Case Studies in the Performance Practice of Electroacoustic Music, In Proceedings of the 2001 International Computer Music Conference (ICMC 2001). San Francisco: International Computer Music Association, 40–43.

Burt, W. (1999). The Spectacle and Computer Music: A Critical Assessment, In Proceedings of the 1999 Australasian Computer Music Conference. Victoria University, New Zealand: Australasian Computer Music Association.

Cadoz, C. (1988). Instrumental Gesture and Musical Composition, In Proceedings of the 1988 International Computer Music Conference (ICMC1988). GMIMIK, Cologne, Germany: International Computer Music Association, 1–12.

Cadoz, C. and Wanderley, M. (2000). Gesture–Music. In M. M. Wanderley and M. Battier (eds.), Trends in Gestural Control of Music. Paris: IRCAM-Centre Pompidou.

Cage, J. (1960). Cartridge Music. Henmar Press.

——(1960). Imaginary Landscape No. 4. Henmar Press.

——(1968). Silence: Lectures and Writings. London: Calder and Boyars.

Cascone, K. (2000). The Aesthetics of Failure: Post-Digital Tendencies in Contemporary Computer Music, Computer Music Journal, 24 (4), 12–18.


——(2003). The Laptop and Electronic Music: Shapeshifting Tool or Musical Instrument? Philadelphia, Pa.: Routledge.

Chadabe, J. (1975). The Voltage-Controlled Synthesizer. In John Appleton (ed.), The Development and Practice of Electronic Music. New Jersey: Prentice Hall.

——(1977). Some Reflections on the Nature of the Landscape within Which Computer Music Systems Are Designed, Computer Music Journal, 1 (3), 5-11.

——(1984). Interactive Composing: An Overview, Computer Music Journal, 8 (1), 22–28.

——(1991). About M, Contemporary Music Review, 6 (1), 143–146.

——(1992). Flying through a Musical Space: About Real-Time Composition. In J. Paynter, et al. (eds.), Companion to Contemporary Musical Thought, 1. London: Routledge.

——(1997). Electric Sound: The Past and Promise of Electronic Music. Upper Saddle River, New Jersey: Prentice Hall.

——(2000). Remarks on Computer Music Culture, Computer Music Journal, 24 (4), 9–11.

——(2001). Preserving Performances of Electronic Music, Journal of New Music Research, 30 (4), 303–305.

——(2002). The Limitations of Mapping as a Structural Descriptive in Electronic Instruments, In Proceedings of the 2002 New Interfaces For Musical Expression. Dublin.

Chapel, R. n. H. (2003). Realtime Algorithmic Music Systems from Fractals and Chaotic Functions: Toward an Active Musical Instrument. Universitat Pompeu Fabra Barcelona.

Clarke, M. and Rodet, X. (2003). Real-Time FOF and FOG Synthesis in MSP and Its Integration with PSOLA, In Proceedings of the 2003 International Computer Music Conference (ICMC2003). Singapore.


Clemen, H. (2002). Enhancing the Experience of Music-Ritual through Gesturally- Controlled Interactive Technology, In Proceedings of the 2002 Australasian Computer Music Conference. Royal Melbourne Institute of Technology: Australasian Computer Music Association, 133–141.

——(2003). Playing from Within: Some Ideas Regarding the Computer-Based Interactive Installation Space, Sounds Unlimited: building the instruments: Sounds Australian - Journal of the Australian Music Centre, 62, 14–17.

——(2003). Interfaces for Public Use Interactive Installations: Some Design Concepts, Problems and Possible Solutions, In Proceedings of the 2003 Australasian Computer Music Conference. Western Australian Academy of Performing Arts, Edith Cowan University: Australasian Computer Music Association, 27–35.

Cook, P. R. (1999). Music, Cognition, and Computerized Sound : An Introduction to Psychoacoustics. Cambridge, MA: The MIT Press.


Cope, D. and Hofstadter, D. R. (2001). Virtual Music : Computer Synthesis of Musical Style. Cambridge, MA: The MIT Press.

Cowell, H. (1996). New Musical Resources. Cambridge University Press.

Dannenberg, R. B., et al. (2005). McBlare–A Robotic Bagpipe Player, In Proceedings of the 2005 International Conference on New Interfaces for Musical Expression (NIME–05). Vancouver, Canada, 80–84.

Dannenberg, R. B. (1984). An On-Line Algorithm for Real-Time Accompaniment, In Proceedings of the 1984 International Computer Music Conference (ICMC–84). Paris: International Computer Music Association, 193–198.

Dannenberg, R. B. and Mukaino, H. (1988). New Techniques for Enhanced Quality of Computer Accompaniment, In Proceedings of the 1988 International Computer Music Conference (ICMC–88). Cologne, Germany: International Computer Music Association, 243–249.


Darreg, I. (1982). A Case for Nineteen, Interval, Journal of Music Research and Development, 3 (4), 7–8, 17.

Davidson, J. W. (1993). Visual Perception of Performance Manner in the Movements of Solo Musicians, Psychology of Music, 21, 103-113.

Dean, R. T. (2003). Hyperimprovisation: Computer-Interactive Sound Improvisations. Middleton, Wis: A-R Editions.

Delalande, F. (1988). La Gestique De Gould. In Ghyslaine Guertin (ed.), Glenn Gould Pluriel. Verdun, Quebec: Louise Courteau Editrice, 85–111.

Dinkla, S. (1994). The History of the Interface in Interactive Art, In Proceedings of the 1994 International Symposium on Electronic Art (ISEA). Helsinki, Finland.

——(1997). Pioniere Interaktiver Kunst Von 1970 Bis Heute. Ostfildern: Hatje Cantz Verlag.

Dodge, C. and Jerse, T. A. (1997). Computer Music : Synthesis, Composition, and Performance. 2nd edn., New York London: Schirmer Books Prentice Hall International.

Doornbusch, P. (2002). The Application of Mapping in Composition and Design, In Proceedings of the 2002 Australasian Computer Music Conference. Royal Melbourne Institute of Technology: Australasian Computer Music Association, 35–42.

——(2003). Instruments from Now into the Future: The Disembodied Voice, Sounds Australian - Journal of the Australian Music Centre, 62, 18.

Dorin, A. (2004). The Virtual Ecosystem as Generative Electronic Art, In Proceedings of the 2004 2nd European Workshop on Evolutionary Music and Art, Applications of Evolutionary Computing: EvoWorkshops 2004. Coimbra, Portugal: Springer-Verlag Heidelberg, 467–476.

Driscoll, J. and Rogalsky, M. (2004). David Tudor’s Rainforest: An Evolving Exploration of Resonance, Leonardo Music Journal, 14, 25–30.


Duckworth, W. (2005). Virtual Music : How the Web Got Wired for Sound. 1st edn., New York: Routledge.

Eckel, G., Rocha Iturbide, M., and Becker, B. (1995). The Development of GiST, a Granular Synthesis Toolkit Based on an Extension of the FOF Generator, In Proceedings of the 1995 International Computer Music Conference (ICMC1995). San Francisco: International Computer Music Association.

Emmerson, S. (1986). The Language of Electroacoustic Music. New York: Harwood Academic.

——(1994). Timbre Composition in Electroacoustic Music. London: Harwood Academic Publishers.

——(1994). ‘Live’ Versus ‘Real-Time’. Contemporary Music Review, 10, pt 2. London: Harwood Academic Publishers, 95–101.

——(1996). Local/Field: Towards a Typology of Live Electronic Music, Journal of Electroacoustic Music, Vol.9 (January 1996), 10–12.

——(2000). Music, Electronic Media, and Culture. Aldershot: Ashgate.

Feldman, M. and Friedman, B. H. (2000). Give My Regards to Eighth Street: Collected Writings of Morton Feldman. Cambridge, Mass.: Exact Change.

Gehlhaar, R. (1991). Sound=Space, the Interactive Musical Environment, Contemporary Music Review, 6 (1).

Grand, S. (2000). Creation Life and How to Make It. London: Weidenfeld & Nicolson.

Harris, C. R. (1996). Computer Music in Context. Reading, UK: Harwood Academic Publishers.

Heifetz, R. J. (1989). On the Wires of Our Nerves: The Art of Electroacoustic Music. Lewisburg, London, Cranbury, NJ: Bucknell University Press, Associated University Presses.


Hewitt, D. (2003). Emic–Compositional Experiments and Real-Time Mapping Issues in Performance, In Proceedings of the 2003 Australasian Computer Music Conference. Edith Cowan University, Perth, Western Australia: Australasian Computer Music Association.

Hewitt, D. and Stevenson, I. (2003). E-Mic–Extended Mic-Stand Interface Controller, In Proceedings of the 2003 International Conference on New Musical Interfaces for Music Expression (NIME–03). Montreal, 122–128.

Hofstadter, D. R. (2000). Gödel, Escher, Bach: An Eternal Golden Braid. 20th Anniversary Edition. Penguin Books.

Hunt, A. and Kirk, R. (2000). Mapping Strategies for Musical Performance. In M. M. Wanderley and M. Battier (eds.), Trends in Gestural Control of Music. Paris: IRCAM-Centre Pompidou.

Jones, W. (1781). Physiological Disquisitions; or, Discourses on the Natural Philosophy of the Elements. London: J. Rivington et al.

Jordà, S. (2004). Digital Instruments and Players: Part II–Diversity, Freedom and Control, In Proceedings of the 2004 International Computer Music Conference. San Francisco: International Computer Music Association, 706–709.

——(2004). Instruments and Players: Some Thoughts on Digital Lutherie, Journal of New Music Research, 33 (3), 321–341.

——(2005). Digital Lutherie: Crafting Musical Computers for New Musics’ Performance and Improvisation. Ph.D. diss. Universitat Pompeu Fabra, Barcelona.

Jordà, S., et al. (2005). The Reactable*, In Proceedings of the 2005 International Computer Music Conference (ICMC). Barcelona, Spain: International Computer Music Association.

Kaltenbrunner, M., et al. (2006). The Reactable*: A Collaborative Musical Instrument, In Proceedings of the 2006 Workshop on “Tangible Interaction in Collaborative Environments” (TICE), at the 15th International IEEE Workshops on Enabling Technologies (WETICE 2006). Manchester, U.K.


Kim-Boyle, D. R. (2004). International Computer Music Conference 2003: Boundaryless Music (Review), Computer Music Journal, 28 (2), 77–80.

Kvifte, T. and Jensenius, A. R. (2006). Towards a Coherent Terminology and Model of Instrument Description and Design In Proceedings of the 2006 International Conference on New Interfaces for Musical Expression (NIME–06). Paris, France

Lazzetta, F. (2000). Meaning in Musical Gesture. In M. M. Wanderley and M. Battier (eds.), Trends in Gestural Control of Music. Paris: Ircam–Centre Pompidou.

Lederman, S. J. and Klatzky, R. L. (1987). Hand Movements: A Window into Haptic Object Recognition, Cognitive Psychology, 19, 342–368.

Lewis, G. E. (1999). Interacting with Latter-Day Musical Automata, Contemporary music review, Aesthetics of live electronic music, 18 (3), 99–112.

——(2000). Too Many Notes: Computers, Complexity and Culture in Voyager, Leonardo Music Journal, 10, 33–39.

Lippe, C. (1993). A Composition for Clarinet and Real-Time Signal Processing: Using Max on the Ircam Signal Processing Workstation, In Proceedings of the 1993 10th Italian Colloquium on Computer Music. Milan, 428–432.

Lucier, A. (1995). Reflections, Interviews, Scores, Writings. Koln: MusikTexte.

Machover, T. (1992). Hyperinstruments–a Composer’s Approach to the Evolution of Intelligent Musical Instruments. In L. Jacobson (ed.), Cyberarts: Exploring Arts and Technology. San Francisco: MillerFreeman Inc., 67–76.

——(1997). “Classic” Hyperinstruments 1986–1992: A Composer’s Approach to the Evolution of Intelligent Musical Instruments. http://brainop.media.mit.edu/Archive/Hyperinstruments/classichyper.html. Viewed 1/12/2006.

Machover, T. and Chung, J. (1989). Hyperinstruments: Musically Intelligent and Interactive Performance and Creativity Systems, In Proceedings of the 1989 International Computer Music Conference (ICMC89). San Francisco: International Computer Music Association, 186–187.

Manning, P. (2004). Electronic and Computer Music. Rev. and expanded edn., New York: Oxford University Press.

McNeill, D. (1992). Hand and Mind: What Gestures Reveal About Thought. Chicago, USA: University of Chicago Press.

Minsky, M. (1988). The Society of Mind. New York: Simon & Schuster.

Miranda, E. R. (1998). Computer Sound Synthesis for the Electronic Musician. Oxford; Boston: Focal Press.

——(2000). Readings in Music and Artificial Intelligence. Amsterdam, Netherlands: Harwood Academic Publishers.

——(2001). Composing Music with Computers. 1st edn., Oxford; Boston: Focal Press.

——(2003). Evolutionary Music: At the Crossroads of Evolutionary Computing and Musicology. Philadelphia, Pa.: Taylor & Francis.

Miranda, E. R. and Wanderley, M. (2006). New Digital Musical Instruments: Control and Interaction Beyond the Keyboard. Middleton, Wis.: A-R Editions.

Momeni, A. and Wessel, D. (2003). Characterizing and Controlling Musical Material Intuitively with Geometric Models, In Proceedings of the 2003 International Conference on New Interfaces for Musical Expression (NIME–03). McGill University, Montreal, Canada.

Moser, M. A. and MacLeod, D. (1995). Immersed in Technology: Art and Virtual Environments. Cambridge, MA: The MIT Press.

Mulder, A. (1996). Hand Gestures for HCI. Hand Centered Studies of Human Movement Project, Technical Report 96-1. http://xspasm.com/x/sfu/vmi/HCI-gestures.htm. Viewed 1/12/2006.

Mulder, A. and Post, M. (2000). Book for the Electronic Arts. Amsterdam: De Balie.

Mumma, G. (1967). Creative Aspects of Live Electronic Music Technology. www.brainwashed.com/mumma/creative.html. Viewed 2/11/06.

——(2005). Gordon Mumma. www.newworldrecords.org/uploads/filexbmFI.pdf. Viewed 2/11/06.

Nattiez, J. J., Boulez, P., and Cage, J. (1993). The Boulez-Cage Correspondence. Cambridge, England: Cambridge University Press.

Nelson, P. and Montague, S. (1991). Live Electronics. Chur, Switzerland: Harwood Academic.

Nikanne, U. and Zee, E. v. d. (2000). Cognitive Interfaces: Constraints on Linking Cognitive Information. Oxford: Oxford University Press.

Norman, D. A. (1988). The Psychology of Everyday Things. New York: Basic Books.

Norman, K. (2004). Sounding Art: Eight Literary Excursions through Electronic Music. Aldershot, Hants, England; Burlington, VT: Ashgate.

Nyman, M. (1999). Experimental Music: Cage and Beyond. 2nd edn., Cambridge, New York: Cambridge University Press.

Orio, N., Lemouton, S., and Schwarz, D. (2003). Score Following: State of the Art and New Developments, In Proceedings of the 2003 Conference on New Interfaces for Musical Expression (NIME–03). Montreal.

Packer, R. and Jordan, K. (2002). Multimedia: From Wagner to Virtual Reality. New York: W. W. Norton & Company.

Paine, G. (2001). Interactive Sound Works in Public Exhibition Spaces: An Artist’s Perspective, In Proceedings of the 2001 Australasian Computer Music Conference. University of Western Sydney: Australasian Computer Music Association, 67–73.

——(2002a). The Study of Interaction between Human Movement and Unencumbered Immersive Environments. Ph.D. diss. RMIT University, Melbourne.

——(2002b). Interactivity, Where to from Here?, Organised Sound, 7 (3), 295–304.

——(2003). Reeds: A Responsive Environmental Sound Installation, Organised Sound, 8 (2), 139–149.

——(2007). Sonic Immersion: Interactive Engagement in Real-Time Immersive Environments, Scan: Journal of Media Arts Culture, 4 (1).

Paradiso, J. (1997). Electronic Music Interfaces: New Ways to Play, IEEE Spectrum, 34 (12), 18–30.

Patten, J., Recht, B., and Ishii, H. (2002). Audiopad: A Tag-Based Interface for Musical Performance, In Proceedings of the 2002 International Conference on New Interfaces for Musical Expression (NIME–02). Dublin, Ireland.

Perloff, N. (2001). The Art of David Tudor. http://www.getty.edu/research/conducting_research/digitized_collections/davidtudor/av.html. Viewed 1/12/2006.

Pressing, J. (1991). Synthesizer Performance and Real-Time Techniques. Madison, WI, USA: A-R Editions.

Puckette, M. (1994). Is There Life after MIDI?, In Proceedings of the 1994 International Computer Music Conference. DIEM, Danish Institute of Electroacoustic Music, Denmark: International Computer Music Association, 2.

——(1996). Pure Data: Another Integrated Computer Music Environment, In Proceedings of the 1996 Second Intercollege Computer Music Concerts. Tachikawa, Japan, 37–41.

——(2002). Max at Seventeen, Computer Music Journal, 26 (4), 31–43.

Puckette, M., Apel, T., and Zicarelli, D. (1998). Real-Time Audio Analysis Tools for Pd and MSP, In Proceedings of the 1998 International Computer Music Conference (ICMC98). Ann Arbor, Michigan, 109–112.

Puckette, M. and Lippe, C. (1992). Score Following in Practice, In Proceedings of the 1992 International Computer Music Conference (ICMC). San Francisco: International Computer Music Association, 182–185.

Puckette, M. and Zicarelli, D. (1990). Max–An Interactive Graphical Programming Environment. Menlo Park, CA.

Ramstein, C. (1991). Analyse, représentation et traitement du geste instrumental [Analysis, Representation and Processing of Instrumental Gesture]. Ph.D. diss. Institut National Polytechnique de Grenoble, Grenoble.

Revill, D. (1992). The Roaring Silence: John Cage: A Life. London: Bloomsbury.

Richards, K. (2006). Report: Life after Wartime: A Suite of Multimedia Artworks, Canadian Journal of Communication, 31 (2).

——(2006). ‘Bystander’–a Responsive, Immersive ‘Spirit World’ Environment for Multiple Users, In Proceedings of the 2006 Responsive Architectures: Subtle Technologies. University of Toronto, Toronto, Canada.

Riddell, A. (2005). HyperSense Complex: An Interactive Ensemble, In Proceedings of the 2005 Australasian Computer Music Conference. Queensland University of Technology, Brisbane: Australasian Computer Music Association, 123–127.

Roads, C. (1985). Composers and the Computer. Los Altos, CA: William Kaufmann.

——(ed.) (1989). The Music Machine: Selected Readings from Computer Music Journal. Cambridge, MA: The MIT Press.

——(1996). The Computer Music Tutorial. Cambridge, MA: The MIT Press.

Rodet, X. (1984). Time-Domain Formant-Wave-Function Synthesis, Computer Music Journal, 8 (3), 9–14.

——(2000). Sound Analysis, Processing and Synthesis Tools for Music Research and Production, In Proceedings of the 2000 13th Colloquium on Musical Informatics (CIM 2000). L’Aquila, Italy.

Rowe, R. (1993). Interactive Music Systems: Machine Listening and Composing. Cambridge, MA: The MIT Press.

——(1996). Incrementally Improving Interactive Music Systems, Contemporary Music Review: Computer Music in Context, 13 (2), 47–62.

——(2001). Machine Musicianship. Cambridge, MA: The MIT Press.

Russolo, L. (1986). The Art of Noises. New York: Pendragon Press.

Schaffer, J. W. and McGee, D. (1997). Knowledge-Based Programming for Music Research. Madison, Wis.: A-R Editions.

Scheirer, E. D. (1998). Tempo and Beat Analysis of Acoustic Musical Signals, Journal of the Acoustical Society of America, 103 (1), 588–601.

Schiemer, G. (1995). Interactive Algorithms for Live Performance with a Dedicated Audio Signal Processor, In Proceedings of the 1995 Australasian Computer Music Conference. University of Melbourne: Australasian Computer Music Association, 6–16.

——(1998). MIDI Tool Box: An Interactive System for Music Composition. Ph.D. diss. Macquarie University, Sydney.

——(1999). Improvising Machines: Spectral Dance and Token Objects, Leonardo Music Journal, 9 (1), 107–114.

——(2000). Boolean Logic as a Harmonic Filter, In Proceedings of the 2000 Australasian Computer Music Conference. Queensland University of Technology: Australasian Computer Music Association.

Schiemer, G. and Havryliv, M. (2005). Pocket Gamelan: A Pure Data Interface for Java Phones, In Proceedings of the 2005 International Conference on New Interfaces for Musical Expression (NIME–05). University of British Columbia, Vancouver, 156–159.

Schottstaedt, B. (1983). Pla: A Composer’s Idea of a Language, Computer Music Journal, 7 (1), 11–20.

Singer, E., et al. (2004). LEMUR’s Musical Robots, In Proceedings of the 2004 International Conference on New Interfaces for Musical Expression (NIME–04). Shizuoka University of Art and Culture, Hamamatsu, Japan, 181–184.

Singer, E., Larke, K., and Bianciardi, D. (2003). LEMUR GuitarBot: MIDI Robotic String Instrument, In Proceedings of the 2003 Conference on New Interfaces for Musical Expression (NIME–03). Montreal, 188–191.

Smalley, D. (1986). Spectro-Morphology and Structuring Processes. In S. Emmerson (ed.), The Language of Electroacoustic Music. New York: Harwood Academic.

——(1997). Spectromorphology: Explaining Sound-Shapes, Organised Sound, 2 (2).

Sony (2003). Qrio Conductor Robot. http://www.sony.net/SonyInfo/QRIO/works/20040325e.html Viewed 1/12/06.

Sorensen, A. (2005). Impromptu: An Interactive Programming Environment, In Proceedings of the 2005 Australasian Computer Music Conference. Queensland University of Technology, Brisbane: Australasian Computer Music Association, 149–153.

Spiegel, L. (1987). Operating Manual for Music Mouse: An Intelligent Instrument. NY: Retiary.org.

——(1987). A Short History of Intelligent Instruments, Computer Music Journal, 11 (3), 7–9.

——(1992). Performing with Active Instruments–an Alternative to a Standard Taxonomy for Electronic and Computer Instruments, Computer Music Journal, 16 (3), 5–6.

——(1998). Graphical Groove: Memorium for a Visual Music System, Organised Sound, 3 (3), 187–191.

——(2000). Music as Mirror of Mind, Organised Sound, 4 (3), 151–152.

Stockhausen, K. (1974). Mikrophonie I. London: Universal Edition.

Tanaka, A. (2000). Musical Performance Practice on Sensor-Based Instruments. In M. M. Wanderley and M. Battier (eds.), Trends in Gestural Control of Music. Paris: IRCAM-Centre Pompidou.

Tarabella, L. (2004). Handel, a Free-Hands Gesture Recognition System, In Proceedings of the 2004 Second International Symposium Computer Music Modeling and Retrieval (CMMR 2004). Esbjerg, Denmark: Springer Berlin / Heidelberg, 139–148.

——(2004). Improvising Computer Music: An Approach, In Proceedings of the 2004 Sound and Music Computing ’04 (SMC04). IRCAM, Paris.

Tenney, J. (1992). Meta-Hodos and Meta Meta-Hodos: A Phenomenology of 20th Century Musical Materials and an Approach to the Study of Form. 2nd rev. edn., Hanover, NH, USA: Frog Peak Music.

Toop, R. (2000). Writing Music’s Boundaries, In Proceedings of the 2000 Sydney Society of Literature and Aesthetics Conference: Film, Performance, Kinetic Art. University of Sydney, 61–69.

Vella, R. (2000). Musical Environments: A Manual for Listening, Improvising and Composing. Sydney: Currency Press.

Vercoe, B. (1984). The Synthetic Performer in the Context of Live Performance, In Proceedings of the 1984 International Computer Music Conference (ICMC). Paris, France: International Computer Music Association, 199–200.

Vercoe, B. and Puckette, M. (1985). Synthetic Rehearsal: Training the Synthetic Performer, In Proceedings of the 1985 International Computer Music Conference (ICMC). Vancouver, Canada: International Computer Music Association, 275–278.

Viola, B. (2004). David Tudor: The Delicate Art of Falling, Leonardo Music Journal, 14, 48–56.

Waisvisz, M. (1985). The Hands, a Set of Remote MIDI-Controllers, In Proceedings of the 1985 International Computer Music Conference. San Francisco, CA: International Computer Music Association, 86–89.

——(1999). On Gestural Controllers. www.crackle.org/writing.php. Viewed 29/10/2006.

Wanderley, M. M. (2001). Performer-Instrument Interaction: Applications to Gestural Control of Music. Ph.D. diss. Université Pierre et Marie Curie (Paris VI), Paris.

——(2001). Gestural Control of Music, In Proceedings of the 2001 International Workshop on Human Supervision and Control in Engineering and Music. Kassel, Germany.

Wanderley, M. M. and Orio, N. (2002). Evaluation of Input Devices for Musical Expression: Borrowing Tools from HCI, Computer Music Journal, 26 (3), 62–76.

Weinberg, G. (2003). Interconnected Musical Networks: Bringing Expression and Thoughtfulness to Collaborative Music Making. Ph.D. diss. Massachusetts Institute of Technology, Cambridge, MA.

Weinberg, G. and Driscoll, S. (2006). The Perceptual Robotic Percussionist - New Developments in Form, Mechanics, Perception and Interaction Design. http://www-static.cc.gatech.edu/~gilwein/pow.htm. Viewed 1/12/2006.

Weinberg, G., Driscoll, S., and Parry, M. (2005). Musical Interactions with a Perceptual Robotic Percussionist, In Proceedings of the 2005 14th IEEE International Workshop on Robot and Human Interactive Communication (RO-Man 2005). Nashville, TN, USA, 456–461.

Wessel, D. (1991). Improvisation with Highly Interactive Real-Time Performance System, In Proceedings of the 1991 International Computer Music Conference (ICMC). San Francisco: International Computer Music Association, 344–347.

Wessel, D. and Wright, M. (2002). Problems and Prospects for Intimate Musical Control of Computers, Computer Music Journal, 26 (3), 11–22.

Whitby, B. (2003). Artificial Intelligence: A Beginner’s Guide. Oxford: Oneworld Publications.

Wilhelm, R. and Baynes, C. (1967). The I Ching or Book of Changes, with Foreword by Carl Jung. 3rd edn., Princeton, NJ: Princeton University Press.

Wilkes, J. (2003). Flowforms: The Rhythmic Power of Water. Edinburgh, Scotland: Floris Books.

Winkler, T. (1995). Making Motion Musical: Gesture Mapping Strategies for Interactive Computer Music, In Proceedings of the 1995 International Computer Music Conference. Banff, AB, Canada: The International Computer Music Association, 261–264.

——(1998). Composing Interactive Music: Techniques and Ideas Using Max. Cambridge, MA: The MIT Press.

Wishart, T. (1994). Audible Design: A Plain and Easy Introduction to Practical Sound Composition. York, England: Orpheus the Pantomime.

——(1996). On Sonic Art. New and Rev. edn., The Netherlands: Harwood Academic Publishers.

Wright, M., et al. (1998). New Applications of the Sound Description Interchange Format, In Proceedings of the 1998 International Computer Music Conference (ICMC). Ann Arbor, USA: International Computer Music Association, 276–279.

Wright, M., Freed, A., and Momeni, A. (2003). Open Sound Control: State of the Art 2003, In Proceedings of the 2003 International Conference on New Interfaces for Musical Expression (NIME–03). Montreal, Quebec, Canada.

Xenakis, I. (1992). Formalized Music, Thought and Mathematics in Composition. Rev. edn., NY: Pendragon Press.

Yasser, J. (1932). A Theory of Evolving Tonality. New York: American Library of Musicology.

Zicarelli, D. (1987). M and Jam Factory, Computer Music Journal, 11 (4), 13–29.

Discography

Bischoff, J. & Perkis, T. (1989). Artificial Horizon. Artifact Recordings, AR102.

Cage, J. (1991). Music for Merce Cunningham (incl. Cartridge Music). Mode 24.

——(1999). Music for Percussion (incl. Imaginary Landscape No. 1). Hungaroton 31844.

——(2002). John Cage: Will You Give Me to Tell You (incl. Imaginary Landscape No. 4). Albedo 21.

Chadabe, J. (1981). Rhythms. VR 1301.

——(2004). Many Times… Electronic Music Foundation EMF CD 050.

Lamb, A. (1995). Primal Image: Archival Recordings 1981–1988. Dorobo Records 008.

Lewis, G. (1993). Voyager. Avant 014.

Lucier, A. (1990). I Am Sitting in a Room. Lovely Music, LO1013.

Mumma, G. (2002). Live Electronic Music. Tzadik TZ 7074.

Rockmore, C. (1987). The Art of the Theremin. Delos DE 1014.

Spiegel, L. (2001). Obsolete Systems. Electronic Music Foundation EM119.

Stockhausen, K. (1995). Mikrophonie I & II. Stockhausen-Verlag 009.

Subotnick, M. (1967). Silver Apples of the Moon. Re-released on Wergo 2035, 1994.

SynC (2006). Parallel Lines. Celestial Harmonies 13265–2.

Teitelbaum, R. & Braxton, A. (1977). Time Zones. Arista 1037.

The Hub (1989). Computer Network Music. Artifact 1002.

——(1993). Wreckin’ Ball: The Hub. Artifact Recordings, AR107.

Xenakis, I. (2000). Electronic Music. Electronic Music Foundation EM102.

APPENDIX A: DESCRIPTION OF THE COMPANION CDS

This thesis includes two companion CDs. The first is an audio CD that contains recordings of the folio of creative works described in Chapter Five. The second is a CD-ROM that includes video files, still images, audio files, working versions and source code for the folio of creative works. The contents of these CDs are as follows:

CD 1: Audio CD

1. [7:09] Book of Changes: Albert Tiu (piano), Shane Thio (violin), interactive computer system. Live recording, 2003 International Computer Music Conference, Singapore (2003).

2. [9:59] Plus Minus (+-): Jon Drummond (laptop). Live recording, Sonic Connections Festival247, University of Wollongong (2003).

3. [13:11] Sonic Construction: Jon Drummond (interactive computer system, coloured dyes, water). Live recording, 2006 Adelaide Festival (2006).

4. [8:40] Sounding the Winds: Jon Drummond (interactive computer system, kite). Live recording–excerpt, 2005 Electrofringe Festival, King Edward Park, Newcastle Australia (2005).

5. [12:25] Six Degrees of Tension: Jon Drummond (laptop), John Encarnacao. Live recording, 2006 Aurora Festival248, Campbelltown, Australia (2006).

247 http://www.uow.edu.au/crearts/sonicconnections/SonicConnectionsPrograms.pdf, viewed 1/5/2007.
248 http://www.aurorafestival.com.au/2006archive/campbelltown.html#Concerts, viewed 1/5/2007.

CD 2: CD-ROM

The CD-ROM includes video recordings of performances, still images, audio files in MP3 format, working versions, and source code for the folio of creative works. For a detailed description of the contents and to access the media, please open the index.html file in a web browser.