Instrument design for live electronics

Amir Bolzman Bachelor's Thesis

Institute of Sonology The Hague May 2015

Abstract

This thesis concerns my research in the field of instrument design for live performance, undertaken in response to personal challenges I encountered as a performer. This research resulted in two instruments: the first uses a traditional instrumental paradigm of a one-to-one correspondence between physical gestures and individual sound events, while the second takes a more reactive approach influenced by the work of David Tudor. I will discuss the strategies I developed for integrating each instrument into solo and group performances. The last part concerns plans for future work in this field, including a description of a new instrument and new performance strategies.


Acknowledgements

I would like to thank Johan van Kreij, who opened for me the door to the world of instrument design and improvisation. Without his support and kindness I could not have begun my journey. I would like to thank Richard Barrett for his insightful remarks and guidance. I would like to thank Joel Ryan for his inspiring lessons and conversations. I would like to thank all of the teachers and students in the Institute of Sonology for creating such a fertile environment for experimentation and development. Special thanks go to Adam Juraszek, Peter Edwards and Jol Kristína Karabová for helping me with language. I would like to thank my family for supporting me and encouraging me to continue with my studies and my music.

This thesis is dedicated to the Villa K people, who gave me a roof to live under, supporting company and a lot of inspiration.

Contents

1. Introduction
   Background and Motivation
2. Guidelines for my instrument design
   2.1 Introduction
   2.2 Virtuosity
   2.3 Effort
   2.4 Hands and ears (only)
   2.5 Stability and mobility
3. FWWM (FireWalkWithMe)
   3.1 Description and motivation
   3.2 Sound Engine
   3.3 Interface
   3.4 Controlling, mapping and playing techniques
   3.5 Spatialisation
   3.6 Playing live with FWWM
   3.7 ABRA Vocal Ensemble and Amir Bolzman (Track 1 on appendix CD)
   3.8 Conclusions
4. TudorMachine
   4.1 Description and motivation
   4.2 Sound Engine
   4.3 Interface
      A. Version 1: MagicPad
      B. Version 2: FSR Matrix Array
      C. Towards a new approach
      D. Version 3, UC 33, from active to reactive
   4.4 Mapping
      A. Global control
      B. Local control
   4.5 Spatialisation
   4.6 Playing TudorMachine
   4.7 Two examples of TudorMachine performances
      A. Fdbk Ptrns
      B. Improvisation with Nikolaj Kynde
   4.8 Conclusion
5. RawTouch - Conductive Digital Instrument
6. Conclusion and Future Work
References
Appendix 1: Contents of the accompanying CD
Appendix 2: Etude for QuNeo and Noise, Notation and Performing Techniques
Appendix 3: List of selected performances

Introduction

Background and Motivation

After graduating from my music studies at the Musrara School for Art, I became involved with the local free improvisation scene in Jerusalem. Live performance wasn't new to me: I had almost 10 years of experience playing keyboard and piano in various rock and pop bands, especially in electronically oriented groups where I would also operate a computer. Yet it seems that my broad experience didn't prepare me to perform this kind of improvisation. I knew how to play the right melodies, play a groovy rhythm or a funky synth sound, but I didn't have the tools or the right idiom for improvised electronic music. I would use different Max/MSP patches or trigger samples in Ableton Live, but none of those tools gave me the possibility to build a coherent and complex musical idiom on stage. When playing with other musicians I felt I could not respond in the way that I wanted or imagined; my hands couldn't follow my ears. Although I got positive comments from my colleagues, I never felt that I was really playing. I felt I was toying, trying and hoping for the best. At that time, I believed that electronic music could not be improvised with the same amount of control and virtuosity as traditional music.

One day, a friend of mine invited me to a lecture and concert by the electronic musician Wade Matthews. Wade played a duo set with a saxophone player and later with a viola player. His performance setup consisted of a laptop, a foot pedal and a mouse. I couldn't believe what my eyes were seeing and what my ears were hearing. This laptop musician could actually improvise music with another "real" musician. Not trying, not "experimenting", but actually playing along with another musician, hitting the right sound at the right time, surprising the audience and controlling his instrument. What was the difference between my Max/MSP performances and Wade's virtuosic performance?

Matthews had an instrument that he could perform with, practice, learn and eventually master. I had a few Max/MSP patches that I couldn't connect with each other, a small tape recorder and some tapes to play. My setup could create interesting sounds, but it wasn't made to be played live.

Jumping to the present time: I'm sitting in Bea7, finishing programming my new instrument TudorMachine, and connecting a MIDI interface to control it for the first time. Suddenly this list of numbers, objects and lines of code starts to make a stream of sound that can be perceived as music. More than that, after a month of manually manipulating the patch with mouse and keyboard, when I start to use the interface, move the faders and push buttons, I begin to discover new sounds, new textures and possibilities that I hadn't thought of when programming the instrument. My hands follow my ears to create new sounds that were hidden underneath the program and that suddenly reveal themselves to the world.

The path between these two stories and two points in time is the subject of this thesis. It describes the research that resulted in the creation of two new electronic instruments. This research also involved a large number of performances with a variety of musicians, realising these instruments' capabilities on stage.


Guidelines for my instrument design

2.1 Introduction

In this chapter I would like to introduce my guidelines for the design of the instruments that will be described in this thesis. Each instrument has a different sound character and different capabilities; however, I believe that the following guidelines are relevant to all of them. The fundamental goal of my design was to create instruments that would allow me to play live electronic music. The two keywords for me are:

live → playing music in real time, but most importantly having a feeling of liveness when performing.
play → to control, to have some kind of virtuosity over the instrument.

2.2 Virtuosity

The instrument design should allow the performer to develop a certain amount of virtuosity (Riad, 2013, p. 44). By virtuosity I mean the possibility for the player to develop some kind of control over their instrument, to develop a clear notion of how to produce specific sounds, pitches or textures, with the ability to reach them immediately. Virtuosity can also mean a deep understanding of the instrument's potential and the ability of the player to use this potential for his or her own musical ideas. Unity, restriction and consistency are crucial for a design that makes virtuosity possible. Computer musicians, including myself, are often tempted to try to create an instrument that will be able to play "everything"; this temptation is due to the infinite possibilities and methods of creating sounds with the computer. This approach can lead the musician towards an instrument that is too complex for them to play. A practical design tries to find a compromise between the desire for complexity and the ability to control it. Restriction is needed in order to find this point.

“The real time artist may be forced to compromise technically but always has the option to resolve an unsatisfactory computer part simply by playing a little more in the right place”. (Ryan)

Unity is needed to establish a clear connection between the performer, the control interface and the sound model. This is the most essential part of an instrument design; it demands creativity, experimentation and imagination in order to find the closest connection. There is no good or bad controller or mapping method; it is only a matter of finding the controller and mapping strategies that best fit the sound model the player wants to control.

Restriction is needed in choosing the number of control points and/or methods. A player can handle only a certain amount of control during a live performance, and the parameters that are used need to be only the most important ones.

Choosing the "right" parameters, or the most "musical" ones, is a matter of experimenting with the sound model and finding points that could be interesting to play with. An instrument needs to remain constant for a certain amount of time in order for the player to learn it. The temptation to add features, to change the code or to add more parameters needs to be resisted in order to give the performer time to develop virtuosity.

2.3 Effort

Effort and risk are two words that might help me to explain what liveness means to me. In the STEIM Touch manifesto the writers describe the concept of effort and its importance in live performance: "A singer's effort in reaching a particular note is precisely what gives that note its beauty and expressiveness. The effort that it takes and the risk of missing that note forms the metaphor for something that is both indescribable and the essence of music". (Norman, Ryan, Waisvisz, 1998) Effort can mean the physicality of a gesture, to produce or hold a sound, or the effort required to reach a certain desired texture, pitch or specific timbre. All of these depend on the instrument, which should be versatile enough to allow the player to reach the desired goal and complex enough to let the player struggle along the way. Risk is the possibility of failure; in the quoted text the writers describe a singer who risks not reaching the right note. The possibility of failure, of making mistakes, is an essential part of improvised music and the creative process. For me, as a player, it is what makes the difference between a good performance and a bad performance. When I learn something new about my instrument or about music during a performance, it is because I took a risk to try something new.

2.4 Hands and ears (only)

"The blind touch of a musician is still superior to the awkward musings of mouse man" (Ryan)

A beginner instrumentalist tends to look constantly at their hands to see if they're playing the right note, while a trained instrumentalist learns to hear and feel in order to understand what they are playing. The ability to develop listening capabilities as a tool for sound localisation is essential for anyone who deals with sound, whether an instrumentalist, sound engineer or sound artist. Visual aids such as waveform viewers, oscilloscopes, spectrograms, etc. are by now ubiquitous in electronic music. They are undoubtedly useful for studio production, giving the user extra visual information to help 'find themselves' in the sound, but I question their necessity for live performance.

My approach is to rely on my ears as the only feedback for my playing. I believe that this approach can lead to a more intuitive method of playing, meaning more natural playing that is a result of listening and not of seeing or thinking. I do think that visual feedback could be useful for certain instruments, but it needs to be justified as part of the musical output rather than just a by-product of using the computer as a platform.

The one click law is a method I use to avoid any use of visual feedback (from my computer) during my performance. It means that the programmed sound engine should work and remain stable with only one mouse click after the program is loaded. The only visual feedback I get from the screen is a dB meter and a toggle button to turn on the audio.

2.5 Stability and mobility

Computer-based music is distinguished from traditional instrumental practice by the fear of one's instrument failing or crashing catastrophically. Live performance, and especially solo performance, is a delicate situation where the performer needs to be focused in order to achieve the best result. An unstable instrument design can lead to unwanted concerns during the performance. These concerns can easily be addressed by ensuring that all the units that comprise the instrument (software, computer, control interface, sound interface and cables) work together before the concert. Restricting the amount of equipment and trying to use only what is essential leads to better stability and better mobility.

In the next chapters I will demonstrate how these guidelines were embedded in my design.

FWWM (FireWalkWithMe)

Feedback-based instrument

3.1 Description and motivation

This is the first instrument I designed during my time in the Sonology department. I started to work on it in March 2013 and have performed with it since June 2013. These performances included solo performances, playing with the Sonology Electroacoustic Ensemble, and playing with electronic musicians and acoustic instrumentalists. All of them were based on free improvisation.

Fire Walk With Me is a line from a poem in David Lynch's TV series Twin Peaks. I found this line an inspiring phrase for my instrument: I want fire and excitement to accompany my playing with it. As described in the previous chapter, my main motivation was to create an instrument that would give me a feeling of liveness when I perform. In his master's thesis, Zadel writes about the liveness often missing from laptop performances: "Laptop performance often lacks the sense of effort and active creation that we typically expect from live music, and exhibits little perceivable connection between the performer's actions and the resulting sound". (Zadel, 2006, abstract) FWWM tries to overcome this problem by introducing a very active way of producing sound and very transparent playing.

FWWM is an active instrument, which means the player needs to make a constant effort to create and hold sounds. This effort is put into achieving certain sounds, mostly pitched sounds that sometimes "run away" because of the instrument's inherently chaotic architecture. This way of putting effort into achieving certain sounds is part of the beauty of the instrument (as described in the previous chapter).

In some respects, FWWM is a very transparent instrument: most of the control parameters are assigned to traditional acoustic instrument parameters such as pitch, timbre and amplitude. It has the clear note-to-note and 'one gesture – one acoustic event' playing paradigm present in all traditional instruments (Jordà, 2008, p. 12). Transparency gives the audience the possibility to follow the player's movements and connect them to the music produced. It allows the listener to appreciate the player's effort and technique, and allows virtuosity to be expressed. FWWM has a clear sonic identity; its sound range moves from noise to distorted pitched material. With this limited sound material the player can achieve a wide range of textures and timbres by applying a range of different playing methods.

3.2 Sound Engine

The instrument was built in the Max/MSP environment. It uses a simple chaotic feedback model. This model gives the player the possibility to move from pitched sounds to more abstract noisy textures. Because of the inherently chaotic structure, the sound tends to be unstable, but it can still be controlled and tamed. I find this chaotic behavior useful for two main reasons. Sound-wise, the chaotic signal architecture creates complex sound streams that constantly fluctuate. As a listener I would describe the sounds it creates as very alive and rich. There is a constant struggle and tension within the sound: whether it is a struggle to tune to the "right" pitch or overtone, or to balance between two rhythmical patterns, the sound is always in movement. As a player I get the feeling of controlling a physical system, a stream of energy and not a stream of numbers. This feeling is important for the liveness that I am searching for. Also, because of the unstable nature of the system, I need to put effort into reaching the sound that I want and into remaining there. In conclusion, I think FWWM presents a good balance between chaotic behavior and the possibility to control it.

FWWM contains 12 independent voices. Figure 1 is an illustration of one of the voices.

Figure 1 FWWM Unit

The sound is generated by a feedback loop that is based on reading through a sine wavetable with its phase input. The arctan function is used for soft clipping, limiting the amplitude between -1 and 1.

The player of the instrument can control four main parameters:

1. Pitch – by controlling the center frequency of the resonant filter. The pitch that is produced is not absolute and constant; the player sometimes gets different harmonics of the desired pitch. The pitch is also affected by the buffer size and delay time. I usually work with a buffer size of 256 samples at a sample rate of 44100 Hz, so the minimum delay time is 5.805 milliseconds.
2. Pitch/Noise – by changing the Q factor of the resonant filter, the player can shift from a pitched sound to a band-limited noisy texture.
3. Distortion – achieved by controlling the amount of feedback. By distortion I mean increasing the number and amplitude of the signal's overtones.
4. Amplitude (Gain) – controlled by an ADSR envelope (discussed below).

Because it is a complex network, a change in any one parameter will affect the others. For example, a change in the feedback amount can cause a jump in the overtone series of the sound, and widening the bandwidth of the filter will cause more overtones to become audible, an effect that could also be described as distortion.
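To make this signal flow concrete, the following is a minimal Python sketch of one such voice, written from the description above rather than from the actual Max/MSP patch; the resonator implementation and all constants except the 256-sample delay are illustrative assumptions.

```python
import numpy as np

SR = 44100
DELAY = 256  # feedback buffer of 256 samples (about 5.8 ms), as mentioned above

def reson_coeffs(freq, q):
    # Two-pole resonator coefficients: freq is the center frequency in Hz,
    # q controls the bandwidth (higher q gives a narrower, more pitched band).
    r = np.exp(-np.pi * freq / (q * SR))
    return -2.0 * r * np.cos(2.0 * np.pi * freq / SR), r * r

def fwwm_voice(n, pitch=220.0, q=50.0, feedback=0.9, gain=0.5):
    # One hypothetical FWWM-style voice: delayed feedback -> resonant filter
    # -> arctan soft clip -> sine-wavetable read by phase -> back into the delay.
    a1, a2 = reson_coeffs(pitch, q)
    delay = np.zeros(DELAY)
    y1 = y2 = 0.0
    out = np.zeros(n)
    excite = 0.01                                  # small impulse to start the loop
    for i in range(n):
        fed = delay[i % DELAY] * feedback          # distortion: feedback amount
        y = (excite + fed) - a1 * y1 - a2 * y2     # pitch/noise: frequency and Q
        y2, y1 = y1, y
        clipped = np.arctan(y) / (np.pi / 2)       # soft clip to (-1, 1)
        sig = np.sin(2.0 * np.pi * clipped)        # read the sine table by phase
        delay[i % DELAY] = sig
        out[i] = sig * gain                        # amplitude (envelope omitted)
        excite = 0.0
    return out

# One second of a pitched, fairly distorted setting (values chosen arbitrarily)
tone = fwwm_voice(SR, pitch=330.0, q=80.0, feedback=0.95)
```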

3.3 Interface: Quneo controller

The design for this instrument started with a search for a MIDI interface that would meet my goals as described in the introduction, namely an interface with a strong physical aspect to it, an interface that I could touch and press and that would afford continuous control of parameters. As a keyboard player, I tried to avoid using keyboard interfaces for playing electronic music. The reason for this is the fixation of my hands on piano patterns and, as I mentioned in the introduction, I wanted to move from the paradigms of tonal music towards more abstract sounds. Bob Ostertag describes the duality of using a MIDI keyboard: "… for me it was just a bunch of buttons. I thought about that a lot. When I first started using it, it felt very disconcerting to be sitting on stage in front of a keyboard, and then an audience would come in and expect you to play this keyboard, and then you'd be using it as a bunch of switches, and display none of the facility that people would expect you to display when you sit down at a keyboard". (Ostertag, interview @cycling74.com, 2005)

I decided to use a Quneo controller (fig 2). It uses trigger buttons and switches like a MIDI keyboard but has a completely different layout, which I found refreshing and new. Another important feature is that the audience doesn't have any idiomatic expectations of how it should sound or be controlled. The most important feature of the Quneo is that it offers continuous control on every pad and fader, a feature that allows much more expressive playing than a regular MIDI keyboard: for example, three-dimensional control on every pad and two-dimensional control on every fader. Multi-dimensional control is very useful for a player who wants to express a more complex gesture with one movement. Although I didn't use this possibility for the basic parts of the instrument, I find the 3D control very useful for the effects of the patch, especially for controlling higher-level processes, e.g. granulation.


Gaming foot pedals

The foot pedals give me the possibility to control two more parameters simultaneously with my hands. Gaming foot pedals, unlike expression pedals designed for musicians, need to be constantly pressed; otherwise they snap back to their initial position. This feature suits the physical playing goal proposed for FWWM. In combination with the Quneo controller I have an instrument that encourages the use of the whole body to produce sounds and music. The downside of the pedals is that they force me to sit, not an option I would choose at every performance. Also, I often get humorous comments about the pedals, making a connection between my sound material and car racing sounds. This is not surprising or insulting, but it can be annoying sometimes. I understand these comments; it is clear that one cannot disconnect the common visual meaning of an object just because one uses it differently. As a performer I should be aware of all the symbolic and aesthetic meanings of my performance tools, such as controllers, the computer, etc. They should all be taken into account as part of the concert. For now, I don't have a solution for this problem. In the future I wish to find a less ambiguous model; meanwhile I find the pedals too useful to let them go.

Figure 2 FWWM mapping parameters


3.4 Controlling, mapping and playing techniques

Examples

Here I will briefly describe the most interesting mapping methods that I used in the instrument. I will also give some examples of techniques I have developed for using them. On the accompanying CD you will find a live performance I gave with the ABRA ensemble (described later); most of the following techniques can be heard in the recording.

Triggering Events

Pads 1-12 (and some faders) trigger instances of the feedback patch. Each pad is assigned to a different base frequency of the resonant filter, from low to high, and can also modulate the amplitude using the pressure control of the pad. Because the Quneo has continuous control on each pad, each instance can be played with a very wide dynamic range. I used an exponential curve on the amplitude control to give me greater resolution for gentle gestures.
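As a rough illustration of that amplitude curve, the sketch below maps 7-bit pad pressure to gain with an exponential shape; the function name and exponent are hypothetical, only the intent (finer resolution for gentle gestures) comes from the text.

```python
def pad_pressure_to_gain(pressure, curve=3.0):
    # Map 7-bit pad pressure (0-127) to a gain between 0 and 1. With curve > 1,
    # most of the pressure range is spent on quiet levels, so gentle gestures
    # get finer resolution. The exponent is an illustrative value.
    x = max(0, min(127, pressure)) / 127.0
    return x ** curve

# A light touch (20) gives roughly 0.004, a firm press (120) roughly 0.84
print(pad_pressure_to_gain(20), pad_pressure_to_gain(120))
```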

Timbral change

When you play one of the pads you will hear different filtered noises. Pressing the left foot pedal will transform the sound from noise to pitch. This is achieved by assigning the pedal control values to the Q of the resonance filter. This is a global control on all of the instances that are triggered.

Shaping Amplitude Envelopes

As described before, each of the voice pads has continuous control over the amplitude of the voice, which is useful for dynamic playing and for controlling the attack and sustain parameters. The release parameter is assigned to pad 14 and works in a similar way to a sustain pedal. The main difference is that in my design I can change the release length with the amount of pressure on the pad. This combination is very powerful for creating different envelopes for each sound event. For instance, I can create a very fast crescendo that ends with a very short tail, or a very short sound that has a very long release tail.

Scrubbing

This is a method of playing that wasn't planned while I was programming the instrument; it was "discovered" while practicing and performing with it. Scrubbing is a fast circular movement over the pads with the thumb. Since the scrub is applied to different pads, the result is a fast sequence of filtered noise sounds that, with a filter change using the right pedal, can become a fast sequence of short pitched sounds. In appendix 2 you can see the introduction to a score I wrote for a percussion player and a Quneo controller. The score uses mostly the scrubbing technique mentioned here.

I wrote the score because a percussion player was interested in my instrument. In this piece I wanted to observe what a well-trained musician could achieve using my instrument. Unfortunately, the piece was never realised due to scheduling problems.

Random envelope generator/Granulator

This unit is applied to the sum of all of the voices. It applies a stochastic stream of percussive envelopes to the signal, which creates a grainy texture. This unit uses the 3D function and is mapped to pad 13. The assigned parameters are:

● Size/Length of the grain, X axis
● Density of the grains, Y axis
● Amount of processing applied to the signal, Z axis (pressure)

The 3D control pad allows me to shift between grain textures with just one finger, controlling 3 parameters simultaneously. In general, my approach to this instrument was to first attempt to realise particular sounds and then to modify or add features if certain sounds that I found necessary were not available. This was the case with the granulator, so I programmed it and added a higher-level control method.
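A minimal sketch of such a stochastic envelope stream is given below, assuming Poisson-like scheduling of percussive grains and a simple dry/wet mix for the "amount" parameter; none of the constants are taken from the patch.

```python
import numpy as np

SR = 44100

def grain_envelope_stream(signal, length_ms=40.0, density=20.0, amount=1.0):
    # Apply a stochastic stream of percussive (instant attack, exponential
    # decay) envelopes to a signal. 'density' is the average number of grains
    # per second and 'amount' is a dry/wet mix for the processed signal.
    n = len(signal)
    glen = max(1, int(SR * length_ms / 1000.0))
    grain = np.exp(-np.linspace(0.0, 6.0, glen))        # percussive decay shape
    onsets = np.flatnonzero(np.random.rand(n) < density / SR)
    env = np.zeros(n)
    for i in onsets:
        end = min(i + glen, n)
        env[i:end] = np.maximum(env[i:end], grain[:end - i])
    return (1.0 - amount) * signal + amount * signal * env
```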

Pseudo Gendy Effect

The right pedal works as a pitch bend controller. The humorous name "Pseudo Gendy" describes a fast gliding movement over a few pitches. It is reminiscent of the behavior of the GENDY model, but controlled manually and not with the high-level control of the original model. This is a good example of reaching a desired sound aesthetic by developing my playing technique rather than a programmed solution.

3.5 Spatialisation

I mainly use two kinds of spatialisation techniques: one is suited for solo performance and the second is made for playing with acoustic instruments. When I perform alone, I use a stereo model with slightly different settings of the synthesis model for each speaker. That helps create a larger sound image and a stronger sonic effect. Most of the time the stereo effect is hardly noticeable, but there is a chance of suddenly "losing" one of the voices due to the unstable character of the model.

When playing with acoustic instruments I try to localise my sound to where I sit in order to blend with the other players, whose sound is naturally localised to their position. In this case I use a 3-channel setup: a mono speaker and a stereo pair. The mono speaker is the one closest to me and the stereo pair is usually the venue's regular speaker system. I use the mono speaker as the main speaker that creates my localised sound; the stereo pair is used when I play extremely loudly or when I play granular effects that need a more spatial setup. This method is quite effective when playing with more than one acoustic musician.

3.6 Playing live with FWWM

Playing improvised solo performances

During my first year of playing this instrument I primarily performed solo improvised sessions, which I found very challenging. The challenge lies in two aspects. One is the nature of solo performance: the situation of being on stage, alone, and being expected to deliver something is challenging and, more than that, even scary. FWWM, in some ways, is closer to acoustic instruments than to electronic ones. This is due to its limited spectral range, the fact that the instrument has mainly one sound that characterises it, and the constant effort demanded of the player. While other electronic instruments might exploit the computer's possibilities for multi-layering or other reactive features, FWWM leaves the player naked on stage.

The second challenge has its origins in the non-idiomatic nature of free-improvisation. Derek Bailey describes the difference between ‘idiomatic’ and ‘non-idiomatic’ improvisation forms in the following manner: “Idiomatic improvisation, much the most widely used, is mainly concerned with the expression of an idiom – such as jazz, flamenco or baroque – and takes its identity and motivation from that idiom. Non-idiomatic improvisation has other concerns and is most usually found in so-called 'free' improvisation and, while it can be highly stylised, is not usually tied to representing an idiomatic identity.” (Bailey, 1992) Bailey’s description of a non-idiomatic improvisation is even more radical when performing improvised electronic music, which has a relatively short tradition of live performance compared to traditional music.

What I'm trying to describe here is a very exposed feeling, where I play an instrument that nobody is familiar with and music that doesn't really have any rules for playing it. This is a challenge, but it also contains the potential for creative responses to the problem.

One way of structuring my playing method and perspective was to find a familiar reference point. I decided to try to think of myself more as a classical guitar improviser and less as a "one man band" laptop performer. This reference point was useful in helping me find a starting point for performing. In practice, that means some of the following things:

● Phrasing, creating a personal idiom. First, I started trying to phrase musical sentences and events. I also tried to create longer and longer musical sentences and connect them together. By musical phrasing I mean a collection of events with some kind of directionality or character. In my opinion, this is a great feature of the instrument compared to other electronic instruments: the possibility to create short or long sentences, make variations on them and control them gives a lot of power to the player.
● Accepting silent moments and using them.
● Practicing. Learning the instrument, its different zones and different playing techniques. Even now, when I have a concert after a long time of not playing or practicing the instrument, my ideas for playing are limited. A week of practice prepares my bank of ideas for the concert.
● Revealing my body, hands and interface to the audience. I wanted the audience to see what I'm doing, to allow transparency of my playing. I believe that since this instrument is quite straightforward in its playing technique, the audience can understand what I'm doing. I wanted to let the audience see the effort that I exert while playing the instrument. Revealing my playing method and body also exposes the fragility of the solo performance; I use it as an advantage and an opportunity to create a sincere and intimate situation.

Playing with other musicians

During the last two and a half years I have had the opportunity to play with a wide variety of musicians, both electronic and acoustic, with each of them bringing new experiences and context to the collaboration. Each one needed a different playing approach and different preparation. When playing with other electronic musicians there are usually some assumptions already taken into account. The improvisation idiom is usually about sound manipulation or the textures we can create together. What is not known, compared to acoustic instrumentalists, is the kind of sounds the other player can produce and how fluently they can move from one sound to another. A good idea is to learn each other's frequency ranges and sound characters so as not to "step on one another's toes". When performing, I try to find sounds complementary to my partner's and avoid a sonic clash, which can sometimes lead to a muddy sonic experience. This can be achieved by playing in a different frequency range from the other musician, and by being aware of dynamics and finding room to play without taking over the dynamic and spectral range. For the listener there is usually a problem of sonic identity: when the instruments are not familiar it's hard to define who plays what. As a player, this is not such a big problem and in some situations it can even be an advantage. Compared to a solo performance, the pressure and responsibility are less crucial. I can hide behind my partner's sound and choose to play or not, think and then react.

Playing with acoustic instrumentalists or vocalists is a more complex and more challenging case. When playing with an acoustic instrumentalist, the sonic connection, at first, feels unnatural or imposed. Also, the idioms that the players are used to can be quite different. These differences raise a lot of strategic questions: How can we "blend" together, and to what extent is this necessary? Should I focus more on the sound textures or should I react to the pitch and tone? The answers to these questions are usually "solved" in the process of performing. What is important for me as a player is to have the tools and possibilities to choose which approach to take.

I would like to demonstrate some of the strategic decisions I made during the improvised performance with the vocal ensemble ABRA. Most of them are methods I have used in other performances, especially when playing with a large group of players.

3.7 ABRA Vocal Ensemble and Amir Bolzman

An excerpt from a live improvised session at the Jerusalem Season of Culture festival, August 2013, Jerusalem (Track 1 on the appendix CD). All the time marks mentioned refer to the recording.

Background

In July 2013 I was commissioned by the Jerusalem Season of Culture to create a performance in collaboration with a vocal ensemble from Jerusalem named ABRA. The ABRA ensemble consists of four vocalists who perform contemporary improvised music. Both sides were very excited about the collaboration and the possibility of performing with new musicians and musical styles.

What language are you speaking? Developing a mutual improvisation idiom

Although both of us (ABRA and I) had experience improvising with contemporary musicians, we hadn't experienced playing with musicians from such a different musical and aesthetic background. How would my harsh electronic sounds blend with ABRA's tender, human sound? Our first rehearsals were dedicated to learning each other's idioms and trying to find musical meeting points.

Simulation and imitation

After learning each other's basic musical and sonic ranges, we started trying to imitate or simulate each other's sounds and timbres. ABRA learned some of my sound qualities and made their own interpretations (example 00:00-01:00), adding them to their musical vocabulary for the performance. On my part, for example, I tuned to some of their harmonised moments, adding another voice to their vocal cluster (11:26-13:20). We used this method as a development strategy in the following way: finding our meeting point as practiced at the rehearsals, developing it, playing variations on it, moving to more independent layers. In a way, it's a very useful strategy for improvised music: finding the safe place and starting to move from there in a more independent direction. In order not to spend too much performance time on searching, we created familiar meeting points.

Sampling

My preference when playing with another acoustic instrumentalist is always to preserve and embrace the different sonic identities of the players, rather than to use effects to create easier sonic blending. In order to enrich my meeting points with ABRA and to have more expressive tools for the performance, I added an extra granulation unit to my instrument to sample and play the individual voices. Each singer's voice was assigned to a different short buffer that would continuously record their sound in a loop. If I played the recorded voice, the recording would stop. I set my playing possibilities to be very limited: I could control mainly two parameters for playing, amplitude and position in the buffer. Each voice control was assigned to a fader on the Quneo (Fig 2). As stated before, the faders on the Quneo have two dimensions of control, position and pressure. Similar to the rest of the mapping strategies in FWWM, the pressure parameter was assigned to the amplitude and the position was assigned to the position in the buffer to be played. I found this mapping very useful for the following reasons:

1. It applies the same playing and mapping methods in all parts of the instrument. That means an active approach to playing and triggering events is applied to all the units in the instrument. Also, avoiding another interface helps preserve the unity of the instrument and the playing.
2. It allows me to play this unit along with the regular part of the patch. You can hear a use of the granular engine from 03:30-07:30. Playing with the sampled voices and the regular units of FWWM can be heard at 07:25-08:10.
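The sketch below illustrates the idea of one such per-singer sampling buffer, with hypothetical class and method names; it only shows the record-until-played logic and the position/pressure playback mapping described above.

```python
import numpy as np

class VoiceSampler:
    # One per-singer buffer: it records continuously in a loop until its fader
    # is touched, then plays from the chosen position at a pressure-set gain.
    def __init__(self, seconds=2.0, sr=44100):
        self.buf = np.zeros(int(seconds * sr))
        self.write = 0
        self.playing = False

    def record(self, block):
        if self.playing:                        # recording stops while playing
            return
        idx = (self.write + np.arange(len(block))) % len(self.buf)
        self.buf[idx] = block
        self.write = (self.write + len(block)) % len(self.buf)

    def play(self, position, pressure, n):
        # position and pressure are 0-1 values from the Quneo fader:
        # position selects where in the buffer to read, pressure sets the gain.
        self.playing = pressure > 0.0
        start = int(position * (len(self.buf) - n))
        return self.buf[start:start + n] * pressure
```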

Spatialisation

For this performance we decided to use a 4.1 speaker setup (fig 3).

Figure 3 ABRA stage design

For my own spatialisation I used a method similar to the three-channel setup described before. Each vocalist was amplified locally and routed to the speaker next to her, and was also sent to my sound card. Afterwards, the processed, granulated voice was sent back to the speaker closest to the vocalist being processed. This setup succeeded in keeping the sound localisation of the musicians, and I could thus exploit the spatialisation possibilities of the setup. For the audience, it helped them appreciate the sonic aspects of the performance while keeping the sound image clear and recognisable.

A month later we performed again in another venue where these sound settings weren't possible. The difference in quality between the performances was obvious, and in favor of the first one. The poor sound equipment didn't give me any possibility to really explore the sonic domain, so I needed to react more to the ensemble's pitch content.

3.8 Conclusions

FWWM has become my main instrument for live performance, and definitely my first choice for performing improvised music. After almost two years of performing with it I feel a very strong connection to it and feel very comfortable playing it. The design of the instrument, the patching, is rarely changed; the instrument is a "frozen" project which is ready to use whenever it is needed. I sometimes change the way I play: lately I play more continuous sounds and more noisy sounds, and I feel more inclined to play minimally and less hectically. Bob Ostertag describes the importance of developing a long-term relationship with an electronic instrument:

“I played that thing for ten years, which was another deliberate choice of mine, because I think in electronic music people are in such a rush to get the latest thing, and to upgrade their system, and to get something faster and with more voices, that they never actually learn to play anything. I think particularly if you’re going to perform, then you have to develop some kind of… not virtuosity, but you have to learn to play something.” (Ostertag, 2005)

Ostertag mentions the learning time that is needed to learn to play an electronic instrument. What does it mean to learn to play an electronic instrument? An electronic instrument is differentiated from a traditional instrument by its non-idiomatic playing method. At first there is no notion of playing the instrument 'correctly'. Practice time is not about learning how to play a tuned pitch as on a violin, or how to play fast scales as on the piano. What needs to be learned is what the instrument can do, and how the player can use it in order to achieve the musical ideas he or she wants to achieve.

In retrospect I can define my early solo performances as exploring the sonic possibilities of the instrument and its chaotic sound structure. I think that gradually, by learning it, I started using it as a tool for live composition.

TudorMachine

Hybrid feedback network instrument

4.1 Description and motivation

TudorMachine is a feedback-based instrument. The word Hybrid in the title refers to its hybrid control interface: reactive and active. The reactive interface uses an Evolution UC33 MIDI controller, which is designed like a mixer. The active part is controlled by an Apple Magic Pad. On the accompanying CD you will find a composition, Kapara (Ma Nishama?) {Sweety, (What's up?)}, composed from edited material of live playing with TudorMachine. The composition was premiered in April 2015 at Barbur Gallery, Jerusalem.

The instrument is named after the American composer David Tudor, who served as a source of inspiration for creating it. It is a machine that produces sounds inspired by Tudor's aesthetics as I perceive them, with particular reference to his Rainforest series, in which he creates a very rich sonic environment. The reason why his sound environments are especially interesting is that they are composed of many small, independent voices with sonic similarity, together creating an overwhelming sonic experience. In an interview with Douglas Khan, John Bischoff describes Tudor's music as follows: "I was drawn to a music that sounded as if you were hearing the heart of the electronics, of electricity as a material. That meant a huge range of tones and noise and interruptions, unpredictable events and unpredictable control. But it also meant going down into the heart of it, where it's blossoming. That's what Tudor was doing, burrowing down." (Khan, 2004, p. 77) Tudor himself describes the compositional idea of Rainforest IV in the following manner: "... the object was to make the sculptures sound in the space themselves. Part of that process is that you are actually creating an environment. The contact mics on the objects pick up the resonant frequencies which one hears when very close to the object, and then are amplified through a loudspeaker as an enhancement." (Fullemann, 1984)

TudorMachine tries to imitate this environment and these sounds in the digital domain. The resonating objects of Rainforest become manifold virtual nodes that resonate in a virtual space, with the ability to connect them together and create a complex sonic experience.

Tudor's technique of playing is well defined by him: "I try to find out what's there – not to make it do what I want, but to release what's there. The object should teach you what it wants to hear." (Schonfeld, 1972) Therefore, the act of performing lies in revealing the system, listening to it, and reacting to the result.

A large part of this chapter will describe the quest for the right controller, one that would allow players to play this instrument. The quest included understanding the nature of Tudor's playing approach and changing the playing techniques to fit this nature. The last part of the chapter will give a general account of two concerts performed with TudorMachine, describing the performing strategies as well as the conclusions drawn from them.

4.2 Sound Engine

I did some experiments in order to achieve a sound that would be similar to a unit in Tudor's system (fig 4). A unit, or voice, in Tudor's system is one of many units that create the spatial environment of Rainforest. In an interview in the 1980s, Tudor described these units as "oscillators that made animal-like and bird-like sounds" (Fullemann, 1984). In more technical language, a unit could be described as a complex pattern of short, pulsed, pitched sounds, where the pitch oscillates around some limited set of pitches with a natural harmonic relationship. The model I used is a product of experimentation with different feedback methods and chaotic models, and is based on a trial-and-error methodology. Although it is surely not the most efficient model in terms of CPU usage, the sonic result suited the aesthetic intentions. The idea was to create a model that would resonate around certain frequencies, which would result in rhythmical patterns and pitch changes.

Figure 4 Tudor Unit

The code was written in the Max/MSP programming environment and uses gen~ objects. The following paragraphs describe the main features of the sound engine.

“Carrier” and “Modulator”

The model used (fig 4) is based on the FWWM model, but with the addition of an extra resonance filter that works as a modulator, while the first resonance filter becomes a carrier. The effect is similar to an AM model but with more complex chaotic behavior. Although the carrier and the modulator are not multiplied by each other (as in an AM model), the effect is the same. As in FWWM, players can control the carrier and modulator pitch by controlling the resonance filter cutoff frequency. The modulator frequency range is mapped to values below 20 Hz to achieve rhythmical patterns.
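The sketch below extends the FWWM voice sketch from the previous chapter with a second, sub-20 Hz resonator in the loop. It is an interpretation of the description above, not the gen~ code itself, and all constants are illustrative.

```python
import numpy as np

SR = 44100

def reson_coeffs(freq, q):
    # Same two-pole resonator form as in the FWWM sketch.
    r = np.exp(-np.pi * freq / (q * SR))
    return -2.0 * r * np.cos(2.0 * np.pi * freq / SR), r * r

def tudor_unit(n, carrier=300.0, modulator=6.0, q=40.0, feedback=0.95):
    # Hypothetical TudorMachine-style voice: a carrier resonator followed by a
    # sub-20 Hz "modulator" resonator inside one feedback loop, so the slow
    # resonance imposes rhythmic patterns on the pitched material.
    ca1, ca2 = reson_coeffs(carrier, q)
    ma1, ma2 = reson_coeffs(modulator, q)
    delay = np.zeros(512)
    c1 = c2 = m1 = m2 = 0.0
    out, excite = np.zeros(n), 0.01
    for i in range(n):
        fed = delay[i % 512] * feedback
        c = (excite + fed) - ca1 * c1 - ca2 * c2       # carrier resonator
        c2, c1 = c1, c
        m = c - ma1 * m1 - ma2 * m2                    # modulator resonator (< 20 Hz)
        m2, m1 = m1, m
        sig = np.sin(2.0 * np.pi * np.arctan(m) / (np.pi / 2))
        delay[i % 512] = sig
        out[i] = sig
        excite = 0.0
    return out
```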

Wavetable

TudorMachine also implements an option to transform between different wavetables. Two kinds of functions were used in the wavetable: a sine wave and a sinc function. The sinc function creates more complex pitch fluctuations than the sine wave. The player may also control the frequency scaling of the sinc function wavetable (Fig 5). Generally speaking, the higher the frequency scaling, the noisier the output and the noisier the pitched sounds become.

Figure 5 sinc functions
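A sinc wavetable of this kind could be generated as follows; the table size and scaling values are assumptions, but the idea that a higher scaling packs more lobes into the table, and so destabilises the pitch, is the one described above.

```python
import numpy as np

def sinc_wavetable(size=512, scale=4.0):
    # Build a sinc-function wavetable; higher 'scale' packs more lobes into
    # the table, which in this model yields a noisier, less stable pitch.
    x = np.linspace(-scale, scale, size)
    table = np.sinc(x)                  # numpy's sinc is sin(pi x) / (pi x)
    return table / np.max(np.abs(table))

gentle = sinc_wavetable(scale=2.0)      # close to a single main lobe
noisy = sinc_wavetable(scale=12.0)      # many lobes, noisier behavior
```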

Sigmoid function + waveshaper

Later, a sigmoid function was added as a wavetable on the modulator column; the sigmoid function gives an almost binary result and very sharp waveforms. It is useful for creating an envelope effect on the carrier signal. The waveshaper is used in the same way: it works as an expander and allows the player to make the envelope more or less sharp.

Delay line

A delay line makes it possible to create a feedback loop in Max/MSP. In TudorMachine, the delay line is also used to create pitch shifting by changing the delay time. The change can be performed manually or through a random signal generator (rand~). The minimum delay time in this model is 8 samples at a sample rate of 44100 Hz.
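A linearly interpolated variable delay line of this kind can be sketched as follows (in Python rather than gen~, with illustrative buffer sizes); sweeping the delay time transposes the signal passing through it.

```python
import numpy as np

SR = 44100

def variable_delay(signal, delay_samples, max_delay=4096):
    # Read a signal through a delay line whose length changes per sample.
    # A changing delay time transposes the pitch (a Doppler-style shift).
    # 'delay_samples' holds one (possibly fractional) delay per input sample,
    # clipped here to a minimum of 8 samples as in the model described above.
    buf = np.zeros(max_delay)
    out = np.zeros(len(signal))
    for i, x in enumerate(signal):
        buf[i % max_delay] = x
        d = min(max(delay_samples[i], 8.0), max_delay - 2.0)
        pos = (i - d) % max_delay
        i0 = int(np.floor(pos))
        frac = pos - i0
        out[i] = buf[i0] * (1.0 - frac) + buf[(i0 + 1) % max_delay] * frac
    return out

# A delay sweeping from 8 to 800 samples over one second bends the tone downwards
t = np.arange(SR) / SR
tone = np.sin(2.0 * np.pi * 220.0 * t)
shifted = variable_delay(tone, np.linspace(8.0, 800.0, SR))
```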

Auxiliary send and return

The aux send and return are the connection points with the other voices in the network. Each voice can send sound only to the voice that follows it in the patch, and each voice can receive only the sound of the voice that precedes it in the chain. It could be interesting to create more complex matrix connections; however, this patch was designed in correlation with the UC33 interface, which has a limited number of knobs.

4.3 Interface

A quest for the right interface

A. Version 1: MagicPad

As described in the previous chapter, the starting point for FWWM was to find the right control interface. The desire to work with a tactile interface was clear from the beginning, so after finding a suitable interface (the Quneo), the programming could start. TudorMachine started from the sound engine, finding the sound I wanted to use. At that time I was very interested in multi-touch control; I had been using the Apple Magic Pad as an effect controller for some time, similarly to the KORG Kaoss Pad effects processor. Trackpads such as the MagicPad have great potential as music controllers, as they introduce three-dimensional control per finger (up to 10 fingers): X, Y and Z, where the Z axis refers to the surface area of the fingertip on the pad, which can be seen as pressure (although pressure is not really measured). The first version of TudorMachine was made with the help of the Apple MagicPad, an external trackpad that works identically to any regular MacBook trackpad. Using the fingerpinger external object for Max/MSP, data about the fingers on the trackpad could be received and sent into Max/MSP. The following parameters were processed to control the TudorMachine sound engine:

Figure 6 MagicPad mapping


Triggering Events

Each finger that touches the trackpad triggers a voice. The trackpad works as a two-dimensional map, where the X axis is assigned to the modulator frequency and the Y axis is assigned to the carrier frequency (fig 6). The Z axis controls the amount of artificial reverberation added to each voice.

Sustain and hold

One of the most useful features of trackpads is their support for finger gestures. An especially interesting one is the page-scrolling gesture in Apple OS X. This gesture simulates an effect of inertial movement where a page appears to continue scrolling for less than a second after the trackpad is released, giving the user a feeling of natural movement. This method was used to simulate the sound decay of a voice. The Y-axis delta time was assigned to the release time of each voice: if you make a quick sweep up or down, the sound will sustain for a while, and the faster the sweep, the longer the sound decay will be. The X axis was handled in a similar way, only with a very long release time; it was used as a method of holding sounds in order to create multi-sound layering.
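A possible mapping from swipe speed to release time is sketched below; the scaling constants are invented for illustration, and only the direction of the mapping (the faster the sweep, the longer the decay) comes from the description above.

```python
def swipe_to_release(delta_y, max_release=8.0):
    # Map the speed of a vertical swipe (delta-Y per frame, arbitrary units)
    # to a release time in seconds: faster sweeps give longer decays.
    speed = abs(delta_y)
    return min(max_release, 0.05 + 0.5 * speed)
```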

Problems and conclusion

The most acute problem in using a trackpad or tablet as the main sound controller turned out to be the flatness of the object. Despite its usefulness and power, the lack of a tactile dimension, i.e. a physical dimension, makes this control surface unsuitable for the intended musical and performative needs.

B. Version 2: FSR Matrix Array

An FSR (Force Sensing Resistor) matrix array is a sensor used to achieve tactile multi-touch control. The sensor is composed of small FSR units (force sensors) arranged as a matrix; a touch of a fingertip on the surface applies pressure to several FSR points. By analysing the amount of pressure per point, a dedicated program can recognise the finger location, the amount of pressure and the surface area of the fingertip on the sensor. Using a microcontroller board such as an Arduino, the matrix data can register real 3D gestures just as with a trackpad, but an extra tactile dimension can be added by using a rubbery surface on top of the sensor: pressure sensitivity is achieved by actually pressing down onto the surface, which is then translated into a measure of surface area on the FSR array. This technology is still very new and not used very often; however, there are already some new projects in the field of musical instruments which apply it, e.g. the Linnstrument, the Seaboard, etc. All of them present a new array of possibilities for musicians, but at a very high price. This technology will undoubtedly become more affordable in the following years, but currently it is still inaccessible for the common user. A long and strenuous search led to the discovery of a company named Kitronyx, which develops an Arduino-like board designed to interface with an FSR matrix made by Sensitronics (Fig 7). The board is called the Snowboard.

Figure 7 FSR matrix array sensor, Sensitronics

Unlike the MagicPad, which utilizes an API that sends processed information (as described before) about the status of each finger, the Snowboard sends only the matrix data, without any blob recognition. The information that the user receives is a matrix array of 10 × 16 values, one measurement for each sensor on the array. I have developed a Max/MSP abstraction called 3d (fig 8) that analyzes the matrix data and produces finger recognition data similar to the fingerpinger object.

3d makes use of a computer vision external library named cv.jit for blob recognition. For each finger that is pressed, 3d sends the following parameters:

1. X, Y coordinates of each finger
2. Z for pressure level
3. Finger size/blob size
4. Delta time for the X and Y axes

Figure 8 Kitronyx screenshot

This object was posted on the Kitronyx blog with a very helpful tutorial video. The 3d abstraction was also released as open source code on GitHub, with examples, a description and a demo patch with a simple FM feedback model (as shown in the Kitronyx video).
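The 3d abstraction itself runs in Max/MSP with cv.jit; the following Python sketch only illustrates the kind of analysis it performs on one matrix frame (thresholding, blob labelling, and per-blob centroid, pressure and size), using scipy.ndimage in place of cv.jit.

```python
import numpy as np
from scipy import ndimage

def analyse_fsr_frame(frame, threshold=10):
    # Take one 10x16 FSR matrix frame and return, per detected blob (finger),
    # its centroid (x, y), summed pressure (z) and size in sensor cells.
    mask = frame > threshold
    labels, count = ndimage.label(mask)
    fingers = []
    for lab in range(1, count + 1):
        cells = frame * (labels == lab)
        y, x = ndimage.center_of_mass(cells)
        fingers.append({
            "x": x, "y": y,                    # centroid in sensor coordinates
            "z": float(cells.sum()),           # total pressure of the blob
            "size": int((labels == lab).sum()),
        })
    return fingers

# Example: a synthetic frame with one pressed region of four cells
frame = np.zeros((10, 16))
frame[3:5, 6:8] = 40
print(analyse_fsr_frame(frame))
```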

Mapping

The same mapping method as for the MagicPad was applied, with one essential difference: the Z axis (pressure) was assigned to the amplitude of each voice.

Problems and conclusion

The 3d project had to be stopped due to a technical problem: the size of the matrix array was too small to be used as a practical musical interface. In spite of Kitronyx's intention to release a larger matrix, they have not yet released a new matrix array, on account of financial problems. All in all, more time was dedicated to finding technical solutions than to creating music, while the actual solution wasn't technical but conceptual. There was something in the way of controlling the sound engine which didn't work: the playing technique didn't suit the sound source.

C. Towards a new approach

While trying to overcome these technical problems, I went to listen to a realisation of the fourth version of Rainforest by Nick Collins at the Volkspaleis festival. This version was described as "a collaborative environmental work, spatially mixing the live sounds of suspended sculptures and found objects, with their transformed reflections in an audio system" (Fullemann, 1984). Collins, with his workshop participants, spread resonating objects around the hall of the Zuiderstrandtheater. I walked around the hall listening to the sound environment built by various independent small sound objects, together responsible for a complex sonic experience. The conclusion I drew afterwards was that my design was missing two essential aspects: time and space.

Time is understood as providing the voices with time to resonate, to be heard without manipulating them all the time. Time is needed for the listener to hear the system, to listen to its behavior, to immerse oneself in the sound environment. The active approach to playing the system wasn't in line with the aesthetics I intended to achieve. Thus a more passive way, a reactive method of controlling the sound engine, needed to be invented.

Space: Collins' realisation of Rainforest created a space containing, and bounded by, small resonating objects. In this space, each unit had a different amplitude, a different spectral identity and a different localisation. The model I had made, however, created short instances that were constantly changing; they were panned in the same way and had more or less the same amplitude.

Consequently, these two conclusions led me to find a different interface and a different approach to performing.

D. Version 3, UC 33, from active to reactive

The UC33 (Fig 9) is a MIDI controller designed like a sound mixer. It has 8 faders and 24 knobs, 3 on each virtual channel.


Figure 9 UC33 mapping

The UC33 design changed the emphasis in playing: from an active approach focused on triggering sound events and phrasing them, to a reactive approach concentrated more on shifting and transforming the sound stream. By moving to a reactive controller, the chaotic sound engine could be freed to express itself. Rather than spending effort on creating the sound, the effort could focus on sculpting the sound environment and reacting to it.

4.4 Mapping

The instrument has two main ways of control: global and local. The global control is assigned to all of the voices together, and the local control affects only one specific voice. The global control is used to create big timbre changes and sharp gestures, whereas the local control is used to play small gestures and fine sound tuning. Some of the parameters have both a local and a global control. As in FWWM, a change in one parameter will affect other parameters due to the chaotic behavior of the model, so absolute control is never possible.

A. Global control

Carrier and modulator

The main behavior of a unit is defined by applying a carrier frequency along with a modulator frequency, which together define the spectral range of the unit and its range of change and fluctuation. Fader 7 controls the carrier frequency by means of a distribution function: when, for example, the fader is at 0, the carrier frequencies of each unit will tend towards the low range. This method is used to set an overall behavior of the system, a good starting point for playing, or a way to shift radically from one mode to another.
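The exact distribution function is not reproduced in this thesis; the sketch below assumes a log-scaled center frequency with a random per-voice spread, which only captures the behavior described above: the whole ensemble shifts register with the fader while keeping some internal variety.

```python
import numpy as np

def global_carrier_frequencies(fader, n_voices=6, lo=40.0, hi=4000.0):
    # Map one global fader (0-1) to per-voice carrier frequencies: the fader
    # sets a center of gravity on a logarithmic pitch scale, and each voice is
    # spread randomly around it (up to an octave either way, an assumed value).
    center = lo * (hi / lo) ** fader
    spread = np.random.uniform(0.5, 2.0, n_voices)
    return center * spread

print(global_carrier_frequencies(0.0))   # all voices tend towards the low range
print(global_carrier_frequencies(0.9))   # all voices tend towards the high range
```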

Sigmoid function + waveshaper

Fader 8 is assigned to the z factor of a parametric sigmoid function: the higher the z value, the sharper the function's slope. Fader 9 works in a similar way, but it uses an expander wavetable, transforming the lookup table from linear to exponential (fig 10).

Figure 10

The effect of these changes on the sound is not linear and depends on the mutual relationship between the sigmoid function and the wavetable. When both of the functions are at their sharpest slope, the sound is very noisy; when they are at their initial stage, the unit's behavior is defined only by the carrier and modulator frequencies. The task of this control parameter is to transform the sound from a pitched sound into a harsh, noisy sound. Likewise, it can be used to encourage more active behavior of the system.
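Since the original formula is not reproduced here, the sketch below uses one common parametric sigmoid (tanh-based) and a power-law expander as stand-ins; only the qualitative behavior, a sharper and more nearly binary slope as z rises, is taken from the text.

```python
import numpy as np

def sigmoid_table(z, size=512):
    # A parametric sigmoid wavetable (an assumption, not the patch's formula):
    # tanh(z * x) rescaled to span -1..1; higher z gives a sharper, more
    # nearly binary shape, approaching a straight ramp as z goes to 0.
    z = max(z, 1e-6)
    x = np.linspace(-1.0, 1.0, size)
    return np.tanh(z * x) / np.tanh(z)

def expander_table(z, size=512):
    # Expander-style waveshaper table: a linear ramp bent towards an
    # exponential curve as z grows, sharpening the envelopes on the carrier.
    x = np.linspace(0.0, 1.0, size)
    return x ** (1.0 + z)
```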

B. Local control

Carrier and modulator

The local method of controlling the carrier and modulator frequencies utilizes the Magic Pad in the same manner as described in the MagicPad section. Each finger is assigned to a voice and receives frequencies depending on its position on the trackpad. Another way of assigning a frequency to a voice is by clicking the voice number on the number pad (right corner of the UC33) and scrolling the trackpad. This method may be used in two ways:

1. Creating a more complex voice environment by giving each voice different oscillation values.
2. Actively playing a specific voice's behavior by scrolling the trackpad, achieving a gliding effect of pitch and fluctuation.

Mixing the voices
Faders 1-6 control the gain of each unit, allowing the player to create an environmental mix of the voices. This can also be used to build up the environment and/or to focus on certain interesting voices.

Fine-tuning a voice
The local controls allow the player to fine-tune each voice by means of the following parameters:
Delay time – Changing the delay time of a voice usually changes its harmonic pitch, which can be used for playing the pitch live or for different tunings. Half of the range of each delay knob is assigned to the delay time; the other half controls the rate of change of a random signal generator.
Wavetable – The carrier wavetable of each voice can be transformed from a cosine wavetable to a sinc function and further to a noisier sinc function. This transformation controls how stable or noisy the sound will be.
Aux send – The amount of a voice sent to the next voice in the chain (1 to 2, 2 to 3, etc.). The aux send is pre-fader, which means that a voice can affect another voice even when it is not being heard. The aux signal is taken from the modulator part of the patch in order to achieve more complex rhythmical patterns.
Reverb – Each voice has a separate reverb unit, and the assigned knob controls its dry/wet parameter. The reverb helps the player achieve longer and more constant pitched sounds; it also gives the sounds a metallic timbre.
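Two of these local mappings can be sketched in code: the split of each delay knob into a delay-time half and a random-rate half, and the pre-fader aux send that lets one voice excite the next even when its own fader is down. The ranges and field names are assumptions; only the knob-splitting and pre-fader logic follow the description above.

#include <stdio.h>

typedef struct {
    float delay_time_ms;
    float random_rate_hz;
    float gain;        /* the voice's own fader */
    float aux_send;    /* amount fed to the next voice in the chain */
} Voice;

/* Split one 0..127 knob into two halves (ranges are assumed). */
static void set_delay_knob(Voice *v, int cc) {
    if (cc < 64)
        v->delay_time_ms = 1.0f + (cc / 63.0f) * 99.0f;          /* 1..100 ms */
    else
        v->random_rate_hz = 0.1f + ((cc - 64) / 63.0f) * 19.9f;  /* 0.1..20 Hz */
}

/* Pre-fader aux: the send is taken before the gain fader is applied. */
static float aux_to_next(const Voice *v, float sample) {
    return sample * v->aux_send;   /* note: v->gain is deliberately not used */
}

int main(void) {
    Voice v = { .gain = 0.0f, .aux_send = 0.6f };
    set_delay_knob(&v, 100);       /* upper half of the knob: random rate */
    printf("random rate %.2f Hz, aux out %.2f (even with gain 0)\n",
           v.random_rate_hz, aux_to_next(&v, 1.0f));
    return 0;
}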

Unlike FWWM, this instrument is not yet a “frozen” project; the exploration of the best method of controlling the system has not come to an end. The strategy of having two layers of control, global and local, has nevertheless proven to be a successful method, and it makes it possible to keep researching the possibilities of this instrument in the future.

4.5 Spatialisation

TudorMachine has two spatialisation modes, one for a stereo setup and one for a four-channel setup. The spatialisation is used to create a space, an environment composed of the six voices. The units are placed in a virtual circle in the following manner (the numbers refer to the voice numbers): 1. FL, 2. FC, 3. FR, 4. BR, 5. BC, 6. BL. Apart from the spatial effect, this placement makes it possible for the player to recognize which voice plays what and to differentiate between them. In stereo mode the sounds are panned from left to right.
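For the four-channel mode, this fixed circular placement can be expressed as a constant gain matrix from the six voices to the four speakers. In the sketch below the two centre positions are approximated by splitting a voice equally between its two nearest speakers; the exact gains used in TudorMachine are assumptions.

#include <stdio.h>

#define NUM_VOICES 6
#define NUM_SPEAKERS 4   /* 0 = FL, 1 = FR, 2 = BL, 3 = BR */

static const float placement[NUM_VOICES][NUM_SPEAKERS] = {
    /*  FL    FR    BL    BR  */
    { 1.0f, 0.0f, 0.0f, 0.0f },   /* voice 1: front left   */
    { 0.7f, 0.7f, 0.0f, 0.0f },   /* voice 2: front centre */
    { 0.0f, 1.0f, 0.0f, 0.0f },   /* voice 3: front right  */
    { 0.0f, 0.0f, 0.0f, 1.0f },   /* voice 4: back right   */
    { 0.0f, 0.0f, 0.7f, 0.7f },   /* voice 5: back centre  */
    { 0.0f, 0.0f, 1.0f, 0.0f },   /* voice 6: back left    */
};

/* Mix one frame of per-voice samples down to the four speaker outputs. */
static void spatialise(const float in[NUM_VOICES], float out[NUM_SPEAKERS]) {
    for (int s = 0; s < NUM_SPEAKERS; s++) {
        out[s] = 0.0f;
        for (int v = 0; v < NUM_VOICES; v++)
            out[s] += in[v] * placement[v][s];
    }
}

int main(void) {
    float in[NUM_VOICES] = { 1, 1, 1, 1, 1, 1 };
    float out[NUM_SPEAKERS];
    spatialise(in, out);
    printf("FL %.1f  FR %.1f  BL %.1f  BR %.1f\n", out[0], out[1], out[2], out[3]);
    return 0;
}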

4.6 Playing TudorMachine

“I think I find an equal relationship with no-input mixing board, which I didn't see with the guitar. When I played the guitar, I had to play the guitar. But with the mixing board, the machine would play me and the music would play the other two, and I would do something or maybe nothing. I would think some people would play the guitar and create their music with this kind of attitude, but for me, no-input mixing board gives me this equal relationship between the music, including the space, the instrument, and me.” Toshimaru Nakamura about playing the No-Input Mixing Board (NIMB) (Meyer, 2003)

Reactive instruments (such as NIMB and TudorMachine) create a new relationship between the player and their instrument: the machine becomes a collaborator with whom the player creates music. It is therefore inevitable to ask what the player's role will be after such a change in the relationship. John Bischoff describes this essential difference between playing an active instrument (like FWWM) and a reactive one: “There is something unsettling in the move from acoustics to electronics about letting loose, but in exchange you step back one little half step and become an influencer rather than an initiator.” (Khan, 2004, p. 77) The player becomes an influencer; he or she no longer creates the sounds as an initiator, but rather reacts to the machine, shifts it, tweaks it and explores it.

For performers with a more traditional background who are used to active playing (including myself), this approach may seem very new; I am still learning how to play TudorMachine, trying to find a balance between the nature of the instrument and my traditional playing approach. With the mapping architecture of TudorMachine, with its two layers of control, global and local, the player can achieve a certain balance between active and reactive methods of playing. As described in the mapping chapter, the global layer allows the player to make sharper gestures and can be used to articulate sound events, while the local layer is used for slow tuning and for reacting to the system's behavior.

When more musicians are involved the picture becomes more complicated. As far as my limited experience with playing TudorMachine permits, I must, unfortunately, claim that it is not a versatile instrument. By versatile I mean the possibility of performing different kinds of compositions or improvisation sessions with varied instrumentalists. Since time is an important aspect of TudorMachine's nature (playing TudorMachine means dealing with the long timescale of a continuous event), reacting on the fly and being flexible is hardly possible, and spontaneous playing turns out to be very problematic. Improvising is, naturally, possible, but the fellow players need to understand the time aspect of playing the machine. How the other players react to such playing is left to their choice; in any case, discussion before a performance is required.

4.7 Two examples of TudorMachine performances

I would like to describe two examples of performances I did with TudorMachine. In each one I used a slightly different approach in order to experiment with the new playing possibilities which TudorMachine has to offer.

Figure 11 Solo performance with TudorMachine at Villa K festival 25.4.2015

A. Fdbk Ptrns
For live electronics and percussion, performed with the percussionist Mei-Yi Lee at Studio Loos, The Hague, 9/4/2015. The piece will be performed again at my final exam concert.

The intention of this composition was to reveal the sonic potential of TudorMachine in a musical way and to compose a path between the different behaviors of the system. I used the percussion part to add a parallel layer that could reflect on and emphasise the characteristics of the system. At the centre of my interest was a dialogue that would drive me to play and find new sounds. During rehearsals both of us researched sounds and playing techniques that could fit the sound of the instrument or act as a counterpoint to it.

The composition is divided into two movements. The first movement was based on the relationship between TudorMachine's pitched material and the percussion. The structure of this part moved from a very soft beginning, with a sparse texture composed of short high-pitched sounds, towards a dense and noisy texture, finishing with a crescendo. Mei-Yi created pitched sounds by scratching percussive objects with metal sticks, gradually playing more and more sounds of higher frequency.

The main idea of the second movement was that the percussion would create a multi-layered sonic environment based on rhythmical patterns running parallel to the electronic ones. This environment was created by placing tiny electronic bugs called HexBugs on the percussion instruments (Fig. 12). A HexBug is a small toy designed like a bug with a tiny motor inside; the motor creates a pseudo-Brownian movement of the bug across a surface, producing chaotic sound patterns.

Figure 12 Bug in a drum

Both parts used the same method: an improvisation based on certain rules and a directionality. The rules specified which parameters to emphasise (pitch, patterns, rhythm) and which techniques to use to achieve these points of emphasis. The directionality consisted of a starting point, a middle point and an end point, concerning the density of events, the amplitude and the pitch-to-noise relationship. This method leaves a large space for improvisation and at the same time creates a loose structure for the composition.

In the next performance I aim to add a new movement which will deal with more defined rhythms. The internal structure of the movements should be defined more closely, with a more complex flow than simply soft to loud or sparse to dense.

B. Improvisation with Nikolaj Kynde
Amir Bolzman – live electronics, trumpet; Nikolaj Kynde – trombone. Performed at Grondwater Festival, The Hague, 22/4/2015.

The main concept of this improvised session was to examine TudorMachine's potential to be used as a sound environment, a sound platform that Nikolaj and I could improvise on or with, and with each other. For this performance I added my foot pedals in order to articulate sound events. As with FWWM, the foot pedals were assigned to the Q factor of the resonant filter, allowing me to change the character of the sound from pitched to noisier.

In our rehearsals Nikolaj and I experimented with pitches that would fit TudorMachine's sounds. We chose to improvise on a pentatonic scale, aiming to create different intervals between the trumpet, the trombone and TudorMachine.

One small remark should be made before I continue describing the performance. Originally I was asked to perform a solo piece; nevertheless, I decided to add Nikolaj to the session without telling the organizers. We kept it a secret and used the element of surprise during the last part of the performance, as described below.

The improvisation was divided into three parts. The first part was a solo trumpet section. Opening the session with the trumpet was a way for me to attract the listeners' attention; I also used it as a physical warm-up and as a method of clearing my mind for the rest of the session. The trumpet played repetitive pitches, focusing on varying the timbre of the sound and trying to imitate sounds that would later be produced by TudorMachine.

The second part was mainly composed of TudorMachine sounds, with a short overlap between the trumpet introduction and the electronic sounds. I chose to play noisier and more aggressive sounds, with the intention of slowly stabilising the system into more pitched sounds and preparing the sound environment for the third part.

The third part introduced the return of the trumpet, playing in dialogue with the electronic pitched sounds of TudorMachine. Nikolaj and I had agreed that the return of the trumpet would be his cue to reveal himself and start playing. Because Nikolaj was hiding in a different room, the audience at first thought that a delay effect was multiplying my sounds, only slowly noticing Nikolaj's presence. Our improvisation started with long sounds, trying to overlap the events between us. We then directed the improvisation towards more hectic playing, using the foot pedal as a tool to create dramatic sound events. At the end the two of us were jumping on it, leading the improvisation to a closing crescendo.

4.8 Conclusion

TudorMachine is an ongoing project: the research into its sonic possibilities and the development of playing and performative methods are still in progress and will, hopefully, lead to using it to its fullest potential.

I consider the two performances described above a good starting point for future performances. The first introduced a more “classic” approach to playing, one that is closer to Tudor's and Bischoff's playing methods. The player's role, according to Tudor, is to reveal the system's sonic possibilities and its behavior, while Bischoff puts the emphasis on the player as an influencer who shifts and routes the machine's sound stream, collaborating with it. This process of revealing the system can happen in a solo performance or in collaboration with another instrumentalist, as in Fdbk Ptrns.

The approach of the second performance was to use TudorMachine as a platform on top of which, or together with which, improvisation could take place. Since there was no longer a need to create or hold the sound, there was a chance to play another layer of music in dialogue with the sounds of TudorMachine. This layer could be any sound source – in this case the trumpet and trombone, but it could be a voice, tape or any other layer that uses TudorMachine's sounds as a platform for improvisation.

RawTouch - Conductive Digital Instrument

This chapter is dedicated to the presentation of a new instrument I have been working on for the past three months. Although it is far from finished, the concept behind it could inform my future approach to instrument design.

Figure 13 RawTouch future design

RawTouch (Fig. 13) is a digital instrument that suggests an interesting fusion between analog control and digital sound. It is portable, self-powered and modular, with a built-in speaker. RawTouch has two main inspirations: Michel Waisvisz's CrackleBox and Paul Berg's PILE synthesis, mostly with regard to the sound production methods they used.

The interface of the instrument is inspired by the famous CrackleBox of Michel Waisvisz. He writes as follows about the experience that led him to create it: “Touched electronics sounded rougher and sort of rebellious against the clean and high-tech quality of the electronic music from the fifties and early sixties. At some point I started playing by placing my fingers on the print board of a damaged electronic organ. By patching the different parts of the circuit through my – conductive – fingers and hands I became the thinking [wet] part of an electronic circuit and I started seeing my skin as a patchable cable, potentiometer and condensator.” (Waisvisz, 2004)

More or less concurrently, a group of musicians was trying to find a new aesthetic in the digital domain of computer synthesis. Their methods were later described as “non-standard synthesis” (Holtzman, 1979, p. 53). In this group we find G. M. Koenig's Sound Synthesis Program (SSP), Herbert Brün's Sawdust program and, of course, Paul Berg's PILE. Here is what Berg wrote about his intentions with PILE: “A myriad of sound synthesis programs exist based on models related to instrumental music or to the design of a traditional analog electronic studio... They all require the use of a computer because of the magnitude of the task. For many, this is perhaps the only reason why they require the use of a computer. It is a valid reason, but it is certainly not the most interesting one. More interesting ones are: to hear that which without the computer could not be heard; to think that which without the computer would not be thought; to learn that which without the computer would not be learned.” (Berg, 1979, p. 30)

We can see a strong connection between these two approaches (Waisvisz's and Berg's), even though their methods of generating sound are completely different. The first cherishes an intuitive approach of learning and creating by touch (the CrackleBox) and intervention (circuit bending); the second works through equations and code. Nevertheless, both seek a new method of creating electronic sounds that is not derived from the established electronic methods of their time. Holtzman, in his article “An automated digital sound synthesis instrument”, describes this difference between the “traditional” and the “non-standard” approach: “first, the noises this technique tends to generate differ greatly from those of the traditional instrumental repertoire and even from much electronic music; second, in this technique, sound is specified in terms of basic digital processes rather than by the rules of acoustics or by traditional concepts of frequency, pitch, overtone structure…” (Holtzman, 1979, p. 53)

The two main ideas that govern this project are: 1. to reduce the gap between the control interface and the digital sound engine; 2. to increase the involvement of the control interface in the sound-creating procedure.

FWWM and TudorMachine used models inspired by the analog domain, using filters, delay lines and a modular approach. The control over these models was based on a metaphor: the player moves a knob and controls the Q factor of a filter, which causes a change in the sound. This is a metaphor, a layer over what is actually happening beneath the software shield, since what the player is actually doing is manipulating a collection of binary operations, processing a flow of numbers that is turned into machine code.

RawTouch, in contrast, seeks to avoid the metaphor and tries to “work” directly with the numbers, similar to Berg's approach in the article about PILE.

I am currently working on a simple model that connects conductive touch points to an Arduino microcontroller board. The Arduino itself works as the sound engine. This is done using ideas from Bytebeat, a musical microgenre generally credited to Ville-Matias Heikkilä (Heikkila, 2011). A Bytebeat composition is a one-line fragment of C code that produces an audio output, using bit-shift and bitwise logical operators to create waveforms. A simple Bytebeat composition would look like this:

main(t){for(t=0;;t++)putchar(t*(((t>>12)+(t>>8))&(63&(t>>4))));}

This is a simple loop that sends a stream of PCM values directly to the DAC or, in the Arduino's case, to a PWM output. Sound-wise, this function produces a short, iterative sequence of pitches with a very rough 8-bit sound quality.
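Unpacked into ordinary C, the same one-liner looks like the sketch below: each loop iteration emits one unsigned 8-bit sample, which is the PCM stream sent to the DAC mentioned above. On Linux it can be heard with something like ./bytebeat | aplay -r 8000 -f U8. How the conductive touch points of RawTouch will eventually feed into such a formula is still an open question; replacing the constants with readings from the touch points is one possibility, not a finished design.

#include <stdio.h>

int main(void) {
    for (unsigned long t = 0; ; t++) {
        /* the Bytebeat formula quoted above, one sample per step */
        unsigned char sample =
            (unsigned char)(t * (((t >> 12) + (t >> 8)) & (63 & (t >> 4))));
        putchar(sample);
    }
}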

Bytebeat is a very reduced form of non-standard synthesis; nevertheless, I consider it a good starting point for my RawTouch experiment. In the future I want to research PILE and other non-standard synthesis methods further, in order to embed a similar model in my instrument.

Figure 14 RawTouch test model

Conclusion and Future Work

In summary, this research into instrument design for live electronics was very interesting and insightful; it resulted in two wonderful tools for live performance, FWWM and TudorMachine. Both can be used in different live performance situations, and each has its own advantages and disadvantages. FWWM presents a very expressive method of playing with a great deal of flexibility; it suited my need at the time, as an instrumentalist with a traditional background, for a physical method of playing. TudorMachine presents a more complex sonic experience and a more reactive mode of interaction. The instrument, and the path I took to create it, taught me a different perspective on live performance, which I am still learning. My main desire for the near future is to continue developing performance capabilities with these instruments and to enrich my own idiom for improvisation and performance.

In my next performances I want to examine more thoroughly the idea of playing two different musical layers. Specifically, I would like to explore the possibility of using TudorMachine as a platform that allows me to play another, active musical layer on top of it, such as the trumpet or my voice. I believe this method could enable me to create a large variety of compositions based on the relationship between the two layers. As a former piano player used to playing two kinds of layers when performing (harmony and melody), I find the possibility of playing a background layer and a foreground layer while performing with electronics quite powerful.

On the technical side, I want to continue my work on RawTouch and realise the ideas presented in the previous chapter. I also want to continue my work on TudorMachine. My goal is to emancipate TudorMachine from the laptop and transform it into a self-contained instrument: an instrument that contains its own DSP unit and DAC inside the machine itself, without needing to carry around a laptop to produce sound. This direction is a natural continuation of the instrument design guidelines presented at the beginning of this thesis. It will lead to a much more practical and mobile instrument, better stability (if programmed well) and, overall, more unity between the player, the interface and the sound engine. It will be implemented using a Raspberry Pi or a similar single-board computer. From a more personal perspective, I hope this new technology will emancipate computer musicians like me from the growing influence of the computer screen.

References

Bailey, D. (1992). Improvisation: Its Nature and Practice in Music. Da Capo Press, Inc.
Berg, P. (1979). PILE: A Language for Sound Synthesis. Computer Music Journal, Vol. 3, No. 1 (Mar. 1979), pp. 30-41. The MIT Press. Retrieved from: http://www.jstor.org/stable/3679754
Fullemann, D. (1984). An Interview with David Tudor. Retrieved from: http://davidtudor.org/Articles/fullemann.html#rainforest4
Heikkila, V. M. (2011). Algorithmic symphonies from one line of code -- how and why? countercomplex: bitwise creations in a pre-apocalyptic world. Retrieved from: http://countercomplex.blogspot.nl/2011/10/algorithmic-symphonies-from-one-line-of.html
Holtzman, S. R. (1979). An Automated Digital Sound Synthesis Instrument. Computer Music Journal, Vol. 3, No. 2 (Jun. 1979), pp. 53-61. The MIT Press.
Jordà, S. (2008). Interpretació amb Mitjans Electrònics, Especialitat de Sonologia (presentation, IUA, Music Technology Group, Barcelona).
Khan, D. (2004). A Musical Technography of John Bischoff. Leonardo Music Journal, Vol. 14, Composers inside Electronics: Music after David Tudor (2004), pp. 74-79. The MIT Press. Retrieved from: http://www.jstor.org/stable/1513509
Meyer, W. (2003). Interview with Toshimaru Nakamura. Perfect Sound Forever Magazine. Retrieved from: http://www.furious.com/perfect/toshimarunakamura.html
Norman, Ryan, Waisvisz (1998). Touchstone. STEIM Touch manifestation. http://www.crackle.org/touch.htm
Ostertag, B. [cycling74.com] (2012). An Interview With Bob Ostertag. Retrieved from: https://cycling74.com/2005/09/13/an-interview-with-bob-ostertag/
Riad, Y. (2013). Towards a design of a versatile instrument for live electronic music (Bachelor's thesis), Institute of Sonology. Retrieved from: http://194.171.57.139/sonology/NL/thesis-pdf/YounesRiad_SonologieBA_Scriptie.pdf
Ryan, J. As If By Magic: Some Remarks on Musical Instrument Design at STEIM. Retrieved from: http://jr.home.xs4all.nl/MusicInstDesign.htm
Schonfeld, V. (1972). "From Piano to Electronics." Music and Musicians 20 (August 1972), pp. 24-26. As quoted in: http://www.jstor.org/stable/1513497
Waisvisz, M. (2004). Crackle history. Retrieved from: http://www.crackle.org/CrackleBox.htm
Zadel, M. (2006). A Software System for Laptop Performance and Improvisation (Master's thesis, McGill University). Retrieved from: http://www.music.mcgill.ca/~zadel/research/publications/zadelmastersthesis.pdf

Appendix 1: Contents of the accompanying CD

1. ABRA Vocal Ensemble and Amir Bolzman, an excerpt from a live improvised session at the Jerusalem Season of Culture festival, Aug 2013, Jerusalem - 09:56

2. Kapara (Ma Nishama?){Sweety, (What's up?)} - 05:24

Appendix 2: Etude for QuNeo and Noise, Notation and Performing Techniques


Appendix 3: List of selected performances

22.04.15 Grondwater Festival @ De Vinger, Den Haag – Duo performance with Nikolaj Kynde, trumpet, trombone and electronics
09.04.15 ephemere @ Studio Loos, Den Haag – Fdbk Ptrns, composition for live electronics and percussion, together with Mei-Yi Lee
28.11.14 Kernel Panic @ De Vinger, Den Haag – Solo performance
17.11.14 Oorsprong Curators Series no. 18 @ Amsterdam – Solo performance
14.11.14 ETHER SNUIVEN @ Worm – Collaboration with Dutch sound artist Dewi De Vree
09/10.10.14 Landen Festival @ Amsterdam – Performing with Biovon Van Tube
03.09.14 Rain forest in Jerusalem @ Hansen House for art, Jerusalem – Audio/video performance
19.08.14 Sananess quartet @ HaMizkaka, Jerusalem – Free jazz electronic improvisation
11.08.14 Abra Ensemble and Amir Bolzman @ Jerusalem Season of Culture festival
16.07.14 Sananess quartet @ Yellow Submarine club, Jerusalem – Free jazz electronic improvisation
01/09/21.04.14 Sonology Electroacoustic Ensemble plays Nono Interventions, Holland Festival, Rijksmuseum Amsterdam
01.03.14 All wild animals @ Stichting Centrum Den Haag – Performing with Zvov Trio (viola, cello and computer)
06.02.14 Sonology discussion concert, Den Haag – With the Sonology Electroacoustic Ensemble
09.02.14 'Frames of Thinking' independent exhibition @ Poortgebouw, Rotterdam – With Kacper Zimerman
