Developing Interactive Electronic Systems for Improvised Music

Jason Alder
Advisor: Jos Herfs
ArtEZ hogeschool voor de kunsten, 2012

Contents

INTRODUCTION
1. EVOLUTION OF ELECTRONICS IN MUSIC
2. IMPROVISATION
3. ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING
4. ARCHITECTURE
   A. CLASSIFICATION PARADIGMS
   B. LISTENER
   C. ANALYZER
   D. COMPOSER
5. CONCLUSION
REFERENCES

Introduction

This paper discusses how one can develop an interactive electronics system for improvisation: how such a system differs from one designed for composed music, and what elements are necessary for it to “listen, analyze, and respond” musically. It looks at the nature of improvisation and of intelligence, and draws on research in the cognition of musical improvisation and in artificial intelligence to show how an interactive system must be developed so that it, too, maintains an improvisational nature. Previously developed systems are examined to see how their design concepts can serve as a platform from which to build, and what can be changed or improved, through an analysis of various components of the system I am currently designing, made especially for non-idiomatic improvisation.

The use of electronics with acoustic instruments generally grows out of the goal of opening up possibilities and drawing on a new sonic palette. There is a wealth of approaches to how the electronics are implemented: a fixed performance, as in tape-playback pieces; effects that manipulate the acoustic sound, like guitar pedals; or pre-recorded or sequenced material triggered at certain moments. A human usually controls these electronics, whether the performer or another person behind a computer or other medium, but the possibility of the electronics controlling themselves brings some interesting ideas to the improvisation world. With the advances in technology and computer science, it is possible to create an interactive music system that will “interpret a live performance to affect music generated or modified by computers” (Winkler, 1998). With software such as Max/MSP, developing a real-time interactive system that “listens” to and “analyzes” the playing of an improviser and “responds” in a musical way, making its own “choices”, is closer to fact than the science-fiction imagery it may impart.

1. Evolution of Electronics in Music

An initial question some may have when considering improvisation with a computer is, “Why?” More specifically, “Why improvise with a computer when you could improvise with other humans?”

The use of electronics in music is not an entirely new concept. The Theremin, developed in 1919, is one of the earliest electronic instruments.[1] Utilizing two antennae, one for frequency and the other for amplitude, it produces music through pitches created with oscillators. The instrument is played by varying the distance of one’s hands from each of the antennae: moving the right hand towards and away from the antenna connected to the frequency changes the sounding pitch, while the other hand does the same with respect to the amplitude antenna to change the volume (Rowe, 1993). Throughout the 20th century, more and more instruments utilizing electric current were developed, for example monophonic keyboard instruments like the Sphärophone (1927), the Dynaphone (1927-8), and the Ondes Martenot (1928).

[1] For an explanation and demonstration of Theremin playing, see http://www.youtube.com/watch?v=cd4jvtAr8JM
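As a rough illustration of the control principle just described — hand-to-antenna distance shaping an oscillator’s frequency and amplitude — the short Python sketch below maps two hypothetical hand distances onto a sine tone. The distance ranges and response curves are invented for illustration and are not a model of the instrument’s actual heterodyning circuitry.

    import numpy as np

    SAMPLE_RATE = 44100

    def theremin_like_tone(pitch_hand_dist, volume_hand_dist, duration=0.5):
        """Toy theremin-style control mapping (illustrative values only).

        Distances are hand-to-antenna distances in metres, clipped to an
        arbitrary playing range of 0.05 m (very close) to 0.6 m (far away).
        """
        d_pitch = np.clip(pitch_hand_dist, 0.05, 0.6)
        d_vol = np.clip(volume_hand_dist, 0.05, 0.6)
        # Closer to the pitch antenna -> higher frequency (exponential curve,
        # roughly 200 Hz when far away, up to 2000 Hz when very close).
        freq = 200.0 * (2000.0 / 200.0) ** ((0.6 - d_pitch) / 0.55)
        # Closer to the volume antenna -> quieter, as on the instrument.
        amp = (d_vol - 0.05) / 0.55
        t = np.arange(int(SAMPLE_RATE * duration)) / SAMPLE_RATE
        return amp * np.sin(2 * np.pi * freq * t)

    tone = theremin_like_tone(pitch_hand_dist=0.15, volume_hand_dist=0.5)
    print(f"{len(tone)} samples, peak amplitude {tone.max():.2f}")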
These first attempts at electronic instruments were often modeled to provide the characteristics of acoustic instruments. Polyphonic inventions such as the Givelet (1929) and the Hammond Organ (1935) became more commercially successful as replacements for pipe organs, although the distinctive sound of the Hammond also attracted those wanting to experiment with its sonic possibilities beyond the traditional manner (Manning, 2004). As has been the case throughout the development of music, new technology opens doors and minds to previously unexplored musical territory. Chopin and Liszt were inspired “by the huge dramatic sound of a new piano design. The brilliance and loudness of the thicker strings was made possible by the development of the one-piece cast-iron frame around 1825” (Winkler, 1998). In late-1940s Paris, Pierre Schaeffer was making Musique Concrète with the new recording technology available by way of the phonograph and magnetic tape, and “the invention of the guitar pickup in the 1930s was central to the later development of rock and roll. So it makes sense today, as digital technology provides new sounds and performance capabilities, that old instruments are evolving and new instruments are being built to fully realize this new potential” (Winkler, 1998).

Balilla Pratella, an Italian futurist, published his Manifesto of Futurist Musicians in 1910, calling for “the rejection of traditional musical principles and method of teaching and the substitution of free expression, to be inspired by nature in all its manifestations”. In his Technical Manifesto of Futurist Music (1911) he added that composers should “master all expressive technical and dynamic elements of instrumentation and regard the orchestra as a sonorous universe in a state of constant mobility, integrated by an effective fusion of all its constituent parts”, and that their work should reflect “all forces of nature tamed by man through his continued scientific discoveries, […] the musical soul of crowds, of great industrial plants, of trains, of transatlantic liners, of armored warships, of automobiles, of airplanes” (Manning, 2004). In response, Luigi Russolo published his manifesto The Art of Noises:

“Musical sound is too limited in qualitative variety of timbre. The most complicated of orchestras reduce themselves to four or five classes of instruments differing in timbre: instruments played with the bow, plucked instruments, brass-winds, wood-winds and percussion instruments… We must break out of this narrow circle of pure musical sounds and conquer the infinite variety of noise sounds.” (Russolo, 1913)

John Cage’s interest in improvisation and indeterminacy was an influence on the composers of the sixties who first began experimenting with electronic music in live situations. Gordon Mumma’s Hornpipe (1967), “an interactive live-electronic work for solo hornist, cybersonic console, and a performance space,” used microphones to capture and analyze the performance of the solo horn player, as well as the resonance and acoustic properties of the performance space. The horn player is free to choose pitches, which in turn affect the electronics in the “cybersonic console”.
The electronic processing emitted from the speakers then changes the acoustic resonance of the space, which is re-processed by the electronics, thus creating an “interactive loop” (Cope, 1977). Morton Subotnick worked with electrical engineer Donald Buchla to create the multimedia opera Ascent Into Air (1983), with “interactive computer processing of live instruments and computer-generated music, all under the control of two cellists who are part of a small ensemble of musicians on stage” (Winkler, 1998). Subotnick later worked with Mark Coniglio to create Hungers (1986), a staged piece in which electronic music and video were controlled by the musicians. Winkler comments on the “element of magic” in live interactive music, where the computer responds “invisibly” to the performer, and on the heightened drama of observing the impact that the clearly defined roles of computer and performer have on one another. He continues: “since the virtue of the computer is that it can do things human performers cannot do, it is essential to break free from the limitations of traditional models and develop new forms that take advantage of the computer’s capabilities” (Winkler, 1998).

The role of electronics in music is that of innovation. The aural possibilities, and a computer’s ability to perform actions that humans cannot, create a world of options not previously available. Utilizing these options fulfills Russolo’s futurist vision, and using these tools for improvisation expands the potential output of an electronics system. Allowing artificial indeterminism dissipates human constraints and opens doors to otherwise unimaginable results.

2. Improvisation

The question of how one makes a computer capable of improvising is one of the crucial elements in developing an interactive improvisational system. As a computer is not self-aware, how can it make “choices” and respond in a musical manner? To address this issue, I looked to the nature of improvisation. What is actually happening when one improvises? What is the improviser thinking about in order to play the “correct” notes, such that the result sounds like music rather than a random collection of pitches or sounds?

Some may have a notion that improvisation is just a free-for-all, where the player can do anything they wish, but this is clearly not the case. If one were to listen to an accomplished jazz pianist play a solo, and an accomplished classical pianist play a cadenza, each would likely make their improvisation sound easy, effortless, and flowing in its style. But if the roles were reversed, and the jazz pianist played a Mozart cadenza while the classical pianist soloed over a jazz standard, there would likely be a clear difference in how they sound. The music theorist Leonard Meyer defines style as: “a replication of patterning, whether in human behavior or in the artifacts produced by human behavior, that results from a series of choices made within some set of constraints… [which] he has learned to use but does not himself create… Rather they are learned and adopted as part of the historical/cultural circumstances of individuals or group” (Meyer, 1989).
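Meyer’s formulation — choices made within a set of learned constraints — also hints at how a computer, which wants nothing, could still make musically plausible “choices”: by selecting only from material a style permits, biased by how strongly that style favours each option. The minimal Python sketch below illustrates the idea; the pitch set and weights are invented for illustration and are not components of the system described in later chapters.

    import random

    # Hypothetical stylistic constraints: the pitches a learned style permits
    # (here C minor pentatonic, as MIDI note numbers) and weights standing in
    # for how strongly the style favours each of them. Both are invented.
    ALLOWED_PITCHES = [60, 63, 65, 67, 70]   # C, Eb, F, G, Bb
    STYLE_WEIGHTS = [5, 3, 2, 4, 1]

    def choose_phrase(length=8, seed=None):
        """Make 'choices within constraints': every note is freely chosen,
        but only from the material the style allows, with the style's own
        preferences biasing the outcome (cf. Meyer's definition above)."""
        rng = random.Random(seed)
        return rng.choices(ALLOWED_PITCHES, weights=STYLE_WEIGHTS, k=length)

    print(choose_phrase(seed=42))   # a reproducible eight-note "phrase"

A fuller system would, of course, need to derive such constraints and preferences from what it hears and learns rather than from a fixed table.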