
MUMT 303 New Media Production II Charalampos Saitis Winter 2010

Computer-Assisted Composition: A short historical review

Computer-assisted composition is considered among the major musical developments that characterized the twentieth century. The quest for ‘new music’ started with Erik Satie and the early electronic instruments (Telharmonium, Theremin), explored the use of electricity, moved into magnetic tape recording (Stockhausen, Varèse, Cage), and soon arrived at the computer era. Computers, science, and technology promised new perspectives on sound, music, and composition. In this context computer-assisted composition soon became a creative challenge, if not a necessity. After all, composers were the first artists to make substantive use of computers.

The first traces of computer-assisted composition are found at Bell Labs, in the USA, in the late 1950s. It was Max Matthews, an engineer there, who saw the possibilities of the computer for sound generation while experimenting on the digital transmission of telephone calls. In 1957 he built the first ever computer programme to create sounds, named Music I. Of course, this first attempt had many problems: for example, it was monophonic and had no attack or decay. Matthews went on improving the programme, introducing a series of programmes named Music II, Music III, and so on up to Music V. The idea of unit generators that could be put together to form bigger blocks was introduced in Music III. Meanwhile, Lejaren Hiller was creating the first ever computer-composed musical work, the Illiac Suite for String Quartet. This also marked a first attempt towards algorithmic composition: a binary code was processed on the Illiac computer at the University of Illinois, producing the very first algorithmic computer composition. Later Hiller collaborated with John Cage to create HPSCHD, a programme able to define equal-temperament scales, choose pitches and durations in them, and finally produce sounds. Back at Bell Labs, James Tenney worked on a composing programme, PLF 2, which he used to compose his Four Studies in 1962. The software was capable of compositional decisions, a fundamental concept in computer-assisted and algorithmic composition. At the same time Music IV was completed. It was written for an IBM 7094, one of the first computers to use transistors. Hubert Howe and Godfrey Winham at Princeton University improved Music IV, calling their new version Music IVB. When IBM introduced its new computers in 1965, new challenges for computer music appeared. Music I-IV had been written in low-level, machine-specific assembly language, hence they would not run on other computers. In 1967 the Princeton group presented a version of Music IVB written in FORTRAN, the Music 4BF.
FORTRAN was a major high-level language at the time. Meanwhile, Max Matthews, Jean-Claude Risset, Richard Moore and Joan Miller were developing the FORTRAN-based Music V at Bell Labs. Music V was the culmination of the previous programming environments. It included advanced software-defined unit generators and played notes, discrete sound events carrying control information for the unit generators. The notion of the score, previously presented in Music III, was further developed here, including note lists and function tables. Music V marked the end of the first breakthrough of computer music. It was obvious that new ideas had to be pursued.
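The Music-N model described above — unit generators wired into instruments, driven by a score of note events — can be illustrated with a minimal sketch. This is my own toy illustration, not the historical code; all names, the sample rate, and the score format are invented for clarity.

```python
import math

SR = 8000  # sample rate in Hz (arbitrary choice for this sketch)

def oscillator(freq, dur):
    """Sine oscillator unit generator: dur seconds of a sine wave."""
    n = int(SR * dur)
    return [math.sin(2 * math.pi * freq * i / SR) for i in range(n)]

def envelope(samples, attack=0.01, decay=0.05):
    """Linear attack/decay unit generator -- the feature Music I lacked."""
    n = len(samples)
    a, d = int(SR * attack), int(SR * decay)
    out = []
    for i, s in enumerate(samples):
        gain = 1.0
        if i < a:
            gain = i / a            # ramp up during the attack
        elif i >= n - d:
            gain = (n - i) / d      # ramp down during the decay
        out.append(s * gain)
    return out

# A "score" as a note list: (start time, duration, frequency, amplitude)
score = [(0.0, 0.25, 440.0, 0.5), (0.25, 0.25, 660.0, 0.5)]

def render(score):
    """Render the note list by chaining the two unit generators."""
    total = max(start + dur for start, dur, _, _ in score)
    out = [0.0] * int(SR * total)
    for start, dur, freq, amp in score:
        note = envelope(oscillator(freq, dur))
        offset = int(SR * start)
        for i, s in enumerate(note):
            out[offset + i] += amp * s
    return out

audio = render(score)
```

The key design idea carried from Music III onwards is exactly this separation: reusable signal-processing blocks on one side, a declarative note list on the other.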

Hiller was not the first to work on algorithmic composition. Iannis Xenakis, a pioneer of stochastic music, had already formulated new statistical criteria for composition, focussing on the aleatoric directional tendencies of sound movement. Xenakis’ Metastasis for orchestra was premiered in 1955; to compose the piece, Xenakis used stochastic formulas that he worked out by hand. What Hiller showed was the ability of the computer to model such processes. In the late 60s Xenakis went on to develop his automated composition programme, the Stochastic Music Programme (SMP). SMP generates a score for conventional instruments using complex stochastic formulas. Meanwhile, Gottfried Michael Koenig was studying computer music programming at the WDR studio in Cologne. In 1971 he went to the Institute of Sonology in Utrecht, where he completed PR1 (Project 1), a programme for algorithmic composition. PR1 generates a score following both deterministic and aleatoric composition techniques. Koenig used the programme for both his electronic and instrumental compositions. Unlike SMP and PR1, Barry Truax made a series of programmes exclusively for direct digital sound synthesis. These were named POD after POisson Distribution, because the distribution of events in time and frequency in the programme follows the Poisson distribution. In the years that followed, however, algorithmic composition increasingly became a standard part of more general programming environments.
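The organizing idea behind POD — events scattered in time as a Poisson process — can be sketched in a few lines. This is my own minimal illustration, not Truax’s code: a Poisson process at an average density of `rate` events per second is generated by drawing exponentially distributed inter-onset gaps.

```python
import random

def poisson_events(rate, total_time, seed=1):
    """Return event onset times for a Poisson process of `rate` events/sec.

    Inter-onset intervals of a Poisson process are exponentially
    distributed with mean 1/rate, so we accumulate exponential gaps.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    t, onsets = 0.0, []
    while True:
        t += rng.expovariate(rate)   # exponential gap to the next event
        if t >= total_time:
            return onsets
        onsets.append(t)

# Ten seconds of texture at an average of five events per second
events = poisson_events(rate=5.0, total_time=10.0)
```

Each onset would then be assigned a frequency and amplitude by a similar stochastic draw; the composer controls only the densities, not the individual events.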


One could argue that ‘new music’ has been a three-step adventure. These steps have often been taken simultaneously and, most importantly, they interconnect and interact with each other. First came the challenge of new instrumentation beyond the orchestra. Second came the computers, which could either compose themselves or provide the parameters for the composer. The third step was the need for new sounds that are not realizable with acoustic or analog electronic instruments. Sound synthesis provided the ground for this ultimate step.

John Chowning of Stanford University became involved with the work at Bell Labs in the mid 60s. He brought Music IV to Stanford and had it run on a PDP-1 computer at the university’s Artificial Intelligence Lab. When the PDP-1 was replaced by the PDP-6, Chowning, together with David Poole, wrote MUS10. Meanwhile, he was working on new techniques for sound synthesis. After several experiments and mathematical confirmation he introduced frequency modulation (FM) as a synthesis technique with a great advantage: “extreme economy”, i.e. producing sounds with rich spectra from just two oscillators. It was a milestone in computer music. FM was soon licensed by Yamaha and later patented. During the same period, digital technology as well as psychoacoustics were making rapid advances. This created the ground for the cultivation of digital sound synthesis, the new computer music adventure. The objective was the design of digital synthesizers that would take advantage of FM as well as other computer music developments. On the grounds of more systematic research, two poles emerged in the late 70s and became the leading centres of research and experimentation in computer music: CCRMA at Stanford, and IRCAM in Paris.
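The “extreme economy” of simple FM can be seen directly in code: one sine oscillator (the modulator) drives the phase of another (the carrier), and sidebands appear at the carrier frequency plus and minus multiples of the modulating frequency. The sketch below is illustrative only; parameter names and values are my own.

```python
import math

SR = 8000  # sample rate in Hz (arbitrary choice for this sketch)

def fm_tone(fc, fm, index, dur, amp=0.8):
    """Simple FM: y(t) = amp * sin(2*pi*fc*t + index * sin(2*pi*fm*t)).

    fc:    carrier frequency in Hz
    fm:    modulating frequency in Hz
    index: modulation index I = d/fm (peak frequency deviation over fm);
           the spectral bandwidth grows with the index.
    """
    n = int(SR * dur)
    return [amp * math.sin(2 * math.pi * fc * i / SR
                           + index * math.sin(2 * math.pi * fm * i / SR))
            for i in range(n)]

# A harmonic spectrum: fc/fm is a simple integer ratio (4:1 here)
tone = fm_tone(fc=440.0, fm=110.0, index=3.0, dur=0.5)
```

With index = 0 this reduces to a plain sine; making the index vary over time is what gives FM its characteristic evolving spectra from only two oscillators.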

Another significant development of that time was speech synthesis. Gerald Bennett and Xavier Rodet at IRCAM worked towards generating a singing voice with a computer. In 1978 they presented CHANT, a programme that used the human vocal tract as a model for synthesis. Soon it proved capable of synthesizing non-vocal sounds as well. CHANT also introduced the new area of physically based synthesis, also referred to as physical modelling, where a natural sound is synthesized according to the physics of the vibrating structure that produces it. In 1981 the same people developed FORMES, an improved programme based on object-oriented programming. It was used by Jean-Baptiste Barrière to compose Chreode in 1983. During that period computer music was becoming highly popular. More research centres were formed, and composers-researchers – or researchers-composers – were exploring all possibilities of using the computer as a musical instrument and/or a compositional tool. However, access to the established centres was limited and, without funding, expensive. As a result, the need for a compositional workstation for the individual composer was growing ever greater.

In the early 80s a group of composers living in York (UK) was formed. The group was called Interface and its first members were Trevor Wishart, Richard Orton and Tom Endrich. By 1986 more composers had joined the group. The group’s vision was to build a compositional environment that would be affordable and accessible to the individual composer. They started working on the Atari ST computer, attracted by its affordability, its 16-bit technology, and its built-in MIDI interface. With the software contribution of Martin Atkins and the hardware contribution of Dave Malham, the Composer’s Desktop Project (CDP) was created. About a year earlier, Barry Vercoe had made Csound, a unit-generator-based software synthesis language. Csound was an attempt to ‘adjust’ the ideas of Music I-V to personal computers. Csound remained a non-realtime environment until 1989, when it was turned into a real-time control language. Before David Zicarelli became involved with Max/MSP, he contributed to the design of Intelligent Music’s M in 1987. M was a graphical algorithmic environment allowing the composer to manipulate material recorded through a MIDI keyboard. However, as digital signal processing was becoming a standard in computer music, more sophisticated programmes incorporating DSP techniques were needed.

Several years earlier, in 1981, Pierre Boulez presented Répons, a composition for a twenty-four-piece orchestra and six soloists. The novelty was that each soloist was independently connected to the 4X, where the sound was processed and routed to different loudspeakers around the concert hall. The 4X was built by Giuseppe Di Giugno at IRCAM and is regarded as the first digital signal processor ever made. It was quite successful and was used by composers such as Pierre Henry and Jean-Baptiste Barrière. In 1985 Miller Puckette went to IRCAM and started programming software for the 4X. While working with the composer Philippe Manoury on the latter’s Jupiter, they faced timing issues with the existing software. They were interested in having the performer trigger electronic sounds. The main challenge was to time musical events independently of one another. The solution came from Puckette in 1987. He overcame the timing problem by programming a realtime scheduler, which he named Max, after Max Matthews. To make things easier, he moved his scheduler to a Macintosh with MIDI support, from which the 4X was controlled. He went on to design a graphical interface, which he named Patcher. Soon the interface became the main part of the project, the project went beyond its initial concept, and in 1988 the first version of Max was presented. The next step was taken by Zicarelli, who developed MSP, an environment for real-time audio synthesis, DSP, and algorithmic composition, and extended Max to Max/MSP. Max/MSP was an unprecedented breakthrough, still regarded as the leading real-time audio synthesis platform. Puckette went on to re-design an open-source version, which he called Pd (Pure Data). Pd has some fundamental differences from Max/MSP, but is almost as popular, and in many cases preferable because of its free distribution. In 1996 James McCartney wrote SuperCollider, a programming environment with an object-oriented language for real-time audio synthesis and algorithmic composition. Max/MSP, Pd, and SuperCollider are now used by many composers-programmers, as well as by sound artists and sound engineers. At the dawn of the twenty-first century computers are more powerful than ever, capable of things yet to be done, and scientists, programmers, composers, and musicians work interactively towards new challenges for music and composition.

References

Chadabe, J. (1996). Electric Sound: The Past and Promise of Electronic Music. Upper Saddle River, NJ: Prentice Hall.

Roads, C. (1996). The Computer Music Tutorial. Cambridge, MA: The MIT Press.
