
Interactive Visual Music: A Personal Perspective

Roger B. Dannenberg
School of Computer Science
Carnegie Mellon University
Pittsburgh, Pennsylvania 15213 USA
[email protected]

Computer Music Journal, 29:4, pp. 25–35, Winter 2005
© 2005 Massachusetts Institute of Technology.

Interactive performance is one of the most innovative ways computers can be used in music, and it leads to new ways of thinking about music composition and performance. Interactive performance also poses many technical challenges, resulting in new languages and special hardware including sensors, synthesis methods, and software techniques. As if there are not already enough problems to tackle, many composers and artists have explored the combination of computer animation and music within interactive performances. In this article, I describe my own work in this area, dating from around 1987, including discussions of artistic and technical challenges as they have evolved. I also describe the Aura system, which I now use to create interactive music, animation, and video.

My main interest in composing and developing systems is to approach music and video as expressions of the same underlying musical "deep structure." Thus, images are not an interpretation or accompaniment to audio but rather an integral part of the music and the listener/viewer experience. From a systems perspective, I have always felt that timing and responsiveness are critical if images and music are to work together. I have focused on software organizations that afford precise timing and flexible interaction, sometimes at the expense of rich imagery or more convenient tools for image-making.

"Interaction" is commonly accepted (usually without much thought) as a desirable property of computer systems, so the term has become overused and vague. A bit of explanation is in order. Interaction is a two-way flow of information between live performer(s) and computer. In my work, I see improvisation as a way to use the talents of the performer to the fullest. Once a performer is given the freedom to improvise, however, it is difficult for a composer to retain much control. I resolve this problem by composing interaction. In other words, rather than writing notes for the performer, I design musical situations by generating sounds and images that encourage improvising performers to make certain musical choices. In this way, I feel that I can achieve compositional control over both large-scale structure and fine-grain musical texture. At the same time, performers are empowered to augment the composed interactive framework with their own ideas, musical personality, and sounds.

Origins

In the earliest years of interactive music performance, a variety of hardware platforms based on minicomputers and microprocessors were available. Most systems were expensive, and almost nothing was available off-the-shelf. However, around 1984, things changed dramatically: one could purchase an IBM PC, a MIDI synthesizer, and a MIDI interface, allowing interactive computer music using affordable and portable equipment. I developed much of the CMU MIDI Toolkit software (Dannenberg 1986) in late 1984, which formed the basis for interactive music pieces by many composers. By 1985, I was working with Cherry Lane Technologies on a computer accompaniment product for the soon-to-be-announced Amiga computer from Commodore. Cherry Lane carried a line of pitch-to-MIDI converters from IVL Technologies, so I was soon in possession of very portable machines that could take in data from my trumpet as MIDI, process the MIDI data, and control synthesizers.

Unlike the PC, the Amiga had a built-in color graphics system with hardware acceleration. At first, I used the Amiga graphics to visualize the internal state of an interactive music piece that was originally written for the PC. It was a small step to use the graphics routines to create abstract shapes in response to my trumpet playing. My first experiment was a sort of music notation where performed notes were displayed in real time as growing squares positioned according to pitch and starting time. Because time wrapped around on the horizontal axis, squares in different colors would pile up on top of other squares, or very long notes would fill large portions of the screen, effectively erasing what was there before. This was all very simple, but I became hooked on the idea of generating music and images together. This would be too much for a performer alone, but with a computer helping out, new things became possible.

Lines, Polygons, Screens, and Processes

One of the challenges of working with interactive systems, as opposed to video or pre-rendered computer animation, is that few computers in the early days could fill a screen in 60 msec, which is about the longest tolerable frame period for animation. Even expensive 3-D graphics systems fell short of delivering good frame rates. The solution was to invent interesting image manipulations that could be accomplished without touching every pixel on every frame.

Animation on Slow Machines

Like many modern computer graphics systems, early computer displays used special, fast video RAM separate from primary memory. Unfortunately, this RAM was so expensive that it was common to have only 4 to 8 bits per pixel, even for color displays. These limited pixels served as an index into a color lookup table. In this way, the computer could optimize the color palette according to the image (Burger 1993). A common animation trick was to change the color table under program control. For example, if there were 20 objects on the screen, each object could be rendered with a different pixel value, for example the integers 0 through 19. The actual colors of the objects were then determined by the first 20 elements of the color lookup table. To make object 15 fade to black, one could simply write a sequence of fading colors to element 15 of the color lookup table. This was much faster than rewriting every pixel of the object, so it became possible to animate many objects at once.

Another standard trick with color tables was "color cycling," where a set of color values was rotated within the color lookup table. Effects analogous to chaser lights on movie marquees could be created, again with very little computation. Scenes could also be rendered incrementally. Drawing objects one by one from lines and polygons can be an interesting form of animation that spreads the drawing time over many video frames. Polygons can grow simply by overwriting larger and larger polygons, again with low cost as long as the polygons are not too large.
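These palette tricks are easy to sketch. The fragment below, a minimal sketch in C, models the color lookup table as a plain array; the table size, fade rate, and printed output are illustrative stand-ins for real display hardware, not the Amiga's actual palette registers.

```c
/* Palette animation on an indexed display: fading one entry and
 * cycling a run of entries touches only the color table, never
 * the pixels themselves. */
#include <stdio.h>

typedef struct { unsigned char r, g, b; } Color;

#define PALETTE_SIZE 32
static Color palette[PALETTE_SIZE];   /* the color lookup table */

/* Fade one palette entry a single step toward black; every pixel
   drawn with this index darkens at once. */
static void fade_step(int index)
{
    Color *c = &palette[index];
    c->r = (unsigned char)(c->r * 9 / 10);
    c->g = (unsigned char)(c->g * 9 / 10);
    c->b = (unsigned char)(c->b * 9 / 10);
}

/* "Color cycling": rotate a run of entries by one slot per frame
   for marquee-style motion at almost no cost. */
static void cycle_colors(int start, int count)
{
    Color last = palette[start + count - 1];
    for (int i = start + count - 1; i > start; i--)
        palette[i] = palette[i - 1];
    palette[start] = last;
}

int main(void)
{
    for (int i = 0; i < 8; i++)                 /* a ramp to cycle */
        palette[i] = (Color){ (unsigned char)(i * 32), 0, 0 };
    palette[15] = (Color){ 255, 200, 40 };      /* "object 15" */

    for (int frame = 0; frame < 5; frame++) {
        fade_step(15);                          /* object 15 fades */
        cycle_colors(0, 8);                     /* marquee effect  */
        printf("frame %d: entry 15 = (%d, %d, %d)\n", frame,
               palette[15].r, palette[15].g, palette[15].b);
    }
    return 0;
}
```

Note that the per-frame cost is proportional to the number of palette entries touched, not to the number of pixels on the screen.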
One of my first animations had worm-like figures moving around on the screen. Although the "worms" appeared to be in continuous motion, each frame required only a line segment to be added to the head and a line segment to be erased at the tail. Sinuous paths based on Lissajous figures were used in an attempt to bring some organic and fluid motion to the rather flat two-dimensional look of the animation. At other times, randomly generated polygons were created coincident with loud percussion sounds, and color map manipulations were used to fade these shapes to black slowly.
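A minimal sketch of the worm, assuming hypothetical draw_line() and erase_line() routines in place of real graphics calls; the 3:2 frequency ratio and screen coordinates are arbitrary. The body is kept in a ring buffer of path points, so each frame costs exactly two line operations regardless of the worm's length.

```c
/* Incremental "worm" animation: per frame, draw one segment at the
 * head and erase one at the tail of a ring buffer of path points. */
#include <stdio.h>
#include <math.h>

typedef struct { double x, y; } Point;

#define WORM_LEN 16
static Point body[WORM_LEN];

/* Sinuous path from a Lissajous figure (3:2 frequency ratio). */
static Point lissajous(double t)
{
    Point p = { 160.0 + 120.0 * sin(3.0 * t),
                100.0 +  80.0 * sin(2.0 * t + 1.0) };
    return p;
}

/* Stand-ins for real graphics calls. */
static void draw_line(Point a, Point b)
{ printf("draw  (%.0f,%.0f)-(%.0f,%.0f)\n", a.x, a.y, b.x, b.y); }

static void erase_line(Point a, Point b)
{ printf("erase (%.0f,%.0f)-(%.0f,%.0f)\n", a.x, a.y, b.x, b.y); }

int main(void)
{
    double t = 0.0;
    for (int i = 0; i < WORM_LEN; i++)   /* lay down the initial body */
        body[i] = lissajous(t += 0.05);
    int head = WORM_LEN - 1;             /* index of the newest point */

    for (int frame = 0; frame < 60; frame++) {
        int tail = (head + 1) % WORM_LEN;      /* oldest point */
        int next = (tail + 1) % WORM_LEN;
        erase_line(body[tail], body[next]);    /* drop the tail */
        Point p = lissajous(t += 0.05);
        draw_line(body[head], p);              /* extend the head */
        head = tail;                           /* reuse the slot */
        body[head] = p;
    }
    return 0;
}
```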
Scheduling Concerns

Because processors in the 1980s were executing at best a couple of million instructions per second, it became important to think about the impact of heavy graphics computations on music timing. In short, one would like for music processing to take place with high priority so that music events such as MIDI messages are delivered within a few milliseconds of the desired time. Otherwise, rhythmic distortions will occur, and sometimes these will be audible. On the other hand, graphics frame rates might be 15 to 60 frames per second (fps), and delaying the output by 15 msec or so is generally not a problem. We do not seem to perceive visual rhythm with the same kind of precision we have in our sense of hearing. The solution is to compute graphics and music in separate processes. This, however, raises another problem: how can we integrate and coordinate images and music if they are running in separate processes?

One solution is to control everything from the high-priority music process. In my work with the Amiga, which had a real-time operating system, I ran the CMU MIDI Toolkit at a high priority so that incoming MIDI data and timer events would run immediately. Within the CMU MIDI Toolkit thread, there could be many active computations waiting to be awakened by the scheduler. A typical computation would combine music and graphics. For example, a computation might call a function upon every note to generate some MIDI data. The same function might also change a color entry in the color table or draw a line or polygon, creating some visual event or motion. By writing the music and animation generation together, literally in the same code, music and graphics stay tightly coordinated.

Another solution is to let a separate graphics process render frames as fast as it can. Virtual reality software, often running on Silicon Graphics workstations, has been used in a number of projects for interactive music performances (Gromala, Novak, and Sharir 1993; Bargar et al. 1994).

In contrast, if enough computing power is available, it makes sense to schedule frames at precise times rather than simply to run as fast as possible. With this approach, there will be more processor time available for music processing, and the computation of frames can be integrated with the scheduling of other events. All events are now driven by time. I believe this approach can be much more logical for the composer/programmer.
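A sketch of this time-driven style in C: a single scheduler dispatches note events and frame events in time order, and every callback names an explicit time for its successor. The queue, the callbacks, and the 30-fps frame period are illustrative assumptions, not the CMU MIDI Toolkit's actual API.

```c
/* One time-ordered event queue drives both music and graphics. */
#include <stdio.h>

#define MAX_EVENTS 64   /* no overflow check in this sketch */

typedef void (*Action)(double now);
typedef struct { double time; Action action; } Event;

static Event queue[MAX_EVENTS];
static int n_events = 0;

/* Insert an event, keeping the queue sorted by time. */
static void schedule(double time, Action action)
{
    int i = n_events++;
    while (i > 0 && queue[i - 1].time > time) {
        queue[i] = queue[i - 1];
        i--;
    }
    queue[i].time = time;
    queue[i].action = action;
}

static void play_note(double now)       /* music event */
{
    printf("%6.3f  MIDI note\n", now);
    if (now < 2.0)
        schedule(now + 0.25, play_note);         /* next note in 250 msec */
}

static void draw_frame(double now)      /* graphics event */
{
    printf("%6.3f  video frame\n", now);
    if (now < 2.0)
        schedule(now + 1.0 / 30.0, draw_frame);  /* 30 fps */
}

int main(void)
{
    schedule(0.0, play_note);
    schedule(0.0, draw_frame);
    while (n_events > 0) {                  /* dispatch in time order */
        Event e = queue[0];
        for (int i = 1; i < n_events; i++)  /* pop the head */
            queue[i - 1] = queue[i];
        n_events--;
        e.action(e.time);   /* virtual clock here; a real system would
                               wait until e.time before dispatching */
    }
    return 0;
}
```

Because frames are scheduled like any other event, a heavy frame delays later events in a predictable way instead of consuming all idle processor time.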