My Experiences with Max Mathews in the Early Days of Computer Music


Hubert Howe
Aaron Copland School of Music
Queens College
Flushing, New York 11367 USA
[email protected]

Computer Music Journal, 33:3, pp. 41–44, Fall 2009. © 2009 Massachusetts Institute of Technology.
Downloaded from http://www.mitpressjournals.org/doi/pdf/10.1162/comj.2009.33.3.41 by guest on 30 September 2021

I first met Max Mathews in about 1964, when I was still an undergraduate student at Princeton University. I had been working for Jim Randall, who was composing his piece Mudgett: Monologues of a Mass Murderer during the spring for a concert in the summer. Computer music as we now know it existed then only at Bell Telephone Laboratories, and Jim’s way of working on this piece was to compute a section of it at Princeton and drive a computer tape up to Bell Labs at Murray Hill, New Jersey, where he would convert it to sound, record it on a reel-to-reel tape, and take it back to Princeton, where he could listen to it carefully and splice it into the larger composition, or throw it out and redo that portion. The limitations of the data-storage system at that time were such that he had a maximum of about two minutes of music that could be converted on a single reel.

In the fall of that year, I became a graduate student, and Godfrey Winham and I undertook the task of exporting the music programming language that Max Mathews had developed—Music IV—to Princeton. It was at that time that I began to realize the comprehensive nature of the vision he had developed for computer music, one that has undergirded much of the work that has been going on ever since. Godfrey, Jim, and I still would visit Bell Labs to convert our files to sound, but it was not too long after that Bell Labs donated the system they had been using to Princeton, while they implemented a better system. That meant that we could do most of our work at Princeton, only going to Bell Labs for our final copies. (The converter at Princeton was limited to a 10-kHz sampling rate, monaural, whereas at Bell Labs they then had 20 kHz, stereo.) We continued to meet Max, although much of our immediate contact was with other people. In later years, I met and got to know Jean-Claude Risset, and I also saw the early GROOVE system developed by Dick Moore but used mostly by Emmanuel Ghent. Through these brief contacts, I gradually came to understand what Max had been up to during these early years.

In my opinion, computer music would not exist if Max Mathews had not been running the Behavioral and Acoustical Research departments at Bell Telephone Laboratories in Murray Hill, New Jersey. Bell Labs had always been the institution that did cutting-edge research on sound, because it was applicable to their main product, the telephone. Much of their research was devoted to such subjects as figuring out how poor a transmitted signal could be while still being intelligible at the other end of the phone line, so that they could save two cents on every phone. (In those days, only rotary landline instruments existed.) Bell Labs had long done basic work on acoustics, extending all the way back to the 1920s.

The building always had very tight security (making it difficult for Godfrey Winham, who was a British citizen). When you walked from the front of the building to Max Mathews’s office, you went through a gallery of their new products, which is where I first saw a videophone. (It required six telephone lines to transmit the image.) The Labs always had first-rate equipment and facilities, including a large anechoic chamber; I once had the strange experience of being inside it. When Max came along, he had his staff also working on music. Partly, this was because he was a violinist, and he was interested in the acoustics of the instrument. For many years, he admired the work of Carleen Hutchins, who made violins and other stringed instruments in the proportions of the violin, but this was really 17th-century research.

The modern computer as we know it was invented in the 1950s, and by the late years of that decade, it was the IBM corporation that controlled almost the entire market. The only computers that existed were mainframe machines, which were huge and required special climate-controlled rooms and round-the-clock technicians to keep them going. Only large corporations and universities could afford to buy such machines, and they were also very expensive to run. Fortunately, Princeton University was able to support our research, which they did by granting us what we called “funny money,” which represented the computer time we used. Totals adding up to hundreds and thousands of dollars were printed on the last page of all our jobs.

My first machine was the IBM 7090, which took up an entire large room, probably 700 to 1,000 square feet in size. One of the impressive things was the console, which was full of blinking lights. Every time a number was loaded into the accumulator, which was the main register that the machine used, a light corresponding to each bit was lit. Users were not allowed into the computer room, but it had a large window into which we could look to see whether and when our jobs would be run. The main console only took up about as much room as a large desk. Using switches on the console, an operator could enter machine instructions directly into the registers, which was sometimes necessary during maintenance operations. What occupied most of the floor space were 10–15 magnetic tape drives, which were the main storage media. Every time data was written and had to be re-read, the tape had to be rewound. The machine was based on a 6-bit character (which was why it printed only capital letters), was programmed in an octal number system, and had a 36-bit word, making it slightly more accurate than the later 32-bit machines. (Bytes and hexadecimal numbers came later with the System 360.) Also impressive in size was the printer, which was likewise about the size of a large desk. The printer was the main way users received output, which consisted of fan-folded sheets of 11 × 17-in. paper. Some users impressed others with the large amounts of print-out they could produce on a given occasion, most of which was thrown out.

How anyone could have envisioned the potential of computers from this manner of interaction still seems mysterious to me, but Max Mathews was a great visionary. He realized the possibilities of the digital representation of music, and he started using analog-to-digital and digital-to-analog converters to allow computers to process sounds. Once he had done that, he realized the potential for music synthesis in the concept of generating and processing the waveform digitally from scratch, and he started work in the 1950s on a series of music programs that ultimately went through version 5. The problem with the process at that time was that the data capacity and speed of computer tape—the only large storage medium—were not sufficient to produce good-quality sound, but that would change later with advancing technology. I think he probably envisioned many of the products that later came about, such as digital recording and effects devices.

Max Mathews hired a number of top-notch people to work on his music projects. The first composer was James Tenney, who wrote an article in the Journal of Music Theory in the early 1960s. He later left computer music but wrote an interesting book, Meta+Hodos (1964), an application of gestalt theory and cognitive science to music. When I first went there, a programmer named Joan Miller was working on Music IV. She was outstanding, probably exceeded only by Barry Vercoe, who is perhaps best known for developing Csound. Through her work I began to realize the power and sophistication of the programming that went into Music IV. The power lay, first of all, in the ability to construct the sound wave: because all sounds are waves, if you can generate any wave, you can generate any sound. The other important point was that, by representing all of the devices used in constructing the sound as little computer modules called unit generators (an invention of Max Mathews), you could have a virtually unlimited amount of equipment; the only limitation was the length of time it took to produce the sound, which was often quite long. Learning to describe sounds was not easy, and it took me years to work it out.

The data speeds and amounts were a serious limitation of early computer music. Our first work could only be realized at 10-kHz mono, which allowed frequencies only up to 5 kHz. I thought it was heaven when we went to 20-kHz mono or 10-kHz stereo. The only way to record sound was on quarter-inch analog magnetic tape (this was even before the cassette!), and the only way to assemble tapes into compositions was by splicing.

Apart from James Tenney, the musical results of most of the people working at Bell Labs, including Max Mathews, were primitive. The work was imaginative and sophisticated in terms of the sound quality, but it was not cutting-edge in a musical sense.
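The two ideas highlighted above, that any sound can be built by constructing its waveform and that unit generators act as a virtually unlimited supply of interchangeable equipment, can be sketched in a few lines of modern code. This is a loose illustrative reconstruction in Python, not Music IV's actual design: the class names and the linear-decay envelope are inventions for the example, and the 10-kHz rate matches the Princeton converter mentioned in the text.

```python
import math

SAMPLE_RATE = 10_000  # 10-kHz mono, as at Princeton; Nyquist limit = 5 kHz

class Oscillator:
    """A minimal unit generator: reads a stored waveform at a given frequency."""
    def __init__(self, wavetable, freq_hz):
        self.wavetable = wavetable
        self.freq_hz = freq_hz
        self.phase = 0.0

    def next_sample(self):
        value = self.wavetable[int(self.phase) % len(self.wavetable)]
        self.phase += self.freq_hz * len(self.wavetable) / SAMPLE_RATE
        return value

class Envelope:
    """A second unit generator: scales another generator's output over time."""
    def __init__(self, source, duration_samples):
        self.source = source
        self.duration = duration_samples
        self.index = 0

    def next_sample(self):
        gain = max(0.0, 1.0 - self.index / self.duration)  # linear decay
        self.index += 1
        return gain * self.source.next_sample()

# Patch the generators together: a 440-Hz sine tone through a 1-second envelope.
table = [math.sin(2 * math.pi * i / 256) for i in range(256)]
note = Envelope(Oscillator(table, 440.0), SAMPLE_RATE)
samples = [note.next_sample() for _ in range(SAMPLE_RATE)]
```

Because every generator exposes only a stream of samples, any output can feed any input, which is the "unlimited amount of equipment" property the article describes; the cost is purely computation time.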
Recommended publications
  • Wendy Reid Composer
    WENDY REID COMPOSER 1326 Shattuck Avenue #2 Berkeley, California 94709 [email protected] treepieces.net EDUCATION 1982 Stanford University, CCRMA, Post-graduate study Workshop in computer-generated music with lectures by John Chowning, Max Mathews, John Pierce and Jean-Claude Risset 1978-80 Mills College, M.A. in Music Composition Composition with Terry Riley, Robert Ashley and Charles Shere Violin and chamber music with the Kronos Quartet 1975-77 Ecoles d’Art Américaines, Palais de Fontainebleau and Paris, France: Composition with Nadia Boulanger; Classes in analysis, harmony, counterpoint, composition; Solfège with assistant Annette Dieudonné 1970-75 University of Southern California, School of Performing Arts, B.M. in Music Composition, minor in Violin Performance Composition with James Hopkins, Halsey Stevens and film composer David Raksin AWARDS, GRANTS, and COMMISSIONS Meet The Composer/California Meet The Composer/New York Subito Composer Grant ASMC Grant Paul Merritt Henry Award Hellman Award The Oakland Museum The Nature Company Sound/Image Unlimited Graduate Assistantship California State Scholarship Honors at Entrance USC National Merit Award Finalist National Educational Development Award Finalist Commission, Brassiosaurus (Tomita/Djil/ Heglin): Tree Piece #52 Commission, Joyce Umamoto: Tree Piece #42 Commission, Abel-Steinberg-Winant Trio: Tree Piece #41 Commission, Tom Dambly: Tree Piece #31 Commission, Mary Oliver: Tree Piece #21 Commission, Don Buchla: Tree Piece #17 Commission, William Winant: Tree Piece #10 DISCOGRAPHY LP/Cassette: TREE PIECES (FROG RECORDS, 1988/ FROG PEAK) CD: TREEPIECES (FROG RECORDS, 2002/ FROGPEAK) TREE PIECES volume 2 (NIENTE, 2004 / FROGPEAK) TREE PIECE SINGLE #1: LULU VARIATIONS (NIENTE, 2009) TREE PIECE SINGLE #2: LU-SHOO FRAGMENTS (NIENTE, 2010) PUBLICATIONS Scores: Tree Pieces/Frog On Rock/Game of Tree/Klee Pieces/Glass Walls/Early Works (Frogpeak Music/Sound-Image/W.
  • Editorial: Alternative Histories of Electroacoustic Music
    This is a repository copy of Editorial: Alternative histories of electroacoustic music. White Rose Research Online URL for this paper: http://eprints.whiterose.ac.uk/119074/ Version: Accepted Version Article: Mooney, J orcid.org/0000-0002-7925-9634, Schampaert, D and Boon, T (2017) Editorial: Alternative histories of electroacoustic music. Organised Sound, 22 (02). pp. 143-149. ISSN 1355-7718 https://doi.org/10.1017/S135577181700005X This article has been published in a revised form in Organised Sound http://doi.org/10.1017/S135577181700005X. This version is free to view and download for private research and study only. Not for re-distribution, re-sale or use in derivative works. © Cambridge University Press Reuse Unless indicated otherwise, fulltext items are protected by copyright with all rights reserved. The copyright exception in section 29 of the Copyright, Designs and Patents Act 1988 allows the making of a single copy solely for the purpose of non-commercial research or private study within the limits of fair dealing. The publisher or other rights-holder may allow further reproduction and re-use of this version - refer to the White Rose Research Online record for this item. Where records identify the publisher as the copyright holder, users can verify any specific terms of use on the publisher’s website. Takedown If you consider content in White Rose Research Online to be in breach of UK law, please notify us by emailing [email protected] including the URL of the record and the reason for the withdrawal request. [email protected] https://eprints.whiterose.ac.uk/ EDITORIAL: Alternative Histories of Electroacoustic Music In the more than twenty years of its existence, Organised Sound has rarely focussed on issues of history and historiography in electroacoustic music research.
  • 62 Years and Counting: MUSIC N and the Modular Revolution
    62 Years and Counting: MUSIC N and the Modular Revolution By Brian Lindgren MUSC 7660X - History of Electronic and Computer Music Fall 2019 24 December 2019 © Copyright 2020 Brian Lindgren Abstract. MUSIC N by Max Mathews had two profound impacts on the world of music synthesis. The first was the implementation of modularity to ensure flexibility as a tool for the user; with the introduction of the unit generator, the instrument, and the compiler, composers had the building blocks to create an unlimited range of sounds. The second was the impact of this implementation on the modular analog synthesizers developed a few years later. While Jean-Claude Risset, a well-known Mathews associate, asserts this, Mathews actually denies it. They both are correct in their perspectives. Introduction Over 76 years have passed since the invention of the first electronic general-purpose computer, the ENIAC. Today, we carry computers in our pockets that can perform millions of times more calculations per second. With the amazing rate of change in computer technology, it's hard to imagine that any development of yesteryear could maintain a semblance of relevance today. However, in the world of music synthesis, the foundations that were laid six decades ago not only spawned a breadth of multifaceted innovation but continue to function as the bedrock of important digital applications used around the world today. Not only did a new modular approach implemented by its creator, Max Mathews, ensure that the MUSIC N lineage would continue to be useful in today’s world (in one of its descendants, Csound) but this approach also likely inspired the analog synthesizer engineers of the day, impacting their designs.
  • DSP Class III: Digital Electronic Music Concepts Overview (Part III) ADC and DAC Analog-To-Digital Conversion
    TECH 350: DSP Class III: Digital Electronic Music Concepts Overview (Part III) ADC and DAC Analog-to-Digital Conversion Parameters of ADC: • Sampling Rate (fs) = rate at which the analog signal is captured (sampling) (in Hertz) • Bit Depth = number of values available for each digital sample (quantization) (in bits) Limitations/Issues with Sampling: Distortion caused by sampling, AKA ALIASING (or foldover). How can we rectify (or at least describe) this phenomenon? Sampling (Nyquist) Theorem • Can describe the resultant frequency of aliasing via the following (rough) formula, iff input freq. > half the sampling rate && < sampling rate: resultant frequency = sampling frequency (fs) − input frequency. For example, if fs = 1000 Hz and the frequency of our input is at 800 Hz: 1000 − 800 = 200, so the resultant frequency is 200 Hz (!) • Nyquist theorem = in order to be able to reconstruct a signal, the sampling frequency must be at least twice the frequency of the signal being sampled • If you want to represent frequencies up to X Hz, you need fs = 2X Hz. Ideal Sampling Frequency (for audio) • What sampling rate should we use for musical applications? This is an on-going debate. Benefits of a higher sampling rate? Drawbacks? • AES Standards: Why 44.1 kHz? Why 48 kHz? Why higher (we can’t hear up there, can we?) • For 44.1 kHz and 48 kHz the answer lies primarily within video standard considerations, actually… • 44.1 kHz = 2² · 3² · 5² · 7², meaning it has a ton of integer factors • >2 × 20 kHz is great, as it allows us to have frequency headroom to work with, and subharmonics (and interactions of phase, etc.) up in that range are within our audible range. Anti-Aliasing Filters + Phase Correction • How to fix aliasing? Add a low-pass filter set at a special cutoff frequency before we digitize the signal.
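The folding formula in these notes can be checked numerically: a tone above the Nyquist frequency yields sample values identical in magnitude to its folded-down alias (here, exact mirror images, since sin(2π·800·i/1000) = −sin(2π·200·i/1000)). A small Python sketch reusing the notes' own numbers (fs = 1000 Hz, input = 800 Hz):

```python
import math

fs = 1000            # sampling rate (Hz); Nyquist limit = fs / 2 = 500 Hz
f_in = 800           # input frequency above the Nyquist limit
f_alias = fs - f_in  # resultant (aliased) frequency per the folding formula

# Sample both tones at fs: the 800-Hz tone and its 200-Hz alias produce
# sample sequences that are exact phase-inverted copies of each other.
n = 16
high = [math.sin(2 * math.pi * f_in * i / fs) for i in range(n)]
low = [math.sin(2 * math.pi * f_alias * i / fs) for i in range(n)]
```

Once the signal has been sampled, no later processing can tell the two tones apart, which is why the anti-aliasing low-pass filter must come before the converter.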
  • Fifty Years of Computer Music: Ideas of the Past Speak to the Future
    Fifty Years of Computer Music: Ideas of the Past Speak to the Future John Chowning CCRMA, Department of Music, Stanford University, Stanford, California 94305 [email protected] Abstract. The use of the computer to analyze and synthesize sound in two early forms, additive and FM synthesis, led to new thoughts about synthesizing sound spectra, tuning and pitch. Detached from their traditional association with the timbre of acoustic instruments, spectra become structured and associated with pitch in ways that are unique to the medium of computer music. 1 Introduction In 1957, just fifty years ago, Max Mathews introduced a wholly new means of making music. An engineer/scientist at Bell Telephone Laboratories (BTL), Max (with the support of John Pierce, who was director of research) created out of numbers and code the first music to be produced by a digital computer. It is usually the case that a fascination with some aspect of a discipline outside of one’s own will quickly conclude with an experiment without elaboration. But in Max’s case, it was the beginning of a profoundly deep and consequential adventure, one which he modestly invited us all to join through his elegantly conceived programs, engendering tendrils that found their way into far-flung disciplines that today, 50 years later, continue to grow without end. From the very beginning Max’s use of the computer for making music was expansive. Synthesis, signal processing, analysis, algorithmic composition, psychoacoustics—all were within his scope and all were expressed and described in great detail in his famous article [1] and the succession of programs MUSIC I-V. It is in the nature of the computer medium that detail be elevated at times to the forefront of our thinking, for unlike preceding music technologies, both acoustic and analogue, computers require us to manage detail to accomplish even the most basic steps.
  • Download Chapter 264KB
    Memorial Tributes: Volume 16 Copyright National Academy of Sciences. All rights reserved. Memorial Tributes: Volume 16 MAX V. MATHEWS 1926–2011 Elected in 1979 “For contributions to computer generation and analysis of meaningful sounds.” BY C. GORDON BELL MAX VERNON MATHEWS, often called the father of computer music, died on April 21, 2011, at the age of 84. At the time of his death he was serving as professor (research) emeritus at Stanford University’s Center for Computer Research in Music and Acoustics. Max was born in Columbus, Nebraska, on November 13, 1926. He attended high school in Peru, Nebraska, where his father taught physics and his mother taught biology at the state teachers college there. Peru High School was the training school for the college. This was during World War II (1943– 1944). One day when Max was a senior in high school, he simply went off to Omaha (strictly on his own volition) and enlisted in the U.S. Navy—a fortunate move because he was able to have some influence on the service to which he was assigned, and after taking the Eddy Aptitude Test, he was selected for radar school. Radar, however, was so secret, that Max was designated a “radio technician.” After basic training he was sent to Treasure Island, San Francisco, where he met Marjorie (Marj), who became his wife. After returning from the war, Max applied to the California Institute of Technology (Caltech) and to the Massachusetts Institute of Technology (MIT). On graduating with a bachelor’s degree in electrical engineering from Caltech in 1950, he went to MIT to earn a doctorate in 1954.
  • MTO 20.1: Willey, Editing and Arrangement
    Volume 20, Number 1, March 2014 Copyright © 2014 Society for Music Theory The Editing and Arrangement of Conlon Nancarrow’s Studies for Disklavier and Synthesizers Robert Willey NOTE: The examples for the (text-only) PDF version of this item are available online at: http://www.mtosmt.org/issues/mto.14.20.1/mto.14.20.1.willey.php KEYWORDS: Conlon Nancarrow, MIDI, synthesis, Disklavier ABSTRACT: Over the last three decades a number of approaches have been used to hear Conlon Nancarrow’s Studies for Player Piano in new settings. The musical information necessary to do this can be obtained from his published scores, the punching scores that reveal the planning behind the compositions, copies of the rolls, or the punched rolls themselves. The most direct method of extending the Studies is to convert them to digital format, because of the similarities between the way notes are represented on a player piano roll and in MIDI. The process of editing and arranging Nancarrow’s Studies in the MIDI environment is explained, including how piano roll dynamics are converted into MIDI velocities, and other decisions that must be made in order to perform them in a particular environment: the Yamaha Disklavier with its accompanying GM sound module. While Nancarrow approved of multi-timbral synthesis, separating the voices of his Studies and assigning them unique timbres changes the listener’s experience of the “resultant,” Tenney’s term for the fusion of multiple voices into a single polyphonic texture. Received January 2014 1. Introduction [1.1] Conlon Nancarrow’s compositional output from 1948 until his death in 1997 was primarily for the two player pianos in his studio in Mexico City.
  • The Early History of Music Programming and Digital Synthesis, Session 20
    Chapter 20. Meeting 20, Languages: The Early History of Music Programming and Digital Synthesis 20.1. Announcements • Music Technology Case Study Final Draft due Tuesday, 24 November 20.2. Quiz • 10 Minutes 20.3. The Early Computer: History • 1942 to 1946: Atanasoff-Berry Computer, the Colossus, the Harvard Mark I, and the Electronic Numerical Integrator and Computer (ENIAC) • 1942: Atanasoff-Berry Computer • 1946: ENIAC unveiled at University of Pennsylvania • Diverse and incomplete computers 20.4. The Early Computer: Interface • Punchcards • 1960s: card printed for Bell Labs, for the GE 600 • Fortran cards 20.5. The Jacquard Loom • 1801: Joseph Jacquard invents a way of storing and recalling loom operations • Multiple cards could be strung together • Based on technologies of numerous inventors from the 1700s, including the automata of Jacques Vaucanson (Riskin 2003) 20.6. Computer Languages: Then and Now • Low-level languages are closer to the machine representation; high-level languages are closer to human abstractions • Low-Level • Machine code: direct binary instructions • Assembly: mnemonics for machine codes • High-Level: FORTRAN • 1954: John Backus at IBM designs the FORmula TRANslator System • 1958: Fortran II • 1977: ANSI Fortran • High-Level: C • 1972: Dennis Ritchie at Bell Laboratories • Based on B • Very High-Level: Lisp, Perl, Python, Ruby • 1958: Lisp by John McCarthy • 1987: Perl by Larry Wall • 1990: Python by Guido van Rossum • 1995: Ruby by Yukihiro “Matz” Matsumoto 20.7.
  • Applications in Heart Rate Variability
    Data Analysis through Auditory Display: Applications in Heart Rate Variability. Mark Ballora, Faculty of Music, McGill University, Montréal, May 2000. A thesis submitted to the Faculty of Graduate Studies and Research in partial fulfillment of the requirements of the degree of Doctor of Philosophy in Music. © Mark Ballora, May 2000. Table of Contents: Abstract; Résumé; Acknowledgements; 1. Introduction: 1.1 Purpose of Study; 1.2 Auditory Display; 1.3 Types of Auditory Display; 1.4 Heart Rate Variability; 1.5 Design of the Thesis; 2. Survey of Related Literature: 2.1 Data in Music: 2.1.1 Data Music—Making Art from Information; 2.1.2 Biofeedback Music; 2.1.3 Nonlinear Dynamics in Music: 2.1.3.1 Fractal Music; 2.1.3.2 Mapping Chaotic (and other) Data; 2.1.4 Concluding Thoughts
  • Electronic Music - Inquiry Description
    Electronic Music - Inquiry Description This inquiry leads students through a study of the music industry by studying the history of electric and electronic instruments and music. Today’s students have grown up with ubiquitous access to music through the modern internet. The introduction of streaming services and social media in the early 21st century has brought a sharp decline in the manufacturing and sales of physical media like compact discs. This inquiry encourages students to think like historians about the way they and earlier generations consumed and composed music. The questions of artistic and technological innovation and consumption invite students into the intellectual space that historians occupy by investigating what a sound is and how it is generated, how the accessibility of instrumentation affects artistic trends, and how the availability of streaming publishing and listening services affects consumers. Students will learn about the technical developments and problems of early electric sound generation, how the vacuum tube allowed electronic instruments to become commercially viable, how 1960s counterculture broadcast avant-garde and experimental sounds to a mainstream audience, and how artistic trends shift over time as synthesizers, recording equipment, and personal computers become less expensive and more widely commercially available. As part of their learning about electronic music, students should practice articulating and writing various positions on the historical events and supporting these claims with evidence. The final performance task asks them to synthesize what they have learned and consider how the internet has affected music publishing. This inquiry requires prerequisite knowledge of historical events and ideas, so teachers will want their students to have already studied the 19th c.
  • The 1997 Mathews Radio-Baton & Improvisation Modes
    The 1997 Mathews Radio-Baton & Improvisation Modes From the Proceedings of the 1997 International Computer Music Conference – Thessaloniki Greece Richard Boulanger & Max Mathews [email protected] & [email protected] Berklee College of Music & Stanford University Introduction The Radio-Baton is a controller for live computer music performances. It tracks the motions, in three dimensional space, of the ends of two Batons which are held in the hands of a performer. The X, Y and Z trajectories of each Baton are used to control the performance. The Radio-Baton is a MIDI instrument in the sense that it has MIDI input, output, and thru connectors. All electrical communication with the Baton is done over MIDI cables using standard MIDI conventions. The Baton was designed to work with MIDI synthesizers and MIDI-based sequencing and programming software. How the Radio-Baton Works The Radio-Baton uses a simple technique to determine the XYZ coordinates of the batons. At the end of each baton is a small radio transmitting antenna. On the receiving antenna surface are 5 receiving antennas as sketched on the figure--four long thin antennas arranged along the four edges of the board and one large antenna covering the entire center area of the board. The closer a baton is to a given receiver, the stronger the signal at that receiver. By comparing the signal strengths at the #1 and #2 antennas, the computer in the electronics box can determine the X position of the baton. Comparing the #3 and #4 strengths gives the Y position. The #5 strength gives the height above the board or Z position.
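The strength-comparison scheme described in this excerpt can be sketched in code. This is only a loose illustration of the principle: the function name and the normalization are invented for the example, and the actual Radio-Baton electronics use calibrated mappings not described here.

```python
def baton_position(s1, s2, s3, s4, s5):
    """Estimate (x, y, z) for one baton from five antenna signal strengths.

    s1/s2 are the two edge antennas compared for X, s3/s4 the pair
    compared for Y, and s5 the large center antenna whose signal weakens
    as the baton rises.  All strengths are assumed positive.  X and Y are
    normalized to -1..1; the exact Z mapping is a guess, since the text
    only says the coordinates come from comparing signal strengths.
    """
    x = (s2 - s1) / (s1 + s2)  # compare antennas #1 and #2 for X
    y = (s4 - s3) / (s3 + s4)  # compare antennas #3 and #4 for Y
    z = 1.0 / s5               # #5 strength falls off with height
    return x, y, z

# A baton equidistant from the edge antennas, with a strong center signal:
x, y, z = baton_position(2.0, 2.0, 2.0, 2.0, 4.0)
```

The design point is that each axis needs only a ratio of two measurements, so the controller can report positions continuously over ordinary MIDI cables.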
  • MIT CMJ304 00Front 1-11
    About This Issue This issue’s cover theme, “Robot Musicians,” concerns a longstanding but frequently overlooked area of computer music. The cover of Computer Music Journal Vol. 10, Number 1 (Spring 1986) displayed an early musical robot, the Sumitomo implementation of the WABOT-2 (discussed further in CMJ 10:2). That keyboard-playing robot, replete with a conversation system and a video camera for optical recognition of sheet music, issued from an extensive development project at Waseda University in Tokyo. In more recent years, researchers at Waseda have created successively improved versions … describes results of an experiment to evaluate how well the robot emulates a human performance. The other article on this theme, by Gil Weinberg and Scott Driscoll, focuses less on elaborate mechanics and more on musicianship: their robot plays a simpler instrument, a drum, but it can improvise based on what it hears a human percussionist playing. The robot, named Haile, interacts with human musicians in six different modes: imitation, stochastic transformation, perceptual transformation, beat detection, simple accompaniment, and perceptual accompaniment. … can yield a sonic collage whose temporal evolution imitates that of the original. Audio examples, including excerpts of a composition by the author, can be heard on the DVD (and more can be found at his Web site). Our Fall 2005 issue presented an article about a visual sound-synthesis language called PWGLSynth. PWGLSynth is part of a more general visual environment, PWGL, written in Lisp. Another component of PWGL, called Expressive Notation Package (ENP), provides a “front end” in terms of music notation.