
RSC3 FOR SCHEMERS: AN INTRODUCTION TO RSC3

ROHAN DRAPE

Abstract. These are notes for a talk addressed to Schemers about rsc3, a Scheme client to the SuperCollider (SC3) synthesis server. The talk provides a brief history of computer music, in order to place SC3 in context and to define the problem domain, and then gives a description of and rationale for rsc3.

Date: August 2003; minor revisions November 2006, May 2010, June 2014.

1. Musics I through V

Max Mathews, working at AT&T, wrote the Music system in 1958 [1] and successive variants of this system through Music-V. Musics I through III were experimental and not used outside AT&T; Musics IV and V were written in Fortran and were the first widely distributed computer music synthesis systems, used for many years in studios including those at Stanford, Princeton, Columbia, MIT, IRCAM and Marseille. In 1969 Mathews published the important text "The Technology of Computer Music" [2], which is in two parts: the first discusses basic digital signal processing theory, the second is a manual for Music-V. Barry Vercoe at MIT wrote Music-11 [14] and variants through CSound [15], which is highly portable and very widely used. Eric Scheirer and others at MIT wrote the MPEG-4 structured audio specification [10], a variant of CSound. These systems are all considered to be part of a Music-N family.

2. The Music-N Paradigm

Systems in the Music-N family are acoustical compilers, reading a set of instruction files to generate a signal file. Users define a set of signal processing graphs called instruments that together form an orchestra. The nodes of the signal flow graph are called unit generators or UGens. UGens read and write continuous signals from unidirectional ports. For efficiency many Music-N systems provide three rates of signal flow: initialization rate (i-rate), control rate (k-rate) and audio rate (a-rate).

    instr 1
      k1 linen p5,p6,p3,p7
      a2 oscil k1,p4,2
         out a2
    endin

A piece is written by specifying a sequence of notes in a score. A note is a set of parameters; the first five parameters are traditionally instrument number, start time, duration, frequency and amplitude.

    i1 0   1 440 0.1 0.5  0.25
    i1 0.5 1 442 0.1 0.25 0.5

3. Other Computer Music Systems

A different family of systems follows the Patcher [7] paradigm, due to Puckette, working at IRCAM. Systems in use include Max [8, 17] and Pd [9]. A patch is a graph that combines both continuous signal processing elements and asynchronous messaging elements; this is at once the most interesting and the most problematic aspect of patcher systems. Patches are ordinarily created and edited using a drawing editor. Idiomatically the graph drawing represents the state of the system; in practice, however, graphs often become too complicated to be written in this manner, and sub-graphs and references to stored data files are required.

Another family of systems follows the Editor [6] paradigm, due to Moore, working at Lucasfilm. These systems have direct precedents in analog studios and are very widely used in digital studios. Two widely used implementations are ProTools from Digidesign and Logic from Apple.

4. SuperCollider

SuperCollider (SC) is a family of real-time audio signal processing systems written by James McCartney. SuperCollider is a descendant of Pyrite, a system for describing and generating Max patches. The first SuperCollider [3] is a dialect of Scheme highly optimized for musical signal processing. SC2 [4] and SC3d5 [5] are dialects of SmallTalk with the same optimizations. The interpreters for these languages generate real-time audio signals as a side effect of evaluating certain expressions.

    f = LFSaw.kr(0.4, 0, 24, LFSaw.kr([8, 7.23], 0, 3, 80)).midicps;
    CombN.ar(SinOsc.ar(f, 0, 0.04), 0.2, 0.2, 4);

SC3 is a variant of SC2 that cleanly separates the language interpreter and synthesis engine into two processes. These processes communicate over network sockets using a subset of the Open Sound Control (OSC) protocol [16]. The SC3 synthesis engine, scsynth, manages a graph of instruments. Instruments are specified as byte strings. All operations on the graph are initiated by sending an OSC message over a network socket. OSC messages are timestamped using the Network Time Protocol (NTP). Operations that are not atomic reply to the client when the operation completes. UGens are loaded dynamically when the system boots and can be written by users. The SC3 language interpreter, sclang, implements the same SmallTalk dialect provided by SC2. SC3 is efficient, well designed and well implemented.
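By way of illustration, the sketch below drives this protocol from Scheme. The /status and /s_new commands belong to the scsynth command set; the connection and messaging procedures (udp:open, message, send, wait) are assumed here to follow the sosc library that rsc3 builds on, so this shows the shape of the exchange rather than a definitive client.

    (import (rsc3) (sosc)) ; assumed library names

    ;; Open a UDP socket to a scsynth server at its default port.
    (define fd (udp:open "127.0.0.1" 57110))

    ;; /status is atomic; scsynth answers with /status.reply.
    (send fd (message "/status" '()))
    (wait fd "/status.reply")

    ;; /s_new instantiates an instrument from a named synth
    ;; definition (here the "sin" definition of Section 7):
    ;; name, node id, add action (1 = add to tail), target group.
    (send fd (message "/s_new" (list "sin" 1001 1 1)))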
5. Music-N, SC3 and Moore's Law

Earlier systems had provided high-level languages for music signal processing by targeting Music-N systems; Common Lisp Music (CLM) [11] is one instance of this. The most significant contribution of SC is to real-time musical signal processing. Music-N systems were designed as acoustical compilers at a time when works were submitted to computer administrators on punch cards and the output tapes were sent to a digital-to-analogue converter that rendered analogue tapes offline. Although traditional Music-N systems have been progressively adapted for real-time environments, the basic architectures are not properly dynamic. SuperCollider was initially designed as a high-level language interpreter for real-time music signal processing. Correct dynamic behavior of the signal processing system requires:

1. graphs of instruments,
2. dynamic insertion and deletion of instruments into these graphs (this requires real-time constraints on UGen instantiation as well as on runtime operation),
3. dynamic audio and control signal routing and rerouting (global audio and control signal paths).

Real-time systems adapt well to offline compilation use; the reverse does not hold.

6. Scheme

As this paper is addressed to Schemers, this section will be terser still. Scheme is a good working environment for music composition: it is simple, dynamic, fast and well supported.

7. rsc3

rsc3 is an R6RS [12] library that facilitates using Scheme as a client to the SC3 synthesis server. rsc3 is a client of the SC3 synthesis server in the same sense that sclang is; where appropriate rsc3 provides a similar interface layer and uses the same or similar names, and is therefore a derivative work of SC3. The rsc3 core implements:

(1) The OSC protocol: a bytecode generator and parser for the subset of the OSC protocol used by SC3.
(2) SC3 synth definition management: a bytecode generator and parser, implementations for all UGens distributed with SC3, and SC3-type input replication (multiple channel expansion), sketched after this list.
(3) An Emacs [13] mode, with rsc3 and SC3 session management, expression evaluation, textual rewriting for evaluation, graph drawing, and symbol lookup of rsc3 source and help files.
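As a small illustration of item (2), a list-valued input replicates a UGen across channels, mirroring SC3's multiple channel expansion. The sketch assumes the mce2 and audition procedures as they appear in the rsc3 example files; exact names may differ across revisions.

    (import (rsc3))

    ;; A two-element frequency input expands sin-osc into a
    ;; two-channel graph, as the array [440, 441] would in SC3.
    (audition
     (out 0 (mul (sin-osc ar (mce2 440 441) 0) 0.1)))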
Synthdef("sin", { arg f=440, a=0.1; Out.ar(0, SinOsc.ar(f, 0) * a); }) (synthdef "sin" (letc ((f 440) (a 0.1)) (out 0 (mul (sin-osc ar f 0) a)))) rsc3 provides a moderate set of procedures related to audio signal processing and musical composition. Using modern scheme systems thread latency is adequate for most musical work and GC stop times are reasonable though not ideal. The rsc3 source repository is available from: http://rd.slavepianos.org/. 8. Examples A series of examples demonstrate: the Emacs mode, partial UGen graphs, graph drawing, the dissasembler, the UTC and tempo schedulers, the widget set and control data integration. References [1] M. V. Mathews. Computer Program to Generate Acoustic Signals. Journal of the Acoustical Society of America, 32:1493, 1960. [2] M. V. Mathews. The Technology of Computer Music. MIT Press, Cambridge, MA, 1969. [3] James McCartney. SuperCollider: a new real time synthesis language. In Proceedings of the International Computer Music Conference. International Computer Music Association, 1996. [4] James McCartney. Continued evolution of the SuperCollider real time synthesis environment. In Proceedings of the International Computer Music Conference, pages 133{136. International Computer Music Association, 1998. 3 [5] James McCartney. A New, Flexible Framework for Audio and Image Synthesis. In Proceedings of the International Computer Music Conference, pages 258{261. International Computer Music Association, 2000. [6] F. R. Moore. The Lucasfilm digital audio facility. W. Kaufmann, Los Altos, CA, 1985. [7] Miller Puckette. The Patcher. In Proceedings of the International Computer Music Confer- ence, pages 420{429, San Francisco, 1988. Proceedings of the International Computer Music Conference. [8] Miller Puckette. Combining Event and Signal Processing in the Max Graphical Programming Environment. Computer Music Journal, 15(3):68{77, 1991. [9] Miller Puckette. Pure Data. In Proceedings of the International Computer Music Conference, pages 224{227. International Computer Music Association, 1997. [10] Eric Scheirer. SAOL: The MPEG-4 structured audio orchestra language. In Proceedings of the International Computer Music Conference, pages 432{438, 1998. [11] William Schottstaedt. Machine Tongues XVII: CLM - Music V meets Common Lisp. Com- puter Music Journal, 18(2), 1994. [12] Michael Sperber, R. Kent Dybvig, Matthew Flatt, Anton Van Straaten, Robby Findler, and Jacob Matthews. Revised6 report on the algorithmic language scheme. J. Funct. Program., 19(S1):1{301, 2009. [13] Richard Stallman. EMACS: The Extensible, Customizable, Self Documenting Display Editor. Symposium on Text Manipulation, pages 147{156, 1981. [14] Barry Vercoe. Reference Manual for the Music 11 Sound Synthesis Language. MIT Electronic Music Studio, Cambridge, MA, 1979. [15] Barry Vercoe. Csound: A manual for the audio processing system. MIT Media Lab, Cam- bridge, MA, 1985. Revised 1996. [16] Matthew Wright and Adrian Freed. Open Sound Control: A New Protocol for Communicating with Sound Synthesizers. In Proceedings of the International Computer Music Conference, pages 101{104.