Grafting Synthesis Patches Onto Live Musical Instruments

Miller Puckette
University of California, San Diego

ABSTRACT

A new class of techniques is explored for controlling MUSIC N or Max/MSP/Pd instruments directly from the sound of a monophonic instrument (or separately acquired inputs from a polyphonic instrument). The instantaneous phase and magnitude of the input signal are used in place of phasors and envelope generators. Several such instruments are described; this technique is used in SMECK, the author's open-source guitar processing patch.

Figure 1. A MUSIC N instrument: (a) classically implemented; (b) adapted to use an incoming sound's phase and amplitude.

1. INTRODUCTION

One of several possible approaches to real-time expressive control of computer music algorithms is to use the acoustical sound of a live instrument directly as a source of control. This has the advantage of tapping into the skill of a trained musician (rather than requiring the player to learn entirely new skills). Further, using the sound of the instrument is sometimes preferable to using physical captors; the captor approach is more expensive, harder to migrate from one instrument to another, and downright unavailable for some instruments such as the human voice. Also, arguably at least, the sound of the instrument itself is what the musician is focussed on producing, so it is in a sense more direct to use the instrumental sound than data taken from captors.

We will be concerned with the direct manipulation, at the sample level, of almost-periodic signals. These may come from a monophonic pitched instrument or from a polyphonic string instrument with a separated pickup. Rather than analyzing the signal to derive pitch and amplitude envelopes for controlling a synthesizer, we will work directly from the signal itself, extracting its instantaneous phase and magnitude for direct use in a signal processing chain.

2. PRIOR WORK

Much instrument-derived synthesis takes an analysis/synthesis approach, in which time-windowed segments of incoming audio are analyzed for their pitch (such as in guitar synthesizers) and/or spectrum (as in Charles Dodge's Speech Songs).

Algorithms that can act directly on the signals without the intermediary of an analysis step can attain higher robustness and more expressivity (because they can avoid the latency of analysis). These have used the incoming audio signal simultaneously as a processing input and to compute the parameters of the audio effect, such as a delay line [2], or an all-pass filter or frequency shifter [3]. In these approaches, audio signal processing (at a low latency) is parametrized by (higher-latency) analysis results: for instance, filtering or modulating instrumental sound depending on its measured pitch [5]. The approach adopted here [7] is similar to, but generalizes, the frequency shifting approach.

3. GENERAL TECHNIQUE

Our strategy is to extract time-dependent phase and amplitude signals from a quasi-periodic waveform, which can then be plugged into a wide variety of classical or novel MUSIC-style instruments. For example, the simple instrument shown in Figure 1 (part a), an amplitude-controlled wavetable oscillator, is transformed into the instrument of part (b). Whereas the instrument of part (a) has frequency, start time, and duration as parameters, that of part (b) has an audio signal as input, coming from an external instrumental source. The phase and amplitude are computed from the incoming audio signal.

To extract the phase and magnitude, the input signal is first converted into a pair of signals in phase quadrature using an appropriately designed pair of all-pass filters, labeled "Hilbert" in the figure. Denoting these by x[n] and y[n], the magnitude and phase are computed as

    m[n] = sqrt( x[n]^2 + y[n]^2 )

    φ[n] = arctan( y[n] / x[n] )

If the incoming signal is a sinusoid of constant magnitude m and frequency f, then the calculation of m[n] retrieves the value m, and φ[n] is a phase signal (a sawtooth) of frequency f. If the input signal contains more than one sinusoidal component, the result is more complicated, but if one sinusoidal component is dominant then the effect of the others is simultaneously to modulate the signals m[n] and φ[n]; i.e., they are heard as amplitude and phase modulation [7].

This is illustrated in Figure 2, which shows the phase and magnitude measured from a synthetic input consisting of first and ninth partials with amplitudes in the ratio four to one. At top, the two phase-quadrature outputs of the "Hilbert" filters are graphed on the horizontal and vertical axes. As shown underneath, the measured magnitude and phase are, to first order, those of the fundamental, modulated sinusoidally at eight times the fundamental. (Although the phase signal appears to have jumps, it is continuous provided phases zero and 2π are taken as equivalent.)

Figure 2. Extracted "magnitude" and "phase" of a sum of two sinusoids.

The particular way a patch responds to changes in amplitude in the input signal can be specified at will. For example, the patch of Figure 1(b) respects the amplitude of the incoming signal in two ways. First, the amplitude of the output follows the amplitude of the input, since the table lookup portion depends only on phase, and the amplitude of the input is used linearly to control the amplitude of the output. Second, scaling the input by a constant factor does not affect the waveform but only scales the output correspondingly. Both of these properties are sometimes desirable, but other behaviors are possible and often desirable.

As a special case, if the wavetable in Figure 1(b) is chosen to be a single cosine cycle, the instrument recovers an all-pass-filtered version of the incoming sound:

    u[n] = m[n]·cos(φ[n]) = x[n].

It is thus possible to move continuously and coherently between the nearly unaltered sound of the original instrumental signal and highly modified ones.

4. PHASE MODULATION

Figure 3. Phase modulation. The parameter k is a fixed modulation index, and s is the sensitivity of modulation index to input amplitude.

As a first example of a useful type of audio modification, the instrument of Figure 3 adds phase modulation to the sound of the incoming instrument. There are two parameters: a fixed index k, and a sensitivity s to allow the index to vary with amplitude. To analyze this we first put s = 0, so that the index is fixed at k. If the incoming sound is a sinusoid, then the amplitude signal is constant and the phase signal is a regular sawtooth, so the result is classic 2-operator phase modulation [1] with index k. If we now allow s to be nonzero, the modulation index becomes amplitude-dependent.

If the input is a near but not exact sinusoid, deviations from sinusoidality will appear as phase modulation on both the carrier and modulation oscillators, and also as amplitude modulation on the output and index of modulation, and, if s ≠ 0, on the index of modulation of the 2-operator system. In any case, the amplitude of the output still follows that of the input faithfully.

5. WAVE PACKETS

One powerful approach to synthesizing pitched sounds is formant synthesis, as in VOSIM [10], CHANT [9], PAF [8, Ch. 6], and so on. Our approach here is to use wave packet synthesis (although the PAF also works well in this context). Figure 4 shows a wave packet series with an overlap factor of two. Each packet is the product of a Hann window with a sinusoid; each sinusoid's phase is zero at the center of its packet. The fundamental frequency is controlled by the spacing of the packets, the formant center frequency is the frequency of the individual "packetted" sinusoids, and the bandwidth is set by the duration of the packets; to keep successive packets coherent in phase, the sinusoid frequency is taken as a multiple of the frequency of the incoming signal.

Figure 4. Formant generation: (a) a formant as an overlapped sum of wave packets; (b, c) decomposition of the sum into realizable parts.

The wave packet formulation of formant synthesis also allows for the use of more complex waveforms in place of the sinusoids, permitting us to generate periodic sounds with arbitrary, time-varying spectral envelopes [6]. To do this, we replace the wraparound tables in the block diagram of Figure 6 with a two-dimensional table, whose rows each act as waveforms to packetize. Collectively, the rows describe a time-varying waveform, whose variation is controlled by a separate control parameter (shown as an envelope generator in the figure). An interesting application is computer-determined scat singing, in which instrumental notes are shaped into scat syllables, chosen to reflect the timing and melodic contour of the instrumental melody.

6. RESULTS

The testing ground for these techniques has been a guitar with a separated pickup (the Roland GK-3). Most of the available Roland gear only provides MIDI or synthesized outputs, but for this work the six raw string signals are needed; they may be obtained via the Keith McMillen StringPort [4]. The six guitar strings are each treated separately according to the algorithms described here. The system has been used by the author in many performances, starting at Off-ICMC 2005. A comprehensive patch with many presets is available both in Max/MSP through the Keith McMillen website, and in Pd from crca.ucsd.edu/~msp.

7. CONCLUSION

This paper has described an exceedingly general technique. Almost any classical MUSIC N-style computer music instrument may be adapted so that one or more of its phase and amplitude controls are replaced by ones obtained from an incoming quasiperiodic instrumental input.
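The phase and magnitude extraction of Section 3 is easy to sketch in code. The following is a minimal Python illustration (the paper's own instruments are Pd/Max patches; the function name here is invented). It assumes the quadrature pair x[n], y[n] is already available, computing it in closed form for a test sinusoid rather than with an actual Hilbert all-pass filter pair:

```python
import math

def phase_magnitude(x, y):
    # m[n] = sqrt(x[n]^2 + y[n]^2);  phi[n] = arctan(y[n]/x[n]), folded to [0, 2*pi)
    m = [math.hypot(a, b) for a, b in zip(x, y)]
    phi = [math.atan2(b, a) % (2 * math.pi) for a, b in zip(x, y)]
    return m, phi

# Closed-form quadrature pair for a sinusoid of amplitude 0.5 at 440 Hz
# (a real Hilbert filter pair would derive these from a single input signal).
SR = 48000.0
x = [0.5 * math.cos(2 * math.pi * 440.0 * i / SR) for i in range(64)]
y = [0.5 * math.sin(2 * math.pi * 440.0 * i / SR) for i in range(64)]
m, phi = phase_magnitude(x, y)
```

For this test input, m[n] is constant at 0.5 and φ[n] advances by 2π·440/48000 per sample, i.e. a sawtooth at the input frequency, as claimed in the text.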
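The instrument of Figure 1(b) then amounts to a phase-driven table lookup scaled by the extracted magnitude. A sketch under the same assumptions (names invented; linear interpolation is an arbitrary choice for the example):

```python
import math

def wavetable_instrument(m, phi, table):
    # Look the extracted phase up in a wavetable (linear interpolation)
    # and scale the result linearly by the extracted magnitude.
    n = len(table)
    out = []
    for mi, p in zip(m, phi):
        idx = (p / (2 * math.pi)) * n       # fractional table index
        i0 = int(idx) % n
        frac = idx - int(idx)
        out.append(mi * ((1 - frac) * table[i0] + frac * table[(i0 + 1) % n]))
    return out

# Special case from Section 3: a single-cycle cosine table recovers
# u[n] = m[n]*cos(phi[n]) = x[n], the (all-pass filtered) input itself.
N = 4096
cos_table = [math.cos(2 * math.pi * i / N) for i in range(N)]

SR = 48000.0
x = [0.5 * math.cos(2 * math.pi * 440.0 * i / SR) for i in range(64)]
y = [0.5 * math.sin(2 * math.pi * 440.0 * i / SR) for i in range(64)]
m = [math.hypot(a, b) for a, b in zip(x, y)]
phi = [math.atan2(b, a) % (2 * math.pi) for a, b in zip(x, y)]
u = wavetable_instrument(m, phi, cos_table)
```

Substituting any other single-cycle waveform for the cosine table gives the "highly modified" end of the continuum the text describes, while keeping the output amplitude locked to the input's.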
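The phase-modulation instrument of Section 4 can be sketched the same way. The operator wiring below, with carrier and modulator both driven by the extracted phase and an effective index of k + s·m[n], is one plausible reading of Figure 3, not a transcription of the actual patch:

```python
import math

def phase_mod(m, phi, k, s):
    # 2-operator phase modulation driven by the extracted phase; the
    # effective index k + s*m[n] becomes amplitude-dependent when s != 0.
    return [mi * math.cos(p + (k + s * mi) * math.cos(p))
            for mi, p in zip(m, phi)]

# Quadrature pair for a 440 Hz test sinusoid of amplitude 0.5.
SR = 48000.0
x = [0.5 * math.cos(2 * math.pi * 440.0 * i / SR) for i in range(32)]
y = [0.5 * math.sin(2 * math.pi * 440.0 * i / SR) for i in range(32)]
m = [math.hypot(a, b) for a, b in zip(x, y)]
phi = [math.atan2(b, a) for a, b in zip(x, y)]

dry = phase_mod(m, phi, 0.0, 0.0)   # k = s = 0: passes the input through
wet = phase_mod(m, phi, 2.0, 0.5)   # fixed index plus amplitude sensitivity
```

With k = s = 0 the output reduces to m[n]·cos(φ[n]) = x[n], consistent with the cosine-table special case of Section 3; raising k adds sidebands at multiples of the input frequency, and s ties their strength to the player's dynamics.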
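The wave packets of Section 5 are straightforward to generate. A sketch assuming a Hann window and an overlap factor of two (packet length twice the packet spacing); the function names and parameter values are illustrative only:

```python
import math

def wave_packet(length, carrier_hz, sr):
    # One packet: a Hann window times a sinusoid whose phase is zero
    # at the center of the packet, as described in the text.
    c = (length - 1) / 2.0
    return [0.5 * (1.0 - math.cos(2.0 * math.pi * i / (length - 1)))
            * math.cos(2.0 * math.pi * carrier_hz * (i - c) / sr)
            for i in range(length)]

def packet_train(spacing, count, length, carrier_hz, sr):
    # Overlap-add `count` packets spaced `spacing` samples apart: the
    # spacing sets the fundamental, the carrier sets the formant center.
    out = [0.0] * (spacing * count + length)
    pk = wave_packet(length, carrier_hz, sr)
    for j in range(count):
        for i, v in enumerate(pk):
            out[j * spacing + i] += v
    return out

pk = wave_packet(513, 3000.0, 48000.0)        # one packet for inspection
train = packet_train(256, 8, 512, 3000.0, 48000.0)
```

Here the spacing of 256 samples at 48 kHz gives a 187.5 Hz fundamental, and the 3000 Hz carrier is its 16th multiple, so successive packets line up in phase; in the live-instrument setting, the packet spacing would instead be slaved to the extracted phase signal.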