Csound Programming


AIST2010 Lecture 5P

CSOUND

Csound is an environment for sound and music computing:
- "a sound compiler"
- non-interactive, score-driven rendering
- real-time interaction

It descends from Max Mathews's MUSIC I (1957) at Bell Labs:
- the "MUSIC N" series was continued at Bell Labs, Princeton, Stanford, MIT, and elsewhere
- Csound itself was created by Barry Vercoe in 1985 at the MIT Media Lab
- read about the journey: https://120years.net/music-n-max-mathews-usa-1957/

CSOUND FILE STRUCTURE

In the old days Csound required two files:
- an .orc (orchestra) file: how Csound should create sounds
- a .sco (score) file: what sounds Csound should create

Now only one .csd file is needed, with three sections:

    <CsoundSynthesizer>
    ;all code relating to Csound should be encapsulated between
    ;<CsoundSynthesizer> and </CsoundSynthesizer>
    <CsOptions>
    ;this section tells Csound how to interact
    ;with various devices and hardware
    </CsOptions>
    <CsInstruments>
    ;this section contains instrument definitions
    </CsInstruments>
    <CsScore>
    ;this section tells Csound when and how to perform
    ;instruments defined in the CsInstruments section above
    </CsScore>
    </CsoundSynthesizer>

A VERY SIMPLE MUSIC

    <CsoundSynthesizer>
    <CsOptions>
    -odac                  ; real-time output to DAC
    </CsOptions>
    <CsInstruments>
    sr = 44100             ; Sampling rate
    kr = 4410              ; Control rate
    nchnls = 2             ; Number of channels
    0dbfs = 1              ; Max amplitude (full scale 0dB) represented by 1

    instr 1                ; Instrument #1
    aOut oscili p3, p4     ; Opcode oscili with arguments p3, p4; output to aOut
    out aOut               ; Use aOut as the output source
    endin                  ; End of instrument #1
    </CsInstruments>
    <CsScore>
    i1 0 1 100             ; Instrument #1, start time 0s, p3=1, p4=100
    i1 1 1 200             ; Instrument #1, start time 1s, p3=1, p4=200
    i1 2 1 300             ; Instrument #1, start time 2s, p3=1, p4=300
    </CsScore>
    </CsoundSynthesizer>

SAMPLING RATE AND CONTROL RATE

The instruments are driven by the sampling rate:
- a hidden loop repeats the code of the relevant instrument for the duration specified in the score
- sr = 44100

There is also a control rate (k-rate) for data that does not change so rapidly:
- kr = 4410      ;usually 10% of the sampling rate is sufficient
- ksmps = 10     ;alternatively, give the ratio sr/kr directly

Read more about timing and passes here (the slide's figure, "Relationship between sampling rate and control rate", is from the same page):
- http://floss.booktype.pro/csound/a-initialization-and-performance-pass/

VARIABLES

Mainly three types of numerical variables, differentiated by the first character:
- "a"udio variables: one value per sample — or view it as a block of ksmps values per k-time
- "k"ontrol variables: one value per k-time
- "i"nit variables: one value per note (in the score)

Global variables, readable by all instruments, add "g" in front of a/k/i.

    instr 1
    ifreq = 440                ; ifreq only updated once, during init
    kenv linseg 0, 1, 1, 1, 0  ; kenv is a changing value, updated every k-cycle
    aout oscili kenv, ifreq    ; aout is an audio signal: a vector every k-cycle
    out aout
    endin

(The slide shows the waveform produced by the code above.)

INSTRUMENTS AND P-FIELDS

All instruments specified using instr…endin blocks can be called upon in the score:
- different instruments generate sounds by different methods, and receive parameters from the score
- besides numbers, you can also use a "string" to name an instrument

    ; instruments
    instr 1
    aout oscili p4, p5   ;sine
    out aout
    endin
    instr 2
    aout vco2 p4, p5     ;sawtooth
    out aout
    endin

    ; score
    i1 0   1 0.5 440     ; start=0, dur=1, p4=0.5, p5=440
    i2 0.5 2 0.8 220     ; start=0.5, dur=2, p4=0.8, p5=220
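As a mental model of how a score event drives an instrument like instr 1 above (`aout oscili p4, p5`), here is a hypothetical Python sketch — not the Csound engine itself: each i-statement fires the instrument for p3 seconds, and the instrument produces one output sample per sampling period.

```python
import math

SR = 44100  # sampling rate, matching "sr = 44100" in the orchestra header

def oscili_note(amp, freq, dur, sr=SR):
    """Toy stand-in for `aout oscili p4, p5` played for p3 seconds:
    one sine sample per sampling period."""
    nsamps = int(dur * sr)
    return [amp * math.sin(2 * math.pi * freq * i / sr)
            for i in range(nsamps)]

# Score line "i1 0 1 0.5 440": p3 = 1 (duration), p4 = 0.5, p5 = 440
note = oscili_note(amp=0.5, freq=440, dur=1.0)  # 44100 samples
```

In real Csound the loop runs once per k-cycle over blocks of ksmps samples; the per-sample version above is just the simplest way to picture it.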
OPCODES AND OPERATORS

Opcodes are Csound's basic unit generators:
- they are like ordinary functions in other languages, and users can easily define their own in opcode … endop blocks

Two ways to use an opcode, e.g. oscili:

    aout oscili 1, 440      ;the traditional way
    aout = oscili(1, 440)   ;easier for inline use of an opcode

Common operators are available, with the usual operator precedence:

    + - * / ^ %
    = += -= *= /=
    == >= > < <= !=

BUILDING ENVELOPES

Lines can be built for affecting amplitude, frequency, or any value used as an input to an opcode:

    xvar line ia, idur, ib
    ; a line from ia to ib over idur seconds

    xvar linseg i1, idur1, i2, idur2, i3, idur3, …, in
    ; line segments from i1 to in through multiple points, each with its duration

    xvar expseg i1, idur1, i2, idur2, i3, idur3, …, in
    ; exponential segments, similar to linseg

(The slide's figure shows a line through points i1…i5 with durations idur1…idur4, built using linseg.)

You can also use an oscili for building an envelope!

CONDITIONS AND LOOPS

Not common, but they exist. Branching:

    if <condition> then
      …
    else
      …
    endif

Looping using while or until:

    while <condition> do
      …
    od

    until <condition> do
      …
    od

ADDITIVE SYNTHESIS

    instr 1 ; Trumpet
    ifreq = p4
    aout1  oscili .141, ifreq
    aout2  oscili .2  , ifreq*2
    aout3  oscili .141, ifreq*3
    aout4  oscili .112, ifreq*4
    aout5  oscili .079, ifreq*5
    aout6  oscili .056, ifreq*6
    aout7  oscili .05 , ifreq*7
    aout8  oscili .035, ifreq*8
    aout9  oscili .032, ifreq*9
    aout10 oscili .02 , ifreq*10
    out (aout1+aout2+aout3+aout4+aout5\  ;nextline
    +aout6+aout7+aout8+aout9+aout10)/10
    endin

    instr 2 ; Nokia phone
    ifreq = p4
    aout1  oscili .045, ifreq
    aout2  oscili .079, ifreq*2
    aout3  oscili .158, ifreq*3
    aout4  oscili .224, ifreq*4
    aout5  oscili .158, ifreq*5
    aout6  oscili .126, ifreq*6
    aout7  oscili .158, ifreq*7
    aout8  oscili .045, ifreq*8
    aout9  oscili .032, ifreq*9
    aout10 oscili .025, ifreq*10
    out (aout1+aout2+aout3+aout4+aout5\  ;nextline
    +aout6+aout7+aout8+aout9+aout10)/10
    endin

    ; score
    i1 0 1 783.99
    i2 0 1 440
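The trumpet instrument above is nothing more than a weighted sum of sine partials. A minimal Python sketch of that mixture (an illustrative model, not Csound's implementation; the names are hypothetical):

```python
import math

SR = 44100

# The ten oscili amplitudes of the trumpet instrument (instr 1)
TRUMPET = [.141, .2, .141, .112, .079, .056, .05, .035, .032, .02]

def additive(freq, dur, weights, sr=SR):
    """Sum one sine per harmonic (weight k at freq*(k+1)),
    then divide by 10 as the instrument's out line does."""
    out = []
    for i in range(int(dur * sr)):
        t = i / sr
        s = sum(w * math.sin(2 * math.pi * freq * (k + 1) * t)
                for k, w in enumerate(weights))
        out.append(s / 10)
    return out

tone = additive(783.99, 0.1, TRUMPET)  # the G5 note from "i1 0 1 783.99"
```

Since the weights sum to 0.866, dividing by 10 keeps the mixed signal safely inside the 0dbfs = 1 range.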
WAVETABLE SYNTHESIS

Simplifying additive synthesis when the sinusoidal mix is constant: precompute the waveform into a table and read it with a single oscillator.

    ;; a simple instrument
    instr 1
    ifreq = cpsmidinn(p4)    ;convert from midi note number to Hz
    aout oscili 1, ifreq, 1  ;the final 1 selects function table 1
    out aout
    endin

The table is kept in the score. In the f-statement below:
- f1 — function table 1 (any number can be used)
- 0 — start time = 0
- 16384 — number of samples for drawing the waveform
- 10 — GEN routine 10, the sine wave generator
- then 10 amplitudes for the sine waves

In the i-statements, "+" means continue from the previous note, and "." means the same duration as the previous note; MIDI note numbers are used in p4 for convenience, since they can easily be converted.

    <CsScore>
    f1 0 16384 10 .045 .079 .158 .224 .158 .126 .158 .045 .032 .025
    i1 0 .1 88
    i1 + .  86
    i1 + .2 78
    i1 + .  80
    i1 + .1 85
    i1 + .  83
    i1 + .2 74
    i1 + .  76
    i1 + .1 83
    i1 + .  81
    i1 + .2 73
    i1 + .  76
    i1 + .4 81
    </CsScore>

FREQUENCY MODULATION

FM introduces an extra wave onto the frequency input, with controllable parameters.

Basic vibrato:

    ifreq = 440
    kmod oscili 1, 10             ; 10 Hz vibrato, depth 1 Hz
    aout oscili 0.5, ifreq+kmod
    out aout
    ;use kvib for a changing vibrato rate:
    ;kvib linseg 1, 5, 20
    ;kmod oscili 1, kvib

And FM?

    icarfreq = 440
    iratio = 2
    imodfreq = icarfreq*iratio    ; mod freq depends on ratio
    index = 5
    imodamp = imodfreq*index      ; mod amp depends on index
    kmod oscili imodamp, imodfreq
    aout oscili 0.5, icarfreq+kmod
    out aout

CSOUND WITH MIDI

Csound by default maps MIDI channels 1–16 to instr 1–16:
- in Configure >> Run, choose the appropriate MIDI input interface, or use the virtual keyboard

    instr 1
    kfreq cpsmidib 2    ;cpsmidib gets midi note messages with pitch bend
    ; a k-type variable is needed since the frequency may change
    iamp ampmidi 0dbfs  ;ampmidi gets midi velocity, normalized to 0dbfs
    aout oscili iamp, kfreq
    out aout
    endin

An extra line is needed in the score:

    f0 300 ; keep the engine running for 5 minutes, or use any timing you prefer

There is much more in Csound's MIDI support for you to discover!
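The FM patch above can be modeled with a phase accumulator, since feeding `icarfreq+kmod` to an oscillator's frequency input changes its phase increment every sample. A hedged Python sketch (names hypothetical; a simplification of what oscili actually does):

```python
import math

SR = 44100

def fm_note(icarfreq, iratio, index, dur, amp=0.5, sr=SR):
    """Phase-accumulator model of the FM patch above: the modulator's
    output is added to the carrier's frequency input every sample."""
    imodfreq = icarfreq * iratio   # mod freq depends on ratio
    imodamp = imodfreq * index     # mod amp depends on index
    out, phase = [], 0.0
    for i in range(int(dur * sr)):
        kmod = imodamp * math.sin(2 * math.pi * imodfreq * i / sr)
        phase += 2 * math.pi * (icarfreq + kmod) / sr  # instantaneous freq
        out.append(amp * math.sin(phase))
    return out

sig = fm_note(icarfreq=440, iratio=2, index=5, dur=0.05)
```

The ratio sets where the sidebands fall (harmonic when it is a simple integer ratio) and the index sets how many of them carry significant energy.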
HOW TO EASILY NOTATE PITCHES?

(The slide shows a conversion table relating frequency in Hz, MIDI note number, oct and pch values for every note from MIDI 21 to 108, i.e. roughly C1 to C8.)

Csound offers two pitch notations, both anchored at middle C (MIDI 60) = 8.00:
- pch: octave point pitch-class — the fractional part is .00, .01, …, .11 (semitones), e.g. A4 = 8.09
- oct: octave point fraction — the fractional part is 1/12, 2/12, …, 11/12, e.g. A4 = 8.75

Pitch converters:
- octcps(cps)     cps to oct
- octpch(pch)     pch to oct
- cpspch(pch)     pch to cps
- pchoct(oct)     oct to pch
- cpsoct(oct)     oct to cps
- cpsmidinn(nn)   midi# to cps
- pchmidinn(nn)   midi# to pch
- octmidinn(nn)   midi# to oct

(cps = cycles per second = Hz)
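Assuming twelve-tone equal temperament with A4 = MIDI 69 = 440 Hz, the arithmetic behind a few of these converters can be sketched in Python. The function names mirror the Csound opcodes, but these are illustrative stand-ins, not Csound's own implementations:

```python
def cpsmidinn(nn):
    """MIDI note number -> Hz (A4 = nn 69 = 440 Hz)."""
    return 440.0 * 2 ** ((nn - 69) / 12)

def octmidinn(nn):
    """MIDI note number -> oct (middle C = nn 60 = 8.00)."""
    return nn / 12 + 3

def cpspch(pch):
    """pch (octave.semitone) -> Hz, e.g. 8.09 = A4."""
    octave, semis = divmod(round(pch * 100), 100)
    return cpsmidinn(12 * (octave - 3) + semis)

def cpsoct(oct_):
    """oct -> Hz: one octave per unit, 8.75 = A4 = 440 Hz."""
    return 440.0 * 2 ** (oct_ - 8.75)
```

For example, cpspch(8.09), cpsoct(8.75) and cpsmidinn(69) all land on 440 Hz, which matches the A4 row of the slide's table.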
CSOUND API

Csound allows easy collaboration with other programming environments:
- the Csound API (in C)
- Csound on iOS/Android
- Csound in MaxMSP or PureData
- Csound as a VST
- web-based Csound

READ FURTHER

Chapter 3, "Fundamentals", Csound.