The SndObj Sound Object Library

VICTOR E. P. LAZZARINI
Department of Music, National University of Ireland, Maynooth, Co. Kildare, Ireland
E-mail: [email protected]

The SndObj Sound Object Library is a C++ object-oriented audio processing framework and toolkit. This article initially examines some of the currently available sound processing packages, including sound compilers, programming libraries and toolkits. It reviews the processes involved in the use of these systems and their strengths and weaknesses. Some application examples are also provided. In this context, the SndObj library is presented and its components are discussed in detail. The article shows the library as composed of a set of classes that encapsulate all processes involved in synthesis, processing and IO operations. Programming examples are included to show some uses of the system. These, together with library binaries, source code and complete documentation, are included in the downloadable package, available on the Internet. Possible future developments and applications are considered. The library is demonstrated to provide a useful base for research and development of audio processing software.

Organised Sound 5(1): 35-49. © 2000 Cambridge University Press. Printed in the United Kingdom. https://doi.org/10.1017/S1355771800001060

1. BACKGROUND: SOUND COMPILERS, LIBRARIES AND TOOLKITS

Since the development, by Max Mathews, of the MUSIC series of sound synthesis programs (Mathews 1960, 1961) in the early 1960s, several programs for the synthesis and processing of audio on a general-purpose computer have been developed. They were mostly based, directly or indirectly, on the MUSIC IV model. Programs such as these are sometimes referred to as Computer Music Languages, Acoustic Compilers or Sound Compilers. Some of them were developed for specific hardware and are now obsolete, whereas others, which enjoyed greater success, were developed under high-level languages, like MUSIC V (Mathews 1969), written in Fortran. Among these, cmusic (Moore 1990) and csound (Vercoe and Piche 1997), written in C, are two important examples of sound compilers.

Csound is one of the most interesting sound processing systems available today, not only in terms of widespread use and portability, but also because of its continuous expansion and support by a large development community. Other MUSIC-derived sound compilers still in use include clm (Schottstaedt 1992), which runs in a Common Lisp environment, and cmix (Lansky 1990), which consists of a command parser and a library of C sound-manipulating functions. In fact, both clm and cmix can be considered hybrids of a sound compiler and a programming library, the former using Common Lisp and the latter using C as the implementation language.

Programming libraries constitute a lower level in computer music practice, when compared to sound compilers. The context in which they appear is more general, and consequently their use is more complex. This can be easily demonstrated by the differences in coding a similar instrument under csound and cmix (or clm), which will be shown later. It involves not only knowledge of the syntax of a particular language, but also the grasp of concepts not necessarily needed when using a sound compiler. For instance, with cmix, the user has to understand variable declaration, memory allocation, header files, object code linking and other C-related concepts in order to build a sound processing or synthesis 'instrument'. The advantage is that, in this case, the programmer has more control over the processes involved in sound manipulation and can design applications that better suit his/her needs. Apart from the mentioned use of libraries as part of packages such as clm and cmix, there are several other sound-manipulation programming libraries which were developed for various applications. As examples, two of these can be mentioned: SndLib, developed at CCRMA for cross-platform sound input/output, and Sfsys, developed by the CDP (Atkins et al. 1987), originally for their Atari-based sound filing system and then ported to other platforms (PC and UNIX).

Another related type of programming software library is the toolkit, normally associated with the object-oriented paradigm. Toolkit objects are normally used in programs by direct reference and, sometimes, by composition. In general, their use is more straightforward than that of standard procedural- or modular-programming language libraries. Examples of toolkits are the NeXT-based Music and SoundKit (Jaffe and Boynton 1991), written in Objective-C, and the Synthesis Toolkit (Cook 1996), developed by Perry Cook in C++. The SndObj library (Lazzarini and Accorsi 1998), although not designed to be used solely as a toolkit, is also an example.

This section will examine in detail some aspects of sound compilers and libraries. First, csound is introduced as an example of a sound compiler, followed by a small programming example. It is then compared with two library-compiler hybrids, cmix and clm. Examples of similar code for these systems are also shown. This is followed by an overview of the object-oriented programming paradigm. Completing the section, two sound manipulation toolkits are discussed, the Sound/MusicKit and the Synthesis Toolkit.

1.1. Csound, cmix and clm

Csound was originally developed by Barry Vercoe at MIT, but it has been updated and expanded by several contributors, this author included. Cmix is the product of the work of Paul Lansky and others at Princeton University, and clm originated at Stanford, mainly as a result of Bill Schottstaedt's research. These packages are very different from each other, except for the fact that they have common predecessors, the MUSIC family of programs, and similar applications. The main difference between these packages is the user interface. Csound basically uses two text files of coded information as its input: an 'orchestra' file, with signal-processing definitions organised in terms of 'instruments', and a 'score' file, with performance instructions for the 'instruments'. These two files are given as arguments to the csound command, which compiles the audio. Cmix uses only one 'score' file; the 'instruments' are C-coded and pre-compiled as part of the library. The score file is passed to the cmix command (or directly to the instrument command if you are using only one instrument), which calls the C programs responsible for the sound processing. In fact, the score file is not strictly required, because cmix works by interpreting commands that are passed to it. These commands are written in cmix's parsing language, MINC, which is very similar in structure to C. Clm works in a Common Lisp environment, which is interpreter based. Calls to clm 'instruments' are interpreted similarly to any lisp function. Prior to its use, a clm instrument must be compiled and loaded. A 'score' file, based on lisp code, can be written to perform all the necessary calls to generate audio. This file can be loaded like any other file and the lisp interpreter will compile the sound.

Csound scores and orchestras are written using a very straightforward syntax. The orchestra file is normally divided into two types of statements: header and instrument block. The use of a header is not compulsory; in the case of its absence, default parameters are used. The instrument block statements are preceded by the code instr instrument_number, and the instrument block is closed by endin. The general form of an instrument block statement in csound syntax can be defined as follows:

    output_var ugen input_args

where output_var is any type of output variable (a, k or i) and input_args is any number of input arguments to the unit generator ugen. The output of a csound instrument is defined by the code:

    out asignal

which can be thought of as a ugen without an output variable. The input argument is a variable of the type a, which is used to hold audio signals (actually a vector). A simple sine-wave csound instrument is shown below:

    instr 1
    asig oscil p4, p5, 1
    out asig
    endin

The arguments to oscil are amplitude, frequency in Hz and function table number (which stores a sine-wave shape). The variables of the type p, p4 and p5, and the function tables are defined in the score file. This will have a number of f statements, which define the function tables used in the synthesis process, and a number of i statements, which define calls to instruments to generate/process audio. For example,

    f1 0 1024 10 1
    i1 0 2 16000 440

is a score that will create a sine-wave function table, defined as number 1, and call instrument 1 to generate a 440 Hz signal, with a peak amplitude of 16,000, from 0 to 2 seconds. The command

    csound -odevaudio sine.orc sine.sco

will play that sound in real time (sine.orc and sine.sco being the orchestra and score files with the code shown above).

Cmix uses a different principle. Instruments are defined as commands called by the cmix environment, using MINC syntax (Garton 1994). A C compiler is necessary, since instruments are coded as C functions and linked to the main cmix libraries and the MINC user-data parsing language. They make use of some cmix C functions, such as setnote(), endnote() and ADDOUT(), which provide basic soundfile and housekeeping functions. Unit generators are also provided, in the form of C library functions which can be employed in a user-defined instrument. The instrument designer has to provide a more detailed processing algorithm, including the necessary initialisation routines and a processing loop. The example below shows a simple cmix instrument which generates a simple audio signal:

    double sine(float *p, int n_args)
    {
        /* p[0] start, p[1] dur, p[2] amp, p[3] freq, p[4] function table */
        int n, durs, length, outfile;
        float fr, sr, amp, ndx, *wavetable,
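At the heart of a ugen such as oscil is a table-lookup oscillator: it scans a stored waveform table at an increment proportional to the requested frequency, filling an output vector one block at a time. The following C++ sketch illustrates this classic technique; it is only a minimal illustration, and the class name, block handling and truncating lookup are assumptions for the example, not the actual internals of csound, cmix or the SndObj library.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Illustrative table-lookup oscillator (not actual csound/cmix/SndObj code).
class TableOscillator {
  std::vector<float> table;   // one cycle of the waveform
  double phase = 0.0;         // current (fractional) table index
  double sr;                  // sampling rate in Hz
public:
  TableOscillator(std::size_t size, double samplerate)
      : table(size), sr(samplerate) {
    const double two_pi = 6.283185307179586;
    for (std::size_t i = 0; i < size; ++i)   // fill with one sine cycle
      table[i] = static_cast<float>(std::sin(two_pi * i / size));
  }
  // Fill one output block, as a ugen does for each a-type vector.
  void process(float *out, std::size_t n, float amp, float freq) {
    double incr = freq * table.size() / sr;  // table positions per sample
    for (std::size_t i = 0; i < n; ++i) {
      out[i] = amp * table[static_cast<std::size_t>(phase)]; // truncating lookup
      phase += incr;
      if (phase >= table.size()) phase -= table.size();      // wrap around
    }
  }
};
```

With the values of the score example above (a 1,024-point table, 440 Hz, a 44,100 Hz sampling rate), the phase advances roughly 10.2 table positions per output sample; a production oscillator would normally interpolate between adjacent table entries rather than truncate.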
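The toolkit style of use described earlier, where objects are used by direct reference and by composition, can be made concrete with a small C++ sketch. The class names below are invented purely for illustration and do not belong to the SndObj, STK or any other toolkit's actual API; the point is only the pattern of one processing object holding a reference to another, forming a chain.

```cpp
#include <cassert>

// Hypothetical toolkit-style classes; all names are illustrative only.
class SigGen {                        // abstract base for signal generators
public:
  virtual float tick() = 0;           // produce one output sample
  virtual ~SigGen() {}
};

class Ramp : public SigGen {          // trivial generator: a rising ramp
  float value = 0.0f, step;
public:
  explicit Ramp(float s) : step(s) {}
  float tick() override { float v = value; value += step; return v; }
};

class Gain : public SigGen {          // composition: holds another SigGen
  SigGen &input;
  float amp;
public:
  Gain(SigGen &in, float a) : input(in), amp(a) {}
  float tick() override { return amp * input.tick(); }
};
```

A user program simply declares the objects and connects them at construction time, e.g. Ramp r(0.5f); Gain g(r, 2.0f);, after which each call to g.tick() pulls a sample through the whole chain. This is the directness that makes toolkit use simpler than assembling the equivalent program from a procedural library.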