The SndObj Sound Object Library

VICTOR E. P. LAZZARINI, Department of Music, National University of Ireland, Maynooth, Co. Kildare, Ireland. E-mail: [email protected]

The SndObj Sound Object Library is a C++ object-oriented audio processing framework and toolkit. This article initially examines some of the currently available sound processing packages, including sound compilers, programming libraries and toolkits. It reviews the processes involved in the use of these systems and their strengths and weaknesses. Some application examples are also provided. In this context, the SndObj library is presented and its components are discussed in detail. The article shows the library as composed of a set of classes that encapsulate all processes involved in synthesis, processing and IO operations. Programming examples are included to show some uses of the system. These, together with library binaries, source code and complete documentation, are included in the downloadable package, available on the Internet. Possible future developments and applications are considered. The library is demonstrated to provide a useful base for research and development of audio processing software.

1. BACKGROUND: SOUND COMPILERS, LIBRARIES AND TOOLKITS

Since the development, by Mathews, of the MUSIC series of sound synthesis programs (Mathews 1960, 1961) in the early 1960s, several programs for synthesis and processing of audio in a general-purpose computer were developed. They were mostly based, directly or indirectly, on the MUSIC IV model. Programs such as these are sometimes referred to as Computer Music Languages, Acoustic Compilers or Sound Compilers. Some of them were developed for specific hardware and are now obsolete, whereas others, which enjoyed greater success, were developed under high-level languages, like MUSIC V (Mathews 1969), written in Fortran. Among these, cmusic (Moore 1990) and csound (Vercoe and Piche 1997), written in C, are two important examples of sound compilers. Csound is one of the most interesting sound processing systems available today, not only in terms of widespread use and portability, but also because of its continuous expansion and support by a large development community. Other MUSIC-derived sound compilers still in use include clm (Schottstaedt 1992), which runs in a Common Lisp environment, and cmix (Lansky 1990), which consists of a command parser and a library of C sound-manipulating functions. In fact, both clm and cmix can be considered hybrids of a sound compiler and a programming library, the former using Common Lisp, and the latter using C as the implementation language.

Programming libraries constitute a lower level, in practice, when compared to sound compilers. The context where they appear is more general, and consequently their use is more complex. This can be easily demonstrated by the differences in coding of a similar instrument under csound and cmix (or clm), which will be shown later. It involves not only the knowledge of the syntax of a particular language, but also the grasp of concepts not necessarily needed when using a sound compiler. For instance, with cmix, the user has to understand variable declaration, memory allocation, header files, object code linking and other C-related concepts in order to build a sound processing or synthesis 'instrument'. The advantage is that, in this case, the programmer has more control over the processes involved in sound manipulation and can design applications that better suit his/her needs. Apart from the mentioned use of libraries as part of packages such as clm and cmix, there are several other sound-manipulation programming libraries which were developed for various applications. As examples, two of these can be mentioned: the SndLib, developed at CCRMA for cross-platform sound input/output, and the Sfsys, developed by the CDP (Atkins et al. 1987), originally for their Atari-based sound filing system and then ported to other platforms (PC and UNIX).

Another related type of programming software library is the toolkit, normally associated with the object-oriented paradigm. Toolkit objects are normally used in programs by direct reference and, sometimes, by composition. In general, their use is more straightforward than that of the standard procedural- or modular-programming language libraries. Examples of toolkits are the NeXT-based Music and SoundKit (Jaffe and Boynton 1991), written in Objective C, and the Synthesis Toolkit (Cook 1996), developed by Perry Cook in C++. The SndObj library (Lazzarini and Accorsi 1998), although not designed to be used solely as a toolkit, is also an example.

This section will examine in detail some aspects of sound compilers and libraries. First, csound is introduced as an example of a sound compiler, followed by a small programming example. It is then compared with two library-compiler hybrids, cmix and clm. Examples of similar code for these systems are also shown. This is followed by an overview of the object-oriented programming paradigm. Completing the section, two sound manipulation toolkits are discussed, the Sound/MusicKit and the Synthesis Toolkit.

Organised Sound 5(1): 35–49. © 2000 Cambridge University Press. Printed in the United Kingdom.


1.1. Csound, cmix and clm

Csound was originally developed by Barry Vercoe at MIT, but it has been updated and expanded by several contributors, this author included. Cmix is the product of the work of Paul Lansky and others at Princeton University, and clm originated at Stanford, mainly as a result of Bill Schottstaedt's research. These packages are very different from each other, except for the fact that they have common predecessors, the MUSIC family of programs, and similar applications. The main difference between these packages is the user interface. Csound uses basically two text files of coded information as its input: an 'orchestra' file, with signal-processing definitions organised in terms of 'instruments', and a 'score' file, with performance instructions for the 'instruments'. These two files are given as arguments to the csound command, which compiles the audio. Cmix uses only one 'score' file; the 'instruments' are C-coded and pre-compiled as part of the library. The score file is passed to the cmix command (or directly to the instrument command if you are using only one instrument), which calls the C programs responsible for the sound processing. In fact, the score file is not strictly required, because cmix works by interpreting commands that are passed to it. These commands are written in cmix's parsing language MINC, which is very similar in structure to C. Clm works in a Common Lisp environment, which is interpreter based. Calls to clm 'instruments' are interpreted similarly to any lisp function. Prior to its use, a clm instrument must be compiled and loaded. A 'score' file, based on lisp code, can be written to perform all the necessary calls to generate audio. This file can be loaded like any other file and the lisp interpreter will compile the sound.

Csound scores and orchestras are written using a very straightforward syntax. The orchestra file is normally divided into two types of statements: header and instrument block. The use of a header is not compulsory; in case of its absence, default parameters are used. The instrument block statements are preceded by the statement instr followed by the instrument number, and the instrument block is closed by endin. The general form of an instrument block statement in csound syntax can be defined as follows:

output_var  ugen  input_args

where output_var is any type of output variable (a, k or i) and input_args is any number of input arguments to the unit generator ugen. A unit generator is a signal processing algorithm (such as an oscillator or envelope generator) which is used as a building block for an instrument. The output of a csound instrument is defined by the code:

out asignal

which can be thought of as a ugen without an output variable. The input argument is a variable of the type a, which is used to hold audio signals (actually a vector). A simple sine-wave csound instrument is shown below:

instr 1
asig   oscil   p4, p5, 1
       out     asig
endin

The arguments to oscil are amplitude, frequency in Hz and function table number (which stores a sine-wave shape). The variables of the type p, p4 and p5, and the function tables are defined in the score file. This will have a number of f statements, which define the function tables used in the synthesis process, and a number of i statements, which define calls to instruments to generate/process audio. For example,

f1 0 1024 10 1
i1 0 2 16000 440

is a score that will create a sine-wave function table, defined as number 1, and call instrument 1 to generate a 440 Hz signal, with a peak amplitude of 16,000, from 0 to 2 seconds. The command

csound -odevaudio sine.orc sine.sco

will play that sound in real time (sine.orc and sine.sco being the orchestra and score files with the code shown above).

Cmix uses a different principle. Instruments are defined as commands called by the cmix environment, using Minc syntax (Garton 1994). A C compiler is necessary, since instruments are coded as C functions and linked to the main cmix libraries and the Minc user-data parsing language. They make use of some cmix C functions, such as setnote(), endnote() and ADDOUT(), which provide basic soundfile and housekeeping functions. Unit generators are also provided, in the form of C library functions which can be employed in a user-defined instrument. The instrument designer has to provide a more detailed processing algorithm, including the necessary initialisation routines and a processing loop. The example below shows a simple cmix instrument which generates a simple audio signal:


double sine(float *p, int n_args)
{
  /* p[0] = start, p[1] = dur, p[2] = amp, p[3] = freq, p[4] = function table */
  int n, durs, length, outfile;
  float fr, sr, amp, ndx, *wavetable, output[1];

  /* initialisation */
  outfile = 1;
  ndx = 0;
  amp = p[2];
  fr = p[3];
  durs = setnote(p[0], p[1], outfile);
  wavetable = floc((int)p[4]);
  length = fsize((int)p[4]);

  /* processing loop */
  for(n = 0; n < durs; n++){
    output[0] = oscil(amp, fr*(length/SR), length, wavetable, &ndx);
    ADDOUT(output, outfile);
  }
  endnote(outfile);
  return(1.);
}

The user-defined instrument is always a function with two arguments (an array of floats, which holds the instrument input parameters, and an int number of parameters), returning a double. The setnote() function sets the initial position of the file read/write pointer and the duration of time for reading or writing to the file. It also returns the number of samples for the specified duration, according to the sampling rate defined in the soundfile header. One of the cmix particularities is that the soundfile header must exist on disk prior to synthesis. The oscil() function is the basic unit generator used by the instrument, an oscillator. Its arguments are amplitude, sampling increment, table length, pointer to table and an index to the current phase position. ADDOUT() reads an array, the length of which is determined by the number of output channels, and writes it to a file. In this case, because the output is mono, the output array is unity length.

After compiling and linking this code, the following cmix command-line, using Minc commands, can be used to invoke it:

cmix
output("out_sfile")
makegen(1,10,1024,1)
sine(1,1,16000,440,1)

These commands would open an output soundfile out_sfile and write 1 second of sine-wave sound to it. They could also be saved as a score file and the standard input redirected from that file. Minc also features C-like control-of-flow structures which can be used in a score file to control the processing done by the different instruments.

As mentioned before, clm works in a Common Lisp environment. It is in fact an extension to that programming language. Clm instruments are similar to lisp functions, although they are not defined by defun, but by definstrument. Nevertheless, the call syntax is the same. An instrument in clm can be created by using unit generators (which are themselves proper lisp functions) and one of the output functions, which adds the instrument output into the current output soundfile. Similarly to cmix, the instrument designer has to provide all the initialisation and processing loop code. The following code shows a sine-wave instrument (generating similar output to the csound and cmix examples shown above):

(definstrument sine (start dur fr amp)
  (let* ((beg (floor (* start sampling-rate)))
         (end (+ beg (floor (* dur sampling-rate))))
         (sinusoidal (make-oscil :frequency fr)))
    (run
      (loop for i from beg to end do
        (outa i (* amp (oscil sinusoidal)))))))

This instrument, sine, uses an oscillator, created by make-oscil, called sinusoidal to generate (dur - start)*sampling-rate samples. These are added to the output file using outa. The run macro wraps the processing portion of the instrument, so that the code can be optimised. This generally involves the use of a C program to perform the processing, in which case a compiler is needed. It generally speeds up the process, since lisp code is too slow for signal processing. The C program is written, compiled and run by the system when the instrument is called. Before it is used, the instrument must be either typed at the lisp listener or compiled and loaded from a file. The basic clm lisp macro used to open a soundfile for writing is with-sound, which receives the output of an instrument (or instruments) and writes it to a file, generally playing it back straight after the process. For example, typing

(with-sound () (sine 0 1 440 .1))

at the lisp listener will make the above-defined instrument generate a 1-second A-440 sine-wave sound, which will be written to the default file, generally test.snd or test.wav. This file will be played back immediately by the system (if playback is enabled). As the whole system is based on the Common Lisp language, any lisp-style control-of-flow is possible. 'Score' files can be created using lisp code, which can be loaded to generate a soundfile.

As has been shown, the design of sound synthesis/processing instruments using csound is somewhat easier and faster than with the other two computer music languages. Its learning curve is also less steep. This simplicity is at the expense of flexibility and programming power. Clm and cmix have indeed a better integration of implementation, orchestra and score languages, which is favoured by some authors (Pope 1993). Also, because they are language/library hybrids, they are more easily extendable. Their use as development tools for DSP applications benefits very much from this fact. On the other hand, both systems need compilers/interpreters for the development of instruments, whereas csound is a complete sound compiler driver.


Another issue concerning the use of sound compilers and libraries is their portability. As mentioned before, the most successful systems are the ones based on portable code. Csound seems to have been ported to most of the popular platforms: UNIX, MacOS and MS-Windows. Cmix and clm, although now somewhat out of the NeXT/Macintosh niche, are not as portable as csound. The clm source at CCRMA is supposed to be portable to Windows, under Clisp. This author's experience is that a lot of editing of the lisp source code is necessary for a successful build under Windows (using the freeware gcc compiler and cygwin32). Cmix is not at all portable to MS-Windows. A truly portable sound processing library would be greatly welcome. One of the goals of the SndObj project is the development of a portable core of sound processing objects, thus answering the need for such tools.

1.2. Object-oriented design

A different solution for the design of sound processing systems and libraries is found under the object-oriented programming paradigm. The definition of object-oriented programming is not very clear cut (Pope 1991), although the concepts it involves are well defined. The first one is that of data abstraction, whereby new data types, called abstract data or user-defined types, or classes, can be designed to behave similarly to built-in ones (Stroustrup 1995). Moreover, they can also include a full set of operations (methods) that can be used to manipulate that particular data type. An abstract data type can be considered as a kind of black box which can model a real-life object. Some elements that compose the characteristics of an object can be hidden, in what is called encapsulation, allowing easier and safer handling of information and processes. Also crucial to the object-oriented paradigm is another mechanism, that of inheritance, which allows abstract data types to pass on their characteristics (and operations) to other types which extend/refine the definition of that type. The inheritance concept allows for trees of classes, which can be very functional and useful for applications such as sound processing systems.

The design of a system using the object-oriented paradigm is very much based on decomposition and composition processes, and on the recognition of likeness and behaviour of objects. Another idea involved in this process is that of the level of reuse or sharing and maintenance. Software based on object-oriented concepts is very much directed towards the reuse of algorithms and data structures. Four basic techniques can be defined for object-oriented design (Pope 1991): composition, refinement, factorisation and abstraction. Composition is basically the reuse of existing classes as instance variables of new classes, composing a new object out of other objects it should contain. Refinement is the use of inheritance to create a new class which has specialised characteristics but is basically similar to its parent class. Factorisation is the description of a class in terms of a hierarchy of aspects, using the inheritance mechanism. Instead of defining a class complete with all its concrete attributes, a hierarchy of classes can be defined, which will factorise the characteristics and behaviour of the original class. Finally, abstraction uses the idea of finding commonalities between concrete objects and defining an abstract class which embodies these common characteristics. This class will then serve as the base from which the others are created.

Object-oriented programming packages can be grouped into two main categories, according to their use: toolkits and frameworks. The former is based on reuse by direct reference and composition. Users will basically employ the existing code by declaring instances of classes and using them to perform the wanted actions. Two examples of this type of package are discussed below. Frameworks are used, in general, by refinement and composition of existing classes. They are based on class hierarchies created by factorisation and abstraction. The most common uses of frameworks are in the development of graphical user interfaces: V (Wampler 1998) is a typical example of a GUI framework. Kyma (Scalletti 1991) is an example of a sound processing system which incorporates a framework architecture in its sound model classes. The SndObj library was designed with both framework and toolkit uses in mind.
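To make these four techniques more concrete, the short C++ sketch below is a generic illustration only (it is not SndObj code, and all names in it are invented for the example): an abstract Generator class captures what signal generators have in common (abstraction), SineOsc refines it through inheritance (refinement), and Tremolo is composed out of two oscillators that it contains (composition).

#include <cmath>

// Abstraction: an abstract class embodying what all generators share.
class Generator {
 public:
  virtual float Tick() = 0;     // produce one sample
  virtual ~Generator() {}
};

// Refinement: a sine oscillator specialises the abstract generator.
class SineOsc : public Generator {
  float phase, incr;
 public:
  SineOsc(float freq, float sr) : phase(0.f), incr(2.f*3.1415926f*freq/sr) {}
  float Tick() { float s = std::sin(phase); phase += incr; return s; }
};

// Composition: a new object built out of other objects it contains.
class Tremolo {
  SineOsc carrier, lfo;         // reuse of existing classes as instance variables
 public:
  Tremolo(float freq, float rate, float sr) : carrier(freq, sr), lfo(rate, sr) {}
  float Tick() { return carrier.Tick() * (0.5f + 0.5f*lfo.Tick()); }
};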


1.3. Sound processing toolkits

The NeXT Music and SoundKits, and the Synthesis Toolkit, are two examples of sound synthesis/processing toolkits. The first two were implemented in Objective C for the NeXT computer using a Motorola DSP 56001 chip. They were designed to provide the software library support for music applications development. They include classes for, amongst other things, 56001-based synthesis/processing, audio recording, playback and editing. The Synthesis Toolkit is a sound synthesis library written in C++, having ports across UNIX and MS Windows platforms. The case for a more portable system is evident here. The MusicKit (and SoundKit) had to endure the fate of being associated with a particular hardware which was not very popular outside some computer music centres (esp. in the USA). On the other hand, the Synthesis Toolkit seems to have a more promising future.

The SoundKit, developed by Lee Boynton, was designed for recording, playback, editing and graphic display of digital audio. It is based on the Sound class, an object which is wrapped around the sound data structure and provides the methods for performing the above-mentioned actions. The sound data structure can contain sampled audio or DSP code images and data streams, depending on whether it has been used for manipulating audio or controlling DSP-based synthesis. In addition to the Sound class, a SoundView class was designed to provide a mechanism for sound graphic display. The MusicKit, developed by David Jaffe, provides the other tools for music application development, including performance, music representation and synthesis. Relevant to this discussion is the synthesis part of the MusicKit. Its model is based on three main classes: SynthElement, SynthPatch and SynthInstrument. SynthElement is an abstract class which has two subclasses: UnitGenerator and SynthData. The former is the base class for all signal processing functions and the latter is used to provide patchcords between UnitGenerator objects. These objects include oscillators, envelopes and other standard signal processing algorithms. The SynthPatch-derived classes are collections of SynthElement objects which define a certain processing configuration. SynthInstrument classes manage the use of SynthPatch classes. The Music and SoundKits model seems very sound, its major weakness being the dependence on a specific hardware/OS.

The Synthesis Toolkit, v.1.0, is a port of algorithms and instrument models developed by its author, Perry Cook, on many different platforms and languages, including the mentioned NeXT MusicKit. The motivations for creating this toolkit, according to the author, are based on a desire for portability and extensibility, taking into account the evolution in efficiency and power of modern CPUs. The entire Synthesis Toolkit is derived from a base class called Object. This class controls the basic behaviour of the system, including the type of IO used (file format, realtime, etc.). Floating-point numbers, either double- or float-precision, are used to represent audio samples. The processing is implemented, in the audio-sample-based unit generator classes, by a fundamental tick() method. This function causes the unit generator to compute one audio sample, returning it as its output. The sound-generating objects, such as oscillators and envelope generators, which act only as sources of audio, implement a tick() that does not take any arguments. Other objects, such as sound output classes, only receive samples, returning void. Other processing objects take audio samples as arguments to tick() and return their output. A LastOut() method, implemented by objects that are sources of audio, can be used to feed multiple sample-consuming objects. An example of a simple sound processing algorithm, given by the author in Cook (1996), is transcribed below:

ExampleClass() {
  envelope = new Envelope;
  waveIn = new RawWvIn("infile.raw");
  filter = new OnePole;
  output = new RawWvOut("outfile.snd");
}

MY_FLOAT ExampleClass::tick(void) {
  return output->tick(envelope->tick() * filter->tick(waveIn->tick()));
}

This algorithm reads an input file, applies a filter and an envelope to it and writes it out to another file. Only the class constructor and the tick() function are shown. This toolkit hosts a great number of synthesis and processing objects, roughly 60 C++ classes, which makes it a very impressive library. The portability and extensibility are also valuable aspects of this system. These features were also included in the design of the SndObj library, along with some other desirable characteristics. The next sections of this paper will introduce and discuss the library in closer detail.

2. THE SOUND OBJECT LIBRARY VERSION 1.0

The initial motivation for the development of this library evolved during work on the software Audio Workshop (Lazzarini 1998). It was observed that the creation of a set of objects for audio signal processing could be very useful in the design of new applications. It could help provide higher-level tools for audio programming and software with a more intuitive user interface. This set of objects would be designed as a toolkit, to be employed directly to build a DSP program (or in a visual patching application, to create an instrument patch), and as a framework, to which developers could add their own specialised code. Programs to carry out specific tasks could be easily developed by connecting the available objects and providing standard control-of-flow. Using derivation and inheritance, new objects could be created from the existing ones, thus leaving the development possibilities open. The framework would thus be a useful tool for research in signal processing, acoustics and psychoacoustics.

The project was based on three basic principles, as explained in the preliminary account of this research work (Lazzarini and Accorsi 1998): (i) encapsulation, (ii) universal patch-ability, and (iii) portability. All the processes involved with production, manipulation and storage of audio data should be encapsulated by the classes. Patching of objects should be unrestricted, as if they were modules in an analogue synthesiser, or unit generators in systems such as the sound compilers examined earlier on in this article. As a final premise, the main processing and input/output code should be portable. This would also allow for machine-dependent specialisation when necessary. The project was developed under C++, which seemed a very good choice, mainly for its C-compatibility and good support for object-oriented programming.


Figure 1. The SndObj library class trees.

The SndObj library was initially developed by this author, assisted by Fernando Accorsi, at the Núcleo de Música Contemporânea and the Department of Computer Science, Universidade Estadual de Londrina, in Brazil. The first versions of the library were built under AIX on an IBM RISC/6000 machine (using the g++ compiler), as well as on a Pentium PC under Windows 95 (using both MS Visual C++ and Cygwin g++). The development work continued at the National University of Ireland, Maynooth. Versions for Solaris (Sparc) and IRIX (MIPSPro) were developed, as well as updated MS Windows versions and, more recently, a Linux one. SndObj library version 1.0 binaries are available for these platforms and it is expected that the source code can be built on any other UNIX platform.

2.1. The class hierarchy

The proposed hierarchy for the library is based on three abstract base classes: SndObj, Table and SndIO. These form the base for the three types of objects that integrate the set: sound processing, mathematical function-table and sound input/output objects, respectively. The SndObj library version 1.0 comprises more than fifty classes (cf. Appendix A) organised in three main class trees. A diagram showing the inheritance relationships between the classes in the library is shown in figure 1. The classes derived from SndObj are involved in the production and manipulation of sound samples. Some of them make use of the Table classes, when some sort of function table is needed. The SndIO tree is dedicated to all the actions involving sound input or output: disk, ADC/DAC, standard IO, etc. These classes use a SndObj as their input. A SndObj-derived SndIn class was designed to receive an input from a SndIO-derived object. This enables audio from input sources to be inserted in the processing chain.

2.2. The SndObj-derived classes

A SndObj object, as modelled for this library, has one output, a sampling rate, an on/off switch and an error code, as shown in figure 2. The output is defined as a pointer to a float location, *m_output. SndObj-derived classes also have a DoProcess() method that carries out all the processing duties, and other methods to access its member variables. The derived classes have an undefined number of inputs (objects and/or offset values). As mentioned before, some of them also rely on maths function-table objects to do their processing. Sound processing objects are designed to be easily interconnected. This is done by passing the address of a SndObj-derived class instance to the object receiving its output, for example:

object2.SetInput(&object1);

Most of the objects can receive one or more SndObj inputs. As an example, oscillators can receive any sound object as their frequency and amplitude inputs. Filters receive SndObjs as their audio input, as well as their frequency and bandwidth inputs. Mixers can receive any number of input objects and mix them together. This ability to easily make patches of processing boxes gives the flexibility necessary for users to create a great number of applications.

SndObj-derived classes in general have two constructors: a default constructor, which builds a bare object, and another one that initialises the object parameters to its input arguments.


Figure 2. The SndObj model.

They also allocate memory space for the output sample and switch the object on by calling the Enable() method. Methods for setting or updating class members are also available for each parameter (as in the above example for setting the input of an object). The DoProcess() method is a virtual function declared in SndObj and implemented differently on each of its derived classes. This function would be equivalent to the tick() method of the Synthesis Toolkit, discussed earlier in this article. It calculates one sample every time it is called and places this sample at the output of the object. If another object is patched to it, its DoProcess() method will look at that output and use it to compute its own output sample. The way patching works in the SndObj library is completely different from the quoted example of the Synthesis Toolkit. The DoProcess() method never accepts any arguments (its input sample, or samples, is/are taken directly from another object). It returns unity, when successful, and null, if some problem has occurred. As DoProcess() calculates one sample every time it is called, it should be placed in a processing loop, after calls to the DoProcess() of its input objects, as shown below:

for(int n = 0; n < end*sampling_rate; n++)
{  // processing loop
  (...)
  object1.DoProcess();   // object1 is patched to
  object2.DoProcess();   // the input of object2
  (...)
}

Other virtual methods declared in SndObj are ErrorMessage(), which returns an error string according to an internal error code, SetSr(), which sets or updates the sampling rate, and a virtual destructor. This enables the destruction mechanism of derived classes to work properly. The SetSr() method is declared virtual because in some cases there is the need to perform object-specific operations after an alteration to the sampling rate.

2.3. Tables

The table classes were developed to supply certain SndObj-derived objects with tabulated mathematical functions. Tables are modelled as wrappers around a floating-point vector. Their basic attribute is a table length, which determines the size of the array. A basic MakeTable() method is implemented in the Table-derived classes. This method is called by their constructors. A GetTable() method provides the basic access to the table itself, returning a pointer to its first location. GetLen() returns the length of the table. A common use for tables is to provide one cycle of a certain waveshape to be continuously sampled by an oscillator. The HarmTable is an example of such an object: it creates any of the following four harmonic waveforms: sine, saw, square or pulse (buzz). It can be used by an oscillator object by passing its location to the constructor or to a SetTable() method:

HarmTable sawtable(1024, 25, SAW);   // saw wave with 25 harmonics
oscillator.SetTable(&sawtable);      // oscillator is an Oscilt or Oscili object

Similar to this is the UsrHarmTable, which allows the user to define the relative amplitude of the individual harmonics. SndTable stores sampled sound, input from a SndIO-derived object. Another common type of function table stores a shape to be used as an amplitude or frequency envelope. The TrisegTable class creates a three-segment line, with the option of logarithmic or linear segments, that can be used by an oscillator to control a parameter of some other sound object. Also, a generalised Hamming window function table is supplied, as the HammingTable object, as well as a polynomial drawing function, PlnTable. Table objects are very useful and can be employed in a variety of ways. New table-derived objects are constantly added to the library implementation.
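As a minimal sketch of the table interface just described, the fragment below creates a HarmTable and scans its contents through GetTable() and GetLen(). It uses only the constructor arguments and methods quoted in this article; the assumption that GetLen() returns an integer count is made for the purpose of the example.

#include "AudioDefs.h"

int main()
{
  HarmTable sinetab(1024, 1, SINE);   // one-harmonic (sine) table, 1,024 points
  float* data = sinetab.GetTable();   // pointer to the first table location

  float peak = 0.f;                   // find the largest value stored in the table
  for(int i = 0; i < sinetab.GetLen(); i++)
    if(data[i] > peak) peak = data[i];

  return 0;
}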


2.4. Input and Output

The SndIO classes are designed to deal with all the input and output services needed by the sound objects. Central to their operation are the Read() and Write() methods. They are used to perform the IO functions, regardless of whether the target is a soundfile, ADC/DAC, computer screen or some other device. The derived classes can fit into any main category of input/output: soundfile (which has derived classes for different formats), DAC/ADC, screen output and MIDI. Some SndIO-derived objects are expected to be platform dependent: the ones that rely on platform-specific features such as realtime IO and graphics.

Output SndIO-derived classes can receive one SndObj input per channel. This patching can be done by the class constructor, or by a SetOutput() method. It is done in a similar way to any other patching operation throughout the library, by passing the address of an object:

output.SetOutput(&sound);   // sound is a SndObj object and output a SndIO object

Conversely, a SndIn object can receive a SndIO-derived input:

SndIn sound(&input, 1);     // input is a SndIO object from which sound is reading channel 1

The Read() and Write() methods are built in such a way as to work transparently in a processing loop with the SndObj DoProcess() methods. Although they do not always read and write one sample at a time, they are designed to behave as if they did. The SndIO buffer can be set, in the class constructor, to any size desired. Depending on certain hardware conditions, this can alter the performance of the read/write operation. A processing loop reading from one input and writing to another is shown below:

for(int n = 0; n < end*sampling_rate; n++)
{
  input.Read();
  sound.DoProcess();   // SndIn object
  (...)                // any processing
  output.Write();
}

At the present version the SndIO hierarchy has five main subclasses: SndFIO, soundfile IO; SndStdIO, which sends and receives samples from the standard IO; SndSgiRT, to perform realtime audio IO on Silicon Graphics; SndWinRT, realtime audio IO on Windows; and SndOssRT, Open Sound System realtime audio IO.

3. PROGRAMMING EXAMPLES

A number of sample applications, in the form of console programs, were developed to demonstrate some uses of the library. These programs can also be used as tutorials and therefore are included in the documentation (Lazzarini 1999). This article will first examine three programming examples which employ SndObj library classes. They should demonstrate the encapsulation, relative user-friendliness and modular aspects of the system. As a final example, this article will explore the steps involved in the development of new SndObj-derived classes.

3.1. A simple program

The first example shows a very simple sine-wave synthesis routine using the SndObj-derived class Oscilt. The Table-derived HarmTable and the SndIO-derived SndWaveO classes are also employed, providing a tabulated sine function and file output services, respectively. The comments (starting with a '//') explain the constructs used in the program.

#include <stdlib.h>
#include "AudioDefs.h"                   // SndObj headers and definitions

int main(int argc, char* argv[])
{
  float dur = (float) atof(argv[2]);     // dur: second command-line argument
  float amp = (float) atof(argv[3]);     // amp: third command-line argument
  float fr  = (float) atof(argv[4]);     // fr: fourth command-line argument

  HarmTable table(1024, 1, SINE);        // sine-wave table
  Oscilt oscil(&table, fr, amp);         // oscillator
  SndWaveO output(argv[1], 1, 16);       // 1-channel, 16-bit RIFF-Wave file output;
                                         // filename: first command-line argument

  output.SetOutput(1, &oscil);           // assign the oscillator to file output channel 1

  for(int n = 0; n < dur*oscil.GetSr(); n++){   // processing loop
    oscil.DoProcess();
    output.Write();
  }
  return 1;
}

This code can be compiled and linked, generating a program called sine. The command-line

sine test.wav 1 16000 440

would generate a RIFF-Wave soundfile named "test.wav" containing 1 second of 440 Hz sinusoidal sound.
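As a sketch only, a build line along the following lines could be used with g++; the include and library paths and the archive name libsndobj are assumptions for illustration, not names specified in this article:

g++ -o sine sine.cpp -I/path/to/SndObj/include -L/path/to/SndObj/lib -lsndobj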


Figure 3. Risset signal flowchart.

3.2. Risset

The next example, a program called risset, is a slightly more complex synthesis application, based on the ingenious Risset design (Lorrain 1980), where nine oscillators are combined to generate a cascading harmonics drone. This is done by the minute differences in frequency of the oscillators, which create a sequence of phase cancellations/sums. This example uses a square wave as the basic oscillator shape. The signal flowchart of this program is shown in figure 3. The program code is shown below (excluding the usage() function):


#include <stdlib.h>
#include "AudioDefs.h"

void usage();

int main(int argc, char* argv[]){

  if(argc != 6){
    usage();          // usage message
    return 0;
  }

  // command-line arguments
  float fr = (float)atof(argv[3]);        // frequency
  float amp = (float)atof(argv[4]);       // amplitude
  float duration = (float)atof(argv[2]);  // duration

  // Envelope breakpoints & function table
  float TSPoints[7] = {.0f, .05f, 1.f, .85f, .8f, .1f, .5f};
  TrisegTable envtable(512, TSPoints, LINEAR);

  // Wavetable object
  HarmTable table1(1024, atoi(argv[5]), SQUARE);

  // truncating oscillator object (envelope)
  Oscilt envoscil(&envtable, 1/duration, 32767);

  // 9 interpolating oscillator objects
  Oscili oscil1(&table1, fr, 0.f, 0, &envoscil);
  Oscili oscil2(&table1, fr-(fr*.03f/110), 0.f, 0, &envoscil);
  Oscili oscil3(&table1, fr-(fr*.06f/110), 0.f, 0, &envoscil);
  Oscili oscil4(&table1, fr-(fr*.09f/110), 0.f, 0, &envoscil);
  Oscili oscil5(&table1, fr-(fr*.12f/110), 0.f, 0, &envoscil);
  Oscili oscil6(&table1, fr+(fr*.03f/110), 0.f, 0, &envoscil);
  Oscili oscil7(&table1, fr+(fr*.06f/110), 0.f, 0, &envoscil);
  Oscili oscil8(&table1, fr+(fr*.09f/110), 0.f, 0, &envoscil);
  Oscili oscil9(&table1, fr+(fr*.12f/110), 0.f, 0, &envoscil);

  // Mixer object
  Mixer mix;
  mix.AddObj(&oscil1);
  mix.AddObj(&oscil2);
  mix.AddObj(&oscil3);
  mix.AddObj(&oscil4);
  mix.AddObj(&oscil5);
  mix.AddObj(&oscil6);
  mix.AddObj(&oscil7);
  mix.AddObj(&oscil8);
  mix.AddObj(&oscil9);

  // Gain attenuation
  Gain gain((amp-15.f), &mix);

  // output to an AIFF-format soundfile
  SndAiffO output(argv[1], 1, 16);
  output.SetOutput(1, &gain);

  // synthesis loop
  unsigned long dur = (unsigned long)(duration*envoscil.GetSr());
  for(unsigned long n = 0; n < dur; n++){
    envoscil.DoProcess();   // envelope
    oscil1.DoProcess();     // oscillators
    oscil2.DoProcess();
    oscil3.DoProcess();
    oscil4.DoProcess();
    oscil5.DoProcess();
    oscil6.DoProcess();
    oscil7.DoProcess();
    oscil8.DoProcess();
    oscil9.DoProcess();
    mix.DoProcess();        // mix
    gain.DoProcess();       // gain attenuation
    output.Write();         // file output
  }
  return 1;
}

The program accepts a command-line of the following format:

risset outfile dur(secs) freq(Hz) amp(dB) no_of_harmonics

It uses a truncating oscillator as an envelope and interpolating oscillators as audio generators. A Mixer object sums the outputs of the oscillators and a Gain object attenuates and controls the overall amplitude of the output. An instance of the SndAiffO class writes to an AIFF-format soundfile.

3.3. A string resonator box

The last example is based on the StringFlt class. This implements the model of a sympathetically vibrating string. The program uses a number of these objects to simulate a box containing a number of strings tuned to different fundamental frequencies. It leaves the number of strings and their tuning for the user to decide. It can be used interactively or its parameters can be defined in a text datafile supplied as an argument to the program. The version shown below demonstrates the realtime input and output, as implemented on Silicon Graphics machines under IRIX 6.5. A signal flowchart for this application is also shown in figure 4.


Figure 4. Streson signal flowchart.

#include <iostream.h>
#include <fstream.h>
#include <stdlib.h>
#include "AudioDefs.h"

void usage();

int main(int argc, char *argv[]){

  int nstrs;
  float fdbgain;
  float gain;
  float dur;
  float sr;
  float* fr;
  SndSgiRTI *input;
  SndSgiRTO *output;

  if(argc == 1) {          // prompt for parameters
    cout << "Enter duration: ";
    cin >> dur;
    cout << "Enter sampling rate: ";
    cin >> sr;
    cout << "Enter number of strings: ";
    cin >> nstrs;
    cout << "Enter feedback gain (0 < fbd < 1): ";
    cin >> fdbgain;
    cout << "Enter gain attenuation(dB): ";
    cin >> gain;
    fr = new float[nstrs];
    for(int i = 0; i < nstrs; i++){
      cout << "string " << i+1 << " frequency: ";
      cin >> fr[i];
    }
  }
  else if(argc == 2) {     // get the parameters from file
    ifstream datafile(argv[1]);
    datafile >> dur;
    datafile >> sr;
    datafile >> nstrs;
    datafile >> fdbgain;
    datafile >> gain;
    fr = new float[nstrs];
    for(int i = 0; i < nstrs; i++)
      datafile >> fr[i];
  }
  else {                   // display usage message
    usage();
    return 0;
  }

  // realtime IO objects
  input = new SndSgiRTI(1, 512, 1024, SHORTSAM, sr);
  output = new SndSgiRTO(1, 512, 1024, SHORTSAM);

  // Sound input, attenuation, string filters
  SndIn sound(input, 1);
  StringFlt* strings = new StringFlt[nstrs];
  Mixer mix;

  // String Filters parameters set-up (the Set...() method names are assumed)
  for(int i = 0; i < nstrs; i++){
    strings[i].SetFreq(fr[i]);
    strings[i].SetFdbgain(fdbgain);
    strings[i].SetInput(&sound);
    mix.AddObj(&strings[i]);
  }
  Gain atten(gain, &mix);            // gain attenuation
  output->SetOutput(1, &atten);

  // processing loop
  unsigned long n, end = (unsigned long)(dur*sr);
  for(n = 0; n < end; n++){
    input->Read();                   // input from ADC
    sound.DoProcess();               // sound input
    for(int i = 0; i < nstrs; i++)
      strings[i].DoProcess();        // string filters
    mix.DoProcess();                 // mixer
    atten.DoProcess();               // gain attenuation
    output->Write();                 // output to DAC
  }

  delete input;     // delete IO objects to clean-up
  delete output;    // the hardware buffers

  return 1;
}

The command line for this program has the form:

streson [datafile]

The program can be activated just by typing its name at the UNIX shell or by supplying a datafile as argument. This is a text (ASCII) file containing the following parameters, each on a new line:

duration
sampling rate
number of strings
feedback gain
gain attenuation (dB)
string frequencies (Hz)

The last parameter will contain as many values as the number of strings.

3.4. Deriving a class from SndObj

In order to develop other classes and extend the library, there are a few simple steps that the user should follow:


(1) Define your SndObj-derived class in a header file ('MyNewClass.h'). Include in the class declaration one DoProcess() and one ErrorMessage() method:

#ifndef MYNEWCLASS_H
#define MYNEWCLASS_H

#include "SndObj.h"

class MyNewClass : public SndObj {
 protected:
  // ...declare here your member variables and protected methods.
  // If you are creating a processing class, define the input as SndObj* input;

 public:
  // ...declare at least two constructors: the default one and another
  // which initialises all parameters to the supplied arguments.
  // Declare Set...() methods for every parameter.

  virtual ~MyNewClass();
  virtual short DoProcess();
  virtual char* ErrorMessage();
};

#endif

(2) Provide the implementation code for all constructors, destructor and methods you defined. Put them in another file ('MyNewClass.cpp'). The constructors should also initialise member variables inherited from SndObj. The sampling rate, m_sr, is normally taken from the input object (using GetSr()). Memory space for the output location *m_output should be properly allocated.

(3) The DoProcess() and ErrorMessage() methods should be implemented as follows:

short MyNewClass :: DoProcess() {
  if(m_enable) {   // the object is switched on
    // ...define all your processing code to work on a single-sample basis.
    // In order to obtain the input signal use input->Output(), which returns
    // the input object's output sample as a float. Assign the result of your
    // processing to *m_output, the location of this object's output sample.
  }
  else *m_output = 0.f;   // the object is switched off
  return 1;
}

char* MyNewClass :: ErrorMessage(){
  char* message;
  switch(m_error) {
    case 0:
      message = "No error.";
      break;
    // ...define here the error messages for the error codes
    // you defined in your methods
  }
  return message;
}

(4) Add the following line to the end of the AudioDefs.h file:

#include "MyNewClass.h"

(5) Compile your class and append it to the library binary. You can do that by adding your class to the supplied Makefile and calling make. First add 'MyNewClass.o' to the end of the EXOBJS list. Then add the following lines at the end of the file:

$(oDir)/MyNewClass.o: MyNewClass.cpp MyNewClass.h SndObj.h
	$(CC) -c $(CFLAGS) -o $@ MyNewClass.cpp

Now the user can employ a newly created MyNewClass as a processing object together with the other library classes.
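As a concrete illustration of steps (1)-(3), the sketch below fills in the template with a hypothetical MyScaler class that multiplies its input signal by a constant factor. It is not part of the library: the class name, the m_gain member and the SetGain() method are invented here, and only the members quoted in this article (input, m_enable, m_error, m_output, m_sr, Enable(), GetSr(), Output(), DoProcess(), ErrorMessage()) are assumed to exist in SndObj.

// MyScaler.h - hypothetical example class, not part of the SndObj distribution
#ifndef MYSCALER_H
#define MYSCALER_H

#include "SndObj.h"

class MyScaler : public SndObj {
 protected:
  SndObj* input;   // audio input object
  float m_gain;    // scaling factor (invented member)

 public:
  MyScaler(){ input = 0; m_gain = 1.f; m_output = new float; *m_output = 0.f; m_error = 0; }
  MyScaler(SndObj* inobj, float gain){
    input = inobj;
    m_gain = gain;
    if(input) m_sr = input->GetSr();   // sampling rate taken from the input object
    m_output = new float;              // step (2): allocate the output sample location
    *m_output = 0.f;                   // (assumes the base class leaves this to the derived class)
    m_error = 0;
    Enable();                          // switch the object on
  }
  void SetGain(float gain){ m_gain = gain; }       // invented setter
  void SetInput(SndObj* inobj){ input = inobj; }
  virtual ~MyScaler(){}
  virtual short DoProcess();
  virtual char* ErrorMessage();
};

#endif

// MyScaler.cpp
short MyScaler::DoProcess(){
  if(m_enable){                                      // the object is switched on
    if(input) *m_output = m_gain * input->Output();  // one sample per call
    else { m_error = 1; return 0; }
  }
  else *m_output = 0.f;                              // the object is switched off
  return 1;
}

char* MyScaler::ErrorMessage(){
  char* message;
  switch(m_error){
    case 1:  message = "MyScaler: no input object connected."; break;
    default: message = "No error.";
  }
  return message;
}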


4. MUSIC AND RESEARCH APPLICATIONS

The Sound Object Library is being developed with three possible main applications in mind: DSP/audio synthesis research, composition and general music use. The library itself is directly useful for the first two. The software applications created with the library can be employed by anyone with some knowledge of computers in music. For the researcher, the library offers many advantages: easy access to sound IO, a set of pre-built signal generators and modifiers, and a stable framework. Composers can use the library as they would use a sound compiler, with the added versatility of an object-oriented design. Finally, since the development curve is very short, applications can be very easily created for general use. Simple examples of these are the previously discussed sample applications and some other GUI-based sound processing programs developed using the library.

The library is designed very much as a workbench for research and composition. While it clearly does not have the variety of generators/modifiers that some sound compilers have, it has a great potential for evolution. By virtue of its object-oriented design, many aspects of a development cycle can be rationalised and improved. For that reason, this library can be used as a useful tool by computer music research groups. In comparison to certain hardware-dependent systems, the library has the advantage of being fully portable across many platforms (and its core elements are portable to any platform with a C++ compiler). This is also very useful since the use of mixed MS-Windows and UNIX environments is widespread in the research community.

The integration of composition and development facilities was the main motivation behind the library project. As mentioned before, the need for this kind of tool came originally from the software development process, but its final application is not limited to it. The use of the library as an aid to electroacoustic music composition has been explored, mainly by this author, in preparing elements for several pieces. Also, the implementation of realtime capabilities makes it possible to use the computer in interactive situations. This can lead to new ways of conceiving the computer as a possible option for electroacoustic instrumentation. For composers with some computer programming skills, the possibility of designing and fine-tuning the sound synthesis/processing routines in relation to compositional needs presents interesting opportunities.

Work on a new piece for live instrument (saxophone) and computer, which illustrates some applications of the library, is under way. For this piece, a special computer instrument is being created. This is a GUI program, which performs realtime processing and synthesis of sound, designed to interact with the live instrument. The library is being used (in conjunction with a commercial GUI framework) to generate this application. When finished, this piece will only depend on an audio-equipped PC-compatible computer and the live instrument. All the sound processing will be controlled by the SndObj-built program. Also, a cross-platform version of the controlling program can be easily built (using a portable GUI framework). Considering the present generation of microcomputers, such a piece should not represent great practical difficulties for performance.

5. FUTURE PROSPECTS

The SndObj library is available for download at the Maynooth Music Technology Laboratory Web site at http://www.may.ie/academic/music/musictec. A number of SndObj-based GUI applications are also available at the same location. The library is currently being used as a research and composition tool in the Department of Music at NUI, Maynooth. This research includes the development of new classes and applications for the library. A complete HTML reference manual was created by this author to help users create their own sound processing applications. This documentation includes the code listings of 14 sample programs designed for synthesis, processing and utility applications (cf. Appendix B). The realtime capabilities of the Silicon Graphics and Windows versions are going to be used as a complement to outboard processing and synthesis equipment in our electronic and recording studio at Maynooth. Windows MME and Open Sound System realtime IO classes were recently added to the library. This extends the library's realtime capabilities to Windows, Linux and UNIX platforms with OSS-compatible audio hardware. Solaris realtime classes are not being considered because of the poor audio support found on Sun hardware.

Several options were considered to give the library a graphical user interface. Preliminary studies were carried out using the V framework (Wampler 1998). This is a cross-platform framework which provides a good set of tools for building user interfaces. The fact that it is implemented in C++ helps its integration with the library. This is a clear advantage over other possibilities using another implementation language, such as Tcl/Tk or Java. A simple sample application, developed using V, is available for developers interested in examining the use of that framework in conjunction with the SndObj library. An interesting possibility which was considered is the design of a graphical patching application which would use the library as its processing engine. This is another exciting prospect for development opened by the research work which generated the SndObj library.

6. CONCLUSION

As an introduction to the subject, this article initially presented some of the currently available systems for computer-generated music. Sound compilers were presented as one option for the development of signal processing applications, together with libraries and toolkits. The SndObj library, version 1.0, was presented as an object-oriented framework and toolkit for audio processing. Its components were discussed in detail and its class hierarchy was explained. It was shown that the library is composed of a set of classes that encapsulate all processes involved in synthesis, processing and IO operations. The library was demonstrated to be a modular and user-friendly object-oriented system. Application examples were provided as an introduction to the design of programs that can make use of the library facilities. A complete reference documentation is available, in HTML format, and the library can be downloaded from the Internet.

Downloaded from https://www.cambridge.org/core. Maynooth University, on 21 Aug 2019 at 09:54:54, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/S1355771800001060 48 Victor E. P. Lazzarini

There are good prospects for the continuous development and expansion of its possibilities and applications.

APPENDIX A

List and description of SndObj Library version 1.0 classes:

SndObj        Abstract base class for all audio processing classes
ADSR          Attack–Decay–Sustain–Release envelope generator/processor
IADSR         Similar to ADSR but including an initial and a final state
Balance       Balances the rms output of one input according to another
Buzz          Discrete summation formula pulse wave generator
DelayLine     Simple delay line
Comb          Comb filter
Allpass       Allpass filter
StringFlt     String resonator combining a feedback delay line with allpass and lowpass filters
Pluck         Karplus–Strong plucked-string sound generator
Vdelay        Variable delay with feedback, feedforward and direct gain controls
Filter        Abstract base class for filter classes
ButtBP        Butterworth band-pass filter
ButtBR        Butterworth band-reject filter
ButtHP        Butterworth high-pass filter
ButtLP        Butterworth low-pass filter
Freson        Fixed 2nd-order resonator
Reson         Variable resonator
Gain          Gain attenuator/booster
Interp        Interpolation between two points
Lookup        Table lookup on behalf of an input index
Lookupi       Interpolating table lookup
Mixer         SndObj mixer class
Oscil         Abstract base class for all oscillator classes
Oscilt        Truncating oscillator
Oscili        Interpolating oscillator
Rand          Random-signal generator
Randh         Sample-and-hold random signal generator
Ring          General purpose multiplier and ring modulator
SndIn         SndIO-based signal input
Unit          Test signal generator
SndIO         Abstract base class for all input/output classes
SndFIO        Abstract base class for all file input/output classes
SndAiff       Abstract base class for AIFF-format classes
SndAiffI      AIFF file input
SndAiffO      AIFF file output
SndRaw        Abstract base class for Raw file classes
SndRawI       Raw file input
SndRawO       Raw file output
SndWave       Abstract base class for RIFF-Wave classes
SndWaveI      RIFF-Wave file input
SndWaveO      RIFF-Wave file output
SndStdIO      Standard input/output class
SndOssRT      Abstract base class for Open Sound System realtime IO
SndOssRTI     Open Sound System realtime input
SndOssRTO     Open Sound System realtime output
SndSgiRT      Abstract base class for Silicon Graphics realtime IO
SndSgiRTI     Silicon Graphics realtime input
SndSgiRTO     Silicon Graphics realtime output
SndWinRT      Abstract base class for Windows MME realtime IO
SndWinRTI     Windows MME realtime input
SndWinRTO     Windows MME realtime output
Table         Abstract base class for all maths function table classes
HarmTable     Harmonic function table
HammingTable  Generalised Hamming window
PlnTable      Polynomial table
SndTable      SndIO-audio input table
TrisegTable   Three-segment table
UsrHarmTable  User-defined harmonic function table
UserTable     User-defined table

APPENDIX B

List and description of basic SndObj-based sample application programs:

beau          Beauchamp cornet emulation using waveshaping and high-pass filter
cutsf         Soundfile segment cutting
cvoc          Channel vocoder using Butterworth filters
cvoc2         Channel vocoder using fixed resonators
flanger       Flanging and phasing effects processor
karplus       Plucked-string emulation
mixsf         Soundfile mixer
risset        Cascading harmonics drone generator
raw2wave      Raw to RIFF-Wave format conversion
schroeder     Reverberator using Schroeder configuration
slicesf       Soundfile splicing
streson       String resonator
wavegen       Simple wave generator
windharp      5-string windharp using string resonators


REFERENCES

Atkins, M., et al. 1987. The Composer's Desktop Project. Proc. of the 1987 Int. Computer Music Conf., pp. 146–50. San Francisco: International Computer Music Association.

Cook, P. 1996. Synthesis Toolkit in C++ Version 1.0. Postscript paper available at http://www.cs.princeton.edu/~prc/STKPaper.ps

Garton, B. 1994. What is Cmix? HTML document available at http://silvertone.princeton.edu/winham/Garton.html

Jaffe, D., and Boynton, L. 1991. An overview of the Sound and Music Kits for the NeXT computer. In S. Pope (ed.) The Well-Tempered Object, pp. 107–18. Cambridge, MA: MIT Press.

Lansky, P. 1990. Cmix Release Notes and Manuals. Princeton: Department of Music, Princeton University.

Lazzarini, V. 1998. A proposed design for an audio processing system. Organised Sound 3(1): 77–84.

Lazzarini, V. 1999. The SndObj Reference Manual. HTML reference available online at http://www.may.ie/academic/music/music/man/SndObj/index.html

Lazzarini, V., and Accorsi, F. 1998. Designing a sound object library. In M. Loureiro (ed.) Proc. of the V Brazilian Computer Music Symp., pp. 95–104. Belo Horizonte: Editora da UFMG.

Lorrain, D. 1980. Inharmonique, Analyse de la Bande de L'Oeuvre de Jean-Claude Risset. Rapports Ircam 26.

Mathews, M. 1960. Computer program to generate acoustic signals. Abstract in Journal of the Acoustical Society of America 32: 1,493.

Mathews, M. 1961. An acoustical compiler for music and psychological stimuli. Bell System Technical Journal 40: 677–94.

Mathews, M. 1969. The Technology of Computer Music. Cambridge, MA: MIT Press.

Moore, F. R. 1990. Elements of Computer Music. Englewood Cliffs, NJ: Prentice-Hall.

Pope, S. 1991. Machine Tongues IX: object-oriented software design. In S. Pope (ed.) The Well-Tempered Object, pp. 32–48. Cambridge, MA: MIT Press.

Pope, S. 1993. Machine Tongues XV: three packages for software sound synthesis. Computer Music Journal 17(2).

Scalletti, C. 1991. The Kyma/Platypus computer music workstation. In S. Pope (ed.) The Well-Tempered Object, pp. 119–40. Cambridge, MA: MIT Press.

Schottstaedt, W. 1992. CLM Manual. HTML reference manual available at http://ccrma-www.stanford.edu/CCRMA/software/CLM/clm-manual/clm.html

Stroustrup, B. 1995. The C++ Programming Language. Reading, MA: Addison-Wesley Publishing Co.

Vercoe, B., and Piche, J. 1997. Csound Manual (version 3.47). HTML reference manual available at ftp://ftp.maths.bath.ac.uk/pub/dream

Wampler, B. 1998. V Reference Manual. HTML document available at http://www.objectcentral.com
