A Tutorial on Spectral Sound Processing Using Max/MSP and Jitter

Jean-François Charles
1 Route de Plampéry, 74470 Vailly, France
[email protected]

For computer musicians, sound processing in the frequency domain is an important and widely used technique. Two frequency-domain tools of particular importance to the composer are the phase vocoder and the sonogram. The phase vocoder, an analysis-resynthesis tool based on a sequence of overlapping short-time Fourier transforms, helps perform a variety of sound modifications, from time stretching to arbitrary control of the distribution of energy through frequency space. The sonogram, a graphical representation of a sound's spectrum, offers composers more readable frequency information than a time-domain waveform.

Such tools make graphical sound synthesis convenient. A history of graphical sound synthesis is beyond the scope of this article, but a few important figures include Evgeny Murzin, Percy Grainger, and Iannis Xenakis. In 1938, Evgeny Murzin invented a system to generate sound from a visible image; the design, based on the photo-optic sound technique used in cinematography, was implemented as the ANS synthesizer in 1958 (Kreichi 1995). Percy Grainger was also a pioneer with the "Free Music Machine" that he designed and built with Burnett Cross in 1952 (Lewis 1991); the device was able to generate sound from a drawing of an evolving pitch and amplitude. In 1977, Iannis Xenakis and associates built on these ideas when they created the famous UPIC (Unité Polyagogique Informatique du CEMAMu; Marino, Serra, and Raczinski 1993).

This article is intended as both a presentation of the potential of manipulating spectral sound data as matrices and a tutorial for musicians who want to implement such effects in the Max/MSP/Jitter environment. Throughout the article, I consider spectral analysis and synthesis as realized by the Fast Fourier Transform (FFT) and inverse-FFT algorithms. I assume a familiarity with the FFT (Roads 1995) and the phase vocoder (Dolson 1986). To make the most of the examples, a familiarity with the Max/MSP environment is necessary, and a basic knowledge of the Jitter extension may be helpful.

I begin with a survey of the software currently available for working in this domain. I then show some improvements to the traditional phase vocoder, used in both real time and performance time. (Whereas real-time treatments are applied to a live sound stream, performance-time treatments are transformations of sound files that are generated during a performance.) Finally, I present extensions to the popular real-time spectral processing method known as the freeze, to demonstrate that matrix processing can be useful in the context of real-time effects.

Spectral Sound Processing with Graphical Interaction

In this article, I explore the domain of graphical spectral analysis and synthesis in real-time situations. The technology has evolved so that now, not only can the phase vocoder perform analysis and synthesis in real time, but composers also have access to a new conceptual approach: spectrum modifications considered as graphical processing.
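Before surveying the available tools, it may help to restate the analysis-resynthesis cycle in compact form. The following sketch, written in Python with NumPy rather than in the Max/MSP environment used throughout this article, implements a bare identity phase vocoder: overlapping, windowed FFT frames followed by inverse FFTs and overlap-add resynthesis. The window type, FFT size, and hop size are arbitrary illustrative choices, not values prescribed by the article.

```python
import numpy as np

def stft(x, n_fft=1024, hop=256):
    # Analysis: a sequence of overlapping, Hann-windowed FFT frames.
    win = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    return np.array([np.fft.rfft(win * x[i * hop : i * hop + n_fft])
                     for i in range(n_frames)])

def istft(frames, n_fft=1024, hop=256):
    # Resynthesis: inverse FFT of each frame, then windowed overlap-add.
    win = np.hanning(n_fft)
    y = np.zeros(hop * (len(frames) - 1) + n_fft)
    norm = np.zeros_like(y)
    for i, frame in enumerate(frames):
        y[i * hop : i * hop + n_fft] += win * np.fft.irfft(frame, n_fft)
        norm[i * hop : i * hop + n_fft] += win ** 2
    return y / np.maximum(norm, 1e-12)

# Identity round trip: y approximates x (up to edge effects).
sr = 44100
x = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
y = istft(stft(x))
```

Every transformation discussed below happens between these two stages, by modifying the grid of analysis frames before resynthesis.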
Several dedicated software products enable graphic rendering and/or editing of sounds through their sonograms. They generally do not work in real time, because until a few years ago, real-time processing of complete spectral data was not possible on computers accessible to individual musicians. This calculation limitation led to the development of objects like IRCAM's Max/MSP external iana~, which reduces spectral data to a set of useful descriptors (Todoroff, Daubresse, and Fineberg 1995). Nevertheless, the underlying matrix representation is still intimidating to many musicians. Consequently, the musical potential of this technique is as yet unfulfilled. After a quick survey of the current limitations of non-real-time software, we review the environments allowing FFT processing and visualization in real time.

Non-Real-Time Tools

AudioSculpt, a program developed by IRCAM, is characterized by the high precision it offers, as well as the possibility of customizing advanced parameters of the FFT analysis (Bogaards, Röbel, and Rodet 2004). For instance, the user can set the analysis window size to a value different from the FFT size. Three automatic segmentation methods are provided and enable high-quality time stretching with transient preservation. Other important functions are frequency-bin-independent dynamics processing (useful for noise removal, for instance) and the application of filters drawn on the sonogram. Advanced algorithms are available, including fundamental frequency estimation, calculation of spectral envelopes, and partial tracking. Synthesis may be realized in real time with sequenced parameter values; these cannot, however, be altered "on the fly." The application, with a graphical user interface, runs on Mac OS X.

MetaSynth (Wenger and Spiegel 2005) applies graphical effects to FFT representations of sounds before resynthesis. The user can apply graphical filters to sonograms, including displacement maps, zooming, and blurring. A set of operations involving two sonograms is also available: blend, fade in/out, multiply, and bin-wise cross-synthesis. It is also possible to use formant filters. The pitfall is that MetaSynth does not give control of the phase information produced by the FFT: phases are randomized, and the user is given a choice among several modes of randomization. Although this might produce interesting sounds, it does not offer the most comprehensive range of possibilities for resynthesis.

There are too many other programs in this vein to list them all, and more are being developed each year. For instance, Spear (Klingbeil 2005) focuses on partial analysis, SoundHack (Polansky and Erbe 1996) has interesting algorithms for spectral mutation, and Praat (Boersma and Weenink 2006) is dedicated to speech processing. The latter two offer no interactive graphical control, though.

Real-Time Environments

Pure Data and Max/MSP are two environments widely used by artists, composers, and researchers to process sound in real time. Both enable work in the spectral domain via FFT analysis/resynthesis. I focus on Max/MSP in this article, primarily because the MSP object pfft~ simplifies the developer's task by clearly separating the time domain (outside the pfft~ patcher) from the frequency domain (inside the patcher). When teaching, I find that this graphical border between the time and frequency domains facilitates a general understanding of spectral processing.

Using the FFT inside Max/MSP may be seen as difficult, because the FFT representation is two-dimensional (i.e., values belong to a time/frequency grid), whereas audio streams and buffers are one-dimensional. Figure 1 shows how data can be stored when recording a spectrum into a stereo buffer.

Figure 1. FFT data recorded in a stereo buffer.

The tension between the two-dimensional nature of the data and the one-dimensional framework makes coding advanced processing patterns (e.g., visualization of a sonogram, evaluation of transients, and non-random modifications of spatial energy in the time/frequency grid) somewhat difficult. An example of treatment in this context is found in the work of Young and Lexer (2003), in which the energy in graphically selected frequency regions is mapped onto synthesis control parameters.
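To make this tension concrete, here is a minimal Python/NumPy sketch of the storage scheme of Figure 1 (standing in for MSP buffer operations, not reproducing them). It assumes one channel holds the real parts and the other the imaginary parts, with frame n occupying samples n*n_bins through (n+1)*n_bins - 1. Recovering a single spectral frame then requires manual index arithmetic; this is precisely the two-dimensional bookkeeping that a matrix representation makes unnecessary.

```python
import numpy as np

def record_spectrum(spec):
    # spec: complex FFT frames, shape (n_frames, n_bins).
    # Flatten the time/frequency grid into the two 1-D "channels" of a
    # stereo buffer: frame n occupies samples n*n_bins .. (n+1)*n_bins - 1.
    left = spec.real.reshape(-1)    # channel 1: real parts
    right = spec.imag.reshape(-1)   # channel 2: imaginary parts
    return left, right

def read_frame(left, right, n, n_bins):
    # Recover frame n from the flat buffers: the 2-D structure is implicit
    # and must be rebuilt by hand with index arithmetic.
    sl = slice(n * n_bins, (n + 1) * n_bins)
    return left[sl] + 1j * right[sl]
```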
The lack of multidimensional data structures within Max/MSP led IRCAM to develop FTM (Schnell et al. 2005), an extension dedicated to multidimensional data and initially part of the jMax project. More specifically, the FTM Gabor objects (Schnell and Schwarz 2005) enable a "generalized granular synthesis," including spectral manipulations. FTM attempts to operate "Faster Than Music" in that the operations are not linked to the audio rate but are done as fast as possible. FTM is intended to be cross-platform and is still under active development. However, the environment is not widely used at the moment, and it is not as easy to learn and use as Max/MSP.

In 2002, Cycling '74 introduced its own multidimensional data extension to Max/MSP: Jitter. This package is primarily used to generate graphics (including video and OpenGL rendering), but more generally it enables the manipulation of matrices inside Max/MSP. Like FTM, it performs operations on matrices as fast as possible. The seamless integration with Max/MSP makes implementation of audio/video links very simple (Jones and Nevile 2005).

Jitter is widely used by video artists. The learning curve for Max users is reasonable, thanks to the number of high-quality tutorials available. The ability to program in JavaScript and Java within Max is also fully available to Jitter objects. Thus, Jitter seemed a future-proof choice for implementing this project.

Figure 2. Phase vocoder design.

Jitter has already been used for spectral processing: first, to record spectral data in a matrix, to transform it (via zoom and rotation), and to play it back via the IFFT; second, in an extension of this first work, where the authors introduce graphical transformations using a transfer matrix (Sedes, Courribet, and Thiebaut 2004). Both use a design similar to the one presented in Figure 2.

This section endeavors to make the phase-vocoder technique accessible; it presents improvements in the resynthesis and introduces other approaches to graphically based transformations.

FFT Data in Matrices

An FFT yields Cartesian values, usually translated to their equivalent polar representation. Working with Cartesian coordinates is sometimes useful for minimizing CPU usage. However, to achieve the widest and most perceptually meaningful range of transformations before resynthesis, we will work with magnitudes and phase differences (i.e., the bin-by-bin differences between phase values in adjacent frames).
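The conversion between the two representations amounts to a pair of elementary array operations, sketched below in Python/NumPy (again, not the Jitter patch itself). The "freeze" at the end, which sustains one analysis frame by repeating its magnitudes and phase increments, is a hypothetical usage example in the spirit of the effect named in the introduction, not the article's own implementation.

```python
import numpy as np

def to_mag_phasediff(spec):
    # spec: complex FFT frames, shape (n_frames, n_bins).
    mag = np.abs(spec)
    phase = np.angle(spec)
    # Bin-by-bin phase differences between adjacent frames;
    # prepending 0 keeps the first frame's absolute phase in row 0.
    dphase = np.diff(phase, axis=0, prepend=0.0)
    return mag, dphase

def from_mag_phasediff(mag, dphase):
    # Re-accumulate the phase increments, then rebuild complex frames.
    phase = np.cumsum(dphase, axis=0)
    return mag * np.exp(1j * phase)

# Hypothetical usage: analyze windowed noise, then "freeze" frame 10
# for 100 frames by repeating its magnitudes and phase increments.
spec = np.fft.rfft(np.random.randn(64, 1024) * np.hanning(1024), axis=1)
mag, dphase = to_mag_phasediff(spec)
frozen = from_mag_phasediff(np.tile(mag[10:11], (100, 1)),
                            np.tile(dphase[10:11], (100, 1)))
```

Because the phase is stored as increments, repeating or interpolating rows of these matrices stretches or sustains a sound without the phase discontinuities that repeating raw frames would cause.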
