
AN INTELLIGENT MULTI-TRACK AUDIO EDITOR

Roger B. Dannenberg
Carnegie Mellon University School of Computer Science

ABSTRACT

Audio editing software allows multi-track recordings to be manipulated by moving notes, correcting pitch, and making other fine adjustments, but this is a tedious process. An "intelligent audio editor" uses a machine-readable score as a specification for the desired performance and automatically makes adjustments to pitch, timing, and dynamic level.

1. INTRODUCTION

Multi-track recording in the studio is a standard production method for a variety of musical styles, including classical, experimental, and popular forms. Multi-track techniques are used in many recording situations, including ensembles that record all tracks at once, ensembles that record tracks in several passes, and even individuals that play different parts in succession.

One of the advantages of multi-track recording is the control it gives after the music is recorded. Balance, equalization, reverberation, and many other parameters can be altered without the time and expense of re-recording. Originally, analog recording allowed individual musicians to correct mistakes quickly by "punching in," i.e., replacing as little as one note of one track with new material. Digital recording has ushered in many more possibilities. Now, individual notes or whole phrases can be shifted in time, stretched, transposed to improve intonation, or even replaced, perhaps using material from the same performance.

As with many technologies, new capabilities work quickly to establish a new set of expectations. Now, virtually every music recording is carefully edited and refined using digital techniques. Ironically, the efficiency and effectiveness of digital editors often results in much more time spent in production than ever before.

Meanwhile, there have been many advances in audio and music analysis, machine listening, audio-to-score alignment, and music understanding. Tasks such as finding notes, aligning them in time, correcting intonation problems, and balancing dynamic levels can be performed automatically, at least in principle, and this is the objective of the present research. In the production version of this work, one starts with audio recordings and a machine-readable score. An automatic process identifies possible problems. Using an interface similar to that of a spelling checker, the user is prompted to audition, accept, or reject each suggested edit. The user can also simply accept all suggestions and listen to the final mix.

While this vision of a highly automated and easy-to-use editing system will require considerable effort to realize, the basic techniques require no fundamental breakthroughs. To illustrate the potential for intelligent, automated editing, and to explore problems that arise in practice, the author has constructed a working prototype that: (1) identifies and labels all notes in a multi-track recording, using a standard MIDI file as a score; (2) estimates the performed pitch and pitch error of each note; (3) applies pitch correction to all notes; (4) locates all simultaneous onsets in the score; (5) aligns recorded notes that are not synchronized within a degree of tolerance; (6) estimates the average recording level for each instrument; and (7) balances the instruments according to user-selected ratios (initially 1:1). The result of this fully automated process is a new set of "corrected" tracks that can be mixed as is, or edited further by hand. While these steps can be performed manually using modern audio editing software, the present work is the first to automate these steps, making use of a score (MIDI file) to specify the desired result.

The following sections describe each step of the system, named IAED for Intelligent Audio Editor. Section 5 describes some experience with the IAED, and this is followed by a discussion, summary, and conclusions.

2. RELATED WORK

The IAED combines and applies various forms of information derived from audio and scores. Earlier work by the author and others offers many ways to perform the necessary analysis. One of the purposes of the IAED is to gain experience combining these techniques in order to guide further development.

To align tracks to scores, both monophonic and polyphonic score-to-audio alignment techniques are useful. Note-based techniques used in monophonic score following [3] can be applied. There is also work on monophonic and polyphonic alignment [11], [15] that can be used directly. To refine the alignment, onset detection is used. Previous work by Rodet and Jaillet [13], Plumbley, Brossier and Bello [12], Dannenberg and Hu [5], and many others is relevant.

Time alignment and pitch shifting often rely on time stretching techniques that include SMS [16], phase vocoder [8], and PSOLA [14] techniques.

Finally, it should be noted that the practice of fine-grain editing by hand is well established using commercial editors such as ProTools (www.digidesign.com), and plug-ins for time stretching and pitch correction such as Antares Auto-Tune (www.antarestech.com). Non-commercial editors such as Sonic Visualizer [2] and CLAM Annotator [1] offer some assistance labeling pitch and onsets, but are not intended for audio editing.

3. ANALYSIS

The first stage of IAED processing obtains information about each note as well as summary information about the tracks. For simplicity, assume that every track is a recording of a pitched monophonic instrument. Other cases will be discussed later.

3.1. Labeling

The process of identifying notes in the recordings and associating them with notes in the score is called labeling. Labeling can be performed in two ways by the IAED. The first method is based on audio-to-score alignment techniques [6]. An alignment program converts both score and audio time frames into sequences of chroma vectors and aligns them using dynamic time warping [4]. This method works for polyphonic recordings as well as monophonic ones. Each track is separately aligned with its corresponding standard MIDI file track, resulting in a mapping between MIDI time and audio time. This mapping is smoothed and used to map each MIDI key-down message to an onset time in the audio recording.

The score alignment is not precise enough for editing, so additional refinement is performed. (Another motivation for the refinement is that we are most interested in notes that occur early or late. Their onset times may not be predicted very well if the score alignment is presented as a smooth tempo curve.) An onset detection function, based mainly on change in the local RMS energy, is used to estimate the onset probability of each 5ms frame in the recorded audio. For each note in the score, this probability is weighted by a Gaussian centered on the expected note. This is a roughly Bayesian model where the RMS feature represents an observation probability and the Gaussian represents the prior probability based on the score. The location where their product is maximized determines a refined estimate of the note onset. This is a partial implementation of a bootstrapping method [5] that proved to work extremely well for onset labeling.

The second method for note onset detection dispenses with the score alignment step and uses labeled beats instead. Beats can be labeled simply by tapping along with the music, or the music might already have a click track or a synchronized MIDI track, in which case the beat locations are already known. The score is warped using linear interpolation between beats. Then, the estimated onset times are refined as described in the previous paragraph.

To verify the labeling, clicks can be synthesized at the note onset times. Then by listening to the rhythm and timing of the clicks, the user can quickly detect any major errors. Figure 1 shows three input tracks and labels (below each audio track) generated automatically by IAED. The number shown by each onset time is the index of the note in the track. Seconds are shown at the very top.

3.2. Dynamics and F0 Estimation

Once note onsets are identified, the IAED labels each note with an indication of dynamic level and fundamental frequency (F0). Since dynamic level is estimated for the purposes of balance between tracks, it is assumed that the subjective impression is created by the loudest part of the overall note. For simplicity, RMS is used rather than a perceptually based measure. The dynamics estimation algorithm, which operates on each note, is as follows:

• Pre-compute an RMS value for each 5ms frame (a by-product of the labeling step described above).
• Find the location of the highest RMS value between the current note onset and the next note onset.
• Scan backward from the peak RMS point until the RMS falls to half the peak value (but stop at the note onset if it is reached). Call this time point A.
• Scan forward from the peak RMS point until the RMS falls to half, stopping if necessary at the next note onset time. Call this time point B.
• Compute the mean RMS over the interval [A, B].

Fundamental frequency is estimated using the YIN algorithm [7]. YIN generates an F0 estimate and a confidence level for each 5ms frame. On the assumption that intonation is most important when notes are loudest, we average the YIN estimates over the same interval [A, B] used to average RMS values, but ignoring frames where the YIN confidence level is low.

3.3. Coincident Notes

The IAED makes many timing decisions based on notes that should sound at the same time in different tracks. These coincident notes are detected by scanning the score to find note onsets that match within a small tolerance.
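The onset refinement in Section 3.1, which weights a per-frame onset detection value (observation) by a Gaussian prior centered on the score-predicted onset, can be sketched as follows. The function name and the prior width parameter are illustrative assumptions, not the IAED implementation:

```python
import numpy as np

def refine_onset(onset_prob, expected_frame, sigma_frames):
    """Refine a score-predicted note onset.

    onset_prob:     onset detection value per 5 ms frame (observation)
    expected_frame: onset frame predicted by score alignment (prior mean)
    sigma_frames:   Gaussian prior width in frames (assumed tuning parameter)
    Returns the frame index where observation * prior is maximized.
    """
    frames = np.arange(len(onset_prob))
    prior = np.exp(-0.5 * ((frames - expected_frame) / sigma_frames) ** 2)
    return int(np.argmax(onset_prob * prior))
```

Note the effect of the prior: a strong detected onset far from the score-predicted time is suppressed, so an equally strong onset near the prediction wins.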
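The beat-based labeling method of Section 3.1 warps score time to audio time by linear interpolation between labeled beats. Assuming beats are given as parallel lists of score positions and audio times (names are illustrative), this is a one-liner with NumPy:

```python
import numpy as np

def warp_to_beats(note_beats, beat_positions, beat_times):
    """Map score note positions (in beats) to estimated audio onset
    times by linear interpolation between labeled beat times.
    beat_positions must be increasing, as np.interp requires."""
    return np.interp(note_beats, beat_positions, beat_times)
```

The resulting estimates are then refined with the Gaussian-weighted onset detector described in the text.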
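The five-step dynamics estimate of Section 3.2 can be transcribed almost directly from the bullet list; the function name and return convention below are assumptions:

```python
import numpy as np

def note_dynamic(rms, onset, next_onset):
    """Mean RMS over the interval [A, B] around a note's loudest point.

    rms:        pre-computed RMS value per 5 ms frame
    onset,
    next_onset: frame indices bounding the note
    Returns (mean RMS over [A, B], A, B).
    """
    peak = onset + int(np.argmax(rms[onset:next_onset]))
    half = rms[peak] / 2.0
    a = peak
    while a > onset and rms[a - 1] >= half:        # scan backward to half peak
        a -= 1
    b = peak
    while b < next_onset - 1 and rms[b + 1] >= half:  # scan forward to half peak
        b += 1
    return float(np.mean(rms[a:b + 1])), a, b
```

Both scans stop at the note boundaries, as the paper specifies, so a note that never decays to half its peak is averaged over its full duration.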
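Averaging the per-frame YIN estimates over the same [A, B] interval while discarding low-confidence frames might look like the sketch below. The confidence threshold value is an assumption; the paper does not specify one:

```python
import numpy as np

def note_f0(f0, confidence, a, b, conf_threshold=0.8):
    """Average per-frame YIN F0 estimates over frames [a, b], ignoring
    frames whose confidence falls below an assumed threshold."""
    f0_seg = np.asarray(f0[a:b + 1], dtype=float)
    conf_seg = np.asarray(confidence[a:b + 1], dtype=float)
    mask = conf_seg >= conf_threshold
    if not mask.any():
        return None  # no reliable frames in this note
    return float(f0_seg[mask].mean())
```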
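Coincident-note detection (Section 3.3) amounts to merging the tracks' score onset times and grouping onsets that match within the tolerance. The grouping strategy below (anchor each group at its earliest onset) is one reasonable interpretation, not necessarily the IAED's:

```python
def coincident_onsets(tracks, tolerance=0.05):
    """Find groups of score onsets that should sound simultaneously.

    tracks:    list of sorted onset-time lists (seconds), one per track
    tolerance: maximum spread within a group (assumed value, seconds)
    Returns groups of (time, track_index) pairs spanning >= 2 tracks.
    """
    events = sorted((t, i) for i, track in enumerate(tracks) for t in track)
    if not events:
        return []
    groups, current = [], [events[0]]
    for t, i in events[1:]:
        if t - current[0][0] <= tolerance:   # within tolerance of group start
            current.append((t, i))
        else:
            if len({i for _, i in current}) >= 2:
                groups.append(current)
            current = [(t, i)]
    if len({i for _, i in current}) >= 2:
        groups.append(current)
    return groups
```

Each returned group is a candidate for the synchronization step: recorded onsets in the group that differ by more than the editing tolerance are candidates for time alignment.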