
Orchestra in a Box: A System for Real-Time Musical Accompaniment

Christopher Raphael
Department of Mathematics and Statistics, University of Massachusetts at Amherst, Amherst, MA 01003-4515, [email protected]

* This work was supported by NSF grants IIS-998789 and IIS-0113496.

Abstract

We describe a computer system that plays a responsive and sensitive accompaniment to a live musician in a piece of non-improvised music. The system is composed of three components: "Listen," "Anticipate," and "Synthesize." Listen analyzes the soloist's acoustic signal and estimates note onset times using a hidden Markov model. Synthesize plays a prerecorded audio file back at variable rate using a phase vocoder. Anticipate creates a Bayesian network that mediates between Listen and Synthesize. The system has a learning phase, analogous to a series of rehearsals, in which model parameters for the network are estimated from training data. In performance, the system synthesizes the musical score, the training data, and the on-line analysis of the soloist's acoustic signal using a principled decision-making engine based on the Bayesian network. A live demonstration will be given using the aria "Mi Chiamano Mimì" from Puccini's opera La Bohème.

1 Introduction

Musical accompaniment systems seek to emulate the task that a human musical accompanist performs: supplying a missing musical part, generated in real time, in response to the sound input from a live musician. As with the human musician, the accompaniment system should be flexible, responsive, able to learn from examples, and should bring a sense of musicality to the task.

Most, if not all, efforts in this area create the audio accompaniment through a sparse sequence of "commands" that control a computer sound synthesis engine or dedicated audio hardware [Dannenberg, 1984], [Dannenberg, 1988], [Vercoe, 1985], [Baird, 1993]. For instance, MIDI (musical instrument digital interface), the most common control protocol, generally describes each note in terms of a start time, an end time, and a "velocity." Some instruments, such as plucked string instruments, the piano, and other percussion instruments, can be reasonably reproduced through such cartoon-like performance descriptions. Most instruments, however, continually vary several attributes during the evolution of each note or phrase, so their MIDI counterparts sound reductive and unconvincing.

We describe here a new direction in our work on musical accompaniment systems: we synthesize audio output by playing back an audio recording at variable rate. In doing so, the system captures a much broader range of tone color and interpretive nuance, while posing a significantly more demanding computational challenge.

Our system is composed of three components we call "Listen," "Anticipate," and "Synthesize." Listen tracks the soloist's progress through the musical score by analyzing the soloist's digitized acoustic signal. Essentially, Listen provides a running commentary on this signal, identifying times at which solo note onsets occur, and delivering these times with variable latency. The combination of accuracy, computational efficiency, and automatic trainability provided by the hidden Markov model (HMM) framework makes HMMs well-suited to the demands of Listen. A more detailed description of the HMM approach to this problem is given in [Raphael, 1999].

The actual audio synthesis is accomplished by our Synthesize module through the classic phase vocoding technique. Essentially, the phase vocoder is an algorithm enabling variable-rate playback of an audio file without introducing pitch distortions. The Synthesize module is driven by a sequence of local synchronization goals which guide the synthesis like a trail of bread crumbs.

The sequence of local goals is the product of the Anticipate module, which mediates between Listen and Synthesize. The heart of Anticipate is a Bayesian network consisting of hundreds of Gaussian random variables, including both observable quantities, such as note onset times, and unobservable quantities, such as local tempo. The network can be trained during a rehearsal phase to model both the soloist's and accompanist's interpretations of a specific piece of music. This model then constitutes the backbone of a principled real-time decision-making engine used in live performance for scheduling musical events. A more detailed treatment of various approaches to this problem is given in [Raphael, 2001] and [Raphael, 2002].
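To make the dataflow among the three modules concrete, the following skeleton is a minimal sketch of the architecture just described; all class and method names are our own illustrative assumptions, not the paper's actual API.

```python
# Sketch of the Listen -> Anticipate -> Synthesize dataflow described above.
# All names and signatures are illustrative assumptions, not the paper's API.

class Listen:
    """Tracks the soloist through the score; emits confirmed note onsets."""
    def detected_onsets(self, frame):
        return []          # stub: (note_index, onset_time) pairs, with latency

class Anticipate:
    """Bayesian network over onset times and tempi; schedules accompaniment."""
    def update(self, note_index, onset_time):
        pass               # stub: condition the network on an observed onset
    def next_goal(self):
        return (0, 0.0)    # stub: next (accompaniment_note, target_time) goal

class Synthesize:
    """Phase-vocoder playback of the prerecorded accompaniment."""
    def retarget(self, accomp_note, target_time):
        pass               # stub: adjust playback rate toward the new goal

def performance_loop(listen, anticipate, synthesize, frames):
    # One iteration per audio frame (~32 ms at 8 kHz / 256 samples; see Sec. 2).
    for frame in frames:
        for note, time in listen.detected_onsets(frame):
            anticipate.update(note, time)                  # new solo evidence
            synthesize.retarget(*anticipate.next_goal())   # reschedule playback
```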
2 Listen

To follow a soloist, one must first hear the soloist; "Listen" is the component of our system that accomplishes this task.

We begin by dividing our acoustic signal into a collection of "snapshots" or "frames." In our current system the acoustic signal is sampled at 8 kHz with a frame size of 256 samples, leading to about 31 frames per second.

In analyzing the acoustic signal, we seek to label each data frame with an appropriate score position. We begin by describing the label process. The "score," containing both solo and accompaniment parts, is known to us. Using this information, for each note in the solo part we build a small Markov model with states associated with various portions of the note, such as "attack" and "sustain," as in Fig. 1. We use various graph topologies for different kinds of notes, such as short notes, long notes, rests, trills, and rearticulations. However, all of our models have tunable parameters that control the length distribution (in frames) of the note.

[Figure 1: A Markov model for a note allowing an optional silence at the end. The figure shows an "attack" state followed by a left-to-right chain of n "sustain" states, each with a self-loop, and optional "rest" states; the arcs are labeled with transition probabilities p, q, pc, and p(1-c).]

Fig. 1 shows a model for a note that is followed by an optional silence, as would be encountered if the note is played staccato or if the player takes a breath or makes an expressive pause. The self-loops in the graph allow us to model a variety of note length distributions using a small collection of states. For instance, one can show that, for the model of Fig. 1, the number of frames following the attack state has a Negative Binomial distribution, where the parameters n and q are as described in the figure. We create a model for each solo note in the score, such as the one of Fig. 1, and chain the models together in left-to-right fashion to produce the hidden label, or state, process for our HMM. We write x_1, x_2, ... for the state process, where x_t is the state visited in the t-th frame; x_1, x_2, ... is a Markov chain.

The state process is, of course, not observable. Rather, we observe the acoustic frame data. For each frame t we compute a feature vector y_t describing the local content of the acoustic signal in that frame. Most of the components of y_t are measurements derived from the finite Fourier transform of the frame data, useful for distinguishing various pitch hypotheses. Other components measure signal power, useful for distinguishing rests, and local activity, useful for identifying rearticulations and attacks.

The data model p(y_t | x_t) can be learned in an unsupervised manner through the Baum-Welch, or Forward-Backward, algorithm. This allows our system to adapt automatically to changes in solo instrument, microphone placement, room acoustics, ambient noise, and choice of the accompaniment instrument. In addition, this automatic trainability has proven indispensable to the process of feature selection. A pair of simplifying assumptions makes the learning process feasible. First, states are "tied," so that p(y | x) depends only on several attributes of the state x, such as the associated pitch and the "flavor" of the state (attack, rearticulation, sustain, etc.). Second, the feature vector is divided into several groups of features, y = (y^1, ..., y^K), assumed to be conditionally independent given the state:

    p(y | x) = \prod_{k=1}^{K} p(y^k | x).

As important as the automatic trainability is the estimation accuracy that our HMM approach yields. Musical signals are often locally ambiguous in time but become easier to parse with the benefit of longer-term hindsight. The HMM approach handles this local ambiguity naturally through its probabilistic formulation, as follows. While we are waiting to detect the n-th solo note, we collect data (incrementing t) until

    P(x_t \ge start_n | y_1, ..., y_t) > \tau

for some threshold \tau, where start_n is the first state of the n-th solo note model (e.g., the "attack" state of Fig. 1). If this condition first occurs at frame t*, then we estimate the onset time of the n-th solo note by

    \hat{t}_n = \arg\max_{t \le t*} P(note_n starts at t | y_1, ..., y_{t*}).

This latter computation is accomplished with the Forward-Backward algorithm. In this way we delay the detection of a note onset until we are reasonably sure that it is, in fact, past, greatly reducing the number of misfirings of Listen.

Finally, the HMM approach brings fast computation to our application. Dynamic programming algorithms provide the computational efficiency necessary to perform the calculations we have outlined at a rate consistent with the real-time demands of our application.
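The detection rule above can be made concrete with a short sketch. The following is our own minimal reconstruction, not the paper's code: it assumes states are indexed left-to-right, and takes a precomputed transition matrix A, an initial distribution pi, and per-frame likelihoods p(y_t | x_t) supplied by a data model such as the one just described.

```python
import numpy as np

def detect_onset(pi, A, frame_likelihoods, start_n, tau=0.95):
    """Online forward recursion for the detection rule above (a sketch).

    pi                : (S,) state distribution before the first frame
    A                 : (S, S) transition matrix, A[i, j] = P(x_t=j | x_{t-1}=i)
    frame_likelihoods : iterable of (S,) arrays, p(y_t | x_t = s) per frame
    start_n           : index of the n-th note's first state; states are
                        ordered left-to-right, so "reached" means x_t >= start_n
    tau               : detection threshold (an assumed illustrative value)

    Returns the firing frame t* (or None) and the stored filtered posteriors.
    """
    alpha = pi
    messages = []
    for t, lik in enumerate(frame_likelihoods):
        alpha = lik * (alpha @ A)        # predict one frame, weight by evidence
        alpha = alpha / alpha.sum()      # normalize: alpha[s] = P(x_t=s | y_1..t)
        messages.append(alpha)
        if alpha[start_n:].sum() > tau:  # P(x_t >= start_n | y_1..t) > tau
            return t, messages
    return None, messages
```

Once the rule fires at frame t*, a backward pass over frames 1..t* can be combined with the stored forward messages to obtain the smoothed probabilities P(note_n starts at t | y_1, ..., y_{t*}), whose maximizer gives the onset estimate: exactly the Forward-Backward computation the text describes.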
3 Synthesize

The Synthesize module takes as input both an audio recording of the accompaniment and an "index" into the recording, consisting of the times at which the various accompaniment notes are played. The index is calculated through an offline variant of Listen applied to the accompaniment audio data. The polyphonic and multitimbral nature of the audio data makes for a difficult estimation problem; to ensure that our index is accurate, we hand-corrected the results after the fact in the experiments we will present.

The role of Synthesize is to play this recording back at variable rate (and without pitch change) so that the accompaniment follows the soloist.
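To make the variable-rate playback concrete, here is a bare-bones time-stretching phase vocoder in the classic textbook form the text refers to. It is a minimal sketch under standard assumptions (Hann windows, a fixed stretch rate, no output gain normalization), not the system's actual implementation.

```python
import numpy as np

def phase_vocoder(x, rate, n_fft=1024, hop=256):
    """Time-stretch signal x by `rate` (>1 = faster) without changing pitch.

    Classic phase-vocoder sketch: analyze with an STFT, step through the
    analysis frames at the desired rate, and accumulate per-bin phase
    increments so that partials remain coherent at the fixed synthesis hop.
    """
    win = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    # Analysis STFT: one rFFT per windowed frame.
    stft = np.array([np.fft.rfft(win * x[i * hop : i * hop + n_fft])
                     for i in range(n_frames)])
    # Expected phase advance per hop for each frequency bin.
    omega = 2 * np.pi * np.arange(n_fft // 2 + 1) * hop / n_fft

    steps = np.arange(0, n_frames - 1, rate)   # fractional analysis positions
    phase = np.angle(stft[0])
    out = np.zeros(int(len(steps)) * hop + n_fft)
    for k, s in enumerate(steps):
        i = int(s)
        frac = s - i
        # Interpolate magnitude between neighboring analysis frames.
        mag = (1 - frac) * np.abs(stft[i]) + frac * np.abs(stft[i + 1])
        # Heterodyned phase increment, wrapped to [-pi, pi).
        dphi = np.angle(stft[i + 1]) - np.angle(stft[i]) - omega
        dphi -= 2 * np.pi * np.round(dphi / (2 * np.pi))
        # Overlap-add the resynthesized frame at the fixed synthesis hop.
        frame = win * np.fft.irfft(mag * np.exp(1j * phase))
        out[k * hop : k * hop + n_fft] += frame
        phase += omega + dphi                  # accumulate true bin frequency
    return out
```

In the running system the stretch rate is of course not constant: Synthesize continually adjusts it so that the accompaniment notes in the index arrive at the local synchronization goals supplied by Anticipate.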