
Description

This document describes the "Tape version" of live-electronics used for The Pyre. The Tape version plays all sound material of the piece from one laptop computer, in the absence of musicians Peter Rehberg and Stephen O'Malley, and can be performed by the sound engineer of the company. The task of the sound engineer regarding the live-electronics is to trigger and mix the 9 different cues (or scenes) on a MIDI desk during the show. Cues are visual, and should be defined together with the musicians before rehearsing.

Live-electronics tools and audio spatialization design of The Pyre were created by Manuel Poletti - manuel.poletti@.fr

------

Requirements for the live-electronics

- 1 MacBook Pro 2.3 GHz Intel Core i7 or higher, with 8 GB RAM and a fast drive (SSD recommended), running Mac OS 10.6.8 or later
- 1 fully authorized copy of Ableton Live 9 Suite or later, with the Max For Live extension (Max 6.1 or later) properly installed
- 1 RME Fireface 800 soundboard (hosts and outputs 16 separate audio channels through 2 ADAT ports, has at least one MIDI input port)
- 1 Behringer BCF2000 MIDI controller or equivalent
- 1 mixing console that can host 16 ADAT input channels and 16 separate analog output channels + subwoofers

Please see the technical rider document for the audio setup.

Important: the sound engineer should have solid skills in using Ableton Live and MIDI controllers, and should be familiar with the concept of sound spatialization, as well as with the use of Max and Max For Live in real-time performance contexts.

------

Software installation & settings

- use a fully authorized version of Live 9 Suite (which contains the Max For Live extension). The set may run in a demo version of Live 9 Suite, but as this hasn't been tested at all, and as no changes to the Live set would be saved in that case, we highly recommend using an authorized version of Live 9 Suite.

- from the archive, copy the IrcamVerbStereo51.vst file (which can be found in the "To VST Plugins Folder" folder) into your machine's System/Library/Audio/Plug-Ins/VST folder (machine-admin password may be required). This is a multichannel reverberator VST plugin from IRCAM which is used in the set and needs to be properly installed at that precise place on the hard drive.

- launch Ableton Live and open the Preferences pane. In the Audio section, select the RME Fireface 800 as input and output device. Buffer size should be set to 512 samples, Sampling Rate to 44.1 kHz. Click the Input Config and Output Config buttons and enable ALL inputs and outputs in both windows. We've observed occasional malfunctioning of the Fireface 800 when not doing so.

- in the File/Folder pane, make sure that Live uses the Applications/Max 6.1 folder (for Max For Live devices)

- in the MIDI/Sync pane, make sure the BCF2000 is set as Input Device (Remote mode), but NOT as Control Surface

- in the CPU pane, make sure that Live runs in Multicore/Multiprocessor mode

- close the Preferences pane and launch the "The Pyre-Tape Version 2014.als" Live set found in the "The Pyre-Tape Version 2014 Project" folder.

------

Description of the Live set

What you see is a set of grouped tracks (on the left), and 12 auxiliary busses (on the right). The master section is not used.

- the REV grouped track contains 5 IRCAM reverberator VST plugins
- the white and blue grouped tracks contain audio tracks that host audio clips and real-time spatialization devices
- the DIRECT grouped track contains the audio recordings of the musicians' laptop outputs
- the first 7 Return auxiliary tracks are the master outs of the whole Live set:
  - Fr for front speakers 1->6
  - Rm for room (audience) speakers 7->10
  - St for stage speakers 11->14
- the last 5 Return auxiliary tracks feed the reverb plugins found in the REV grouped track

Originally, all sound material was played by the two musicians on their respective laptops, each running its own Ableton Live set. For each performer, some sound material was played directly through the mixing console/P.A., and some was sent through dedicated audio busses on their soundboards to a third laptop, which performed audio spatialization effects. For this Tape version, all "direct" and "spatialization" sound material from the performers was recorded and placed into tracks in the third (spatialization) machine's Live set. The entire piece can thus be played by the sound engineer using a single laptop/soundboard unit.

------

Audio routing

If you're familiar with Live, you may have noticed that none of the audio tracks in the set - except one - uses any direct audio output assignment. Rather, we're using the Sends level controls of each track to adjust the distribution of the audio through several output busses (Returns). Each of the Return busses is routed to the soundboard's outputs, which feed the corresponding pairs of loudspeakers. In the present case, Front busses feed channels 13-18 of the RME, Room busses feed channels 19-22 and Stage busses feed channels 23-26. These numbers represent the channel numbers of the ADAT ports of the RME.
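As a sketch, the bus-to-channel layout just described can be expressed as a lookup table. The group names and the helper function below are illustrative assumptions, not names taken from the Live set:

```python
# Bus groups -> ADAT channel ranges on the RME Fireface 800, as described
# above. Names and helper are illustrative, not actual Live set identifiers.

BUS_TO_ADAT = {
    "Front": (13, 18),  # feeds front speakers 1-6
    "Room":  (19, 22),  # feeds room (audience) speakers 7-10
    "Stage": (23, 26),  # feeds stage speakers 11-14
}

def adat_channels(bus_group):
    """List the ADAT channel numbers used by one bus group."""
    first, last = BUS_TO_ADAT[bus_group]
    return list(range(first, last + 1))
```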

If you ungroup the DIRECT track, for instance, you can see how each track is routed to the speakers using auxiliary Send volumes. The only track in the set that is directly connected to the RME's outputs (analog 1 and 2) is the "Fender" track, which feeds both Fender Twin amps. The reason for this is simply that Live has only 12 Sends/Returns busses (by design), and that we're already using them all. Thus the analog outputs 1 and 2 of the RME soundboard need to be connected to the mixing console, in order to feed the two required Fender Twin amps that sit on the stage. You could also use the last two ADAT channels still available for the amps, since the Live set only uses 14 ADAT channels, or re-route analog 1-2 to ADAT 15-16 in the soundboard's matrix editor.

Here is a small reference describing which speakers each output bus corresponds to:

- Fr busses -> front speakers 1->6 (ADAT channels 13-18)
- Rm busses -> room (audience) speakers 7->10 (ADAT channels 19-22)
- St busses -> stage speakers 11->14 (ADAT channels 23-26)

Additionally, from each track we also have the possibility to feed one or more of the reverberators stored in the REV grouped track, using 5 dedicated Send/Return busses. The Return tracks of these 5 Send busses act as a master FX input for each reverb unit, as the tracks that host the reverberators are set to receive the audio of those 5 busses at their input. This might sound a little tricky, but since we also want to spatialize our reverbs - the reverbs being multichannel - they need to reside in generic audio tracks rather than sitting in the Return tracks themselves. In short, the 5 reverb units take their input from the last 5 Send/Return tracks and send their output through the first 7 Send/Return tracks.

Each unit has a quadraphonic output, split into two regular Live tracks (R1A, R1B, etc.). For each unit, the VST plugin in track A broadcasts its outputs 3 and 4 to track B (this is a VST feature). Each of those two tracks can then be routed independently to any speaker pair, with different levels, using the different Sends available. Any other audio track in the Live set may use one reverb unit (or several) by adjusting the corresponding Send level to that reverb unit.

------

Cues, audio scenes & MIDI triggers

During the show, visual cues prompt the sound engineer to launch sound scenes. There are 9 cues to trigger. In the Live set, the 9 corresponding sound scenes are grouped into regular Live scenes. Instead of simply launching the scenes from within Live using the Scenes pane, all triggers are performed through the MIDI controller - one separate push-button MIDI controller per scene - in order to allow crossfading the audio between successive scenes.

If you turn Live into MIDIMap Mode (press cmd-m), you can see that each group has one MIDI controller hooked to one group of audio clips:

For instance, MIDI controller 21 triggers scene 1 (called "Escalade"), MIDI controller 22 triggers scene 2 (called "Faint"), and so on. So the MIDI controller device must be configured so that 9 separate push buttons send the correct MIDI control number and value to trigger the corresponding sound scene.

Scene-trigger MIDI mapping:

- MIDI controller 21 on channel 1 launches scene 1
- MIDI controller 22 on channel 1 launches scene 2
- MIDI controller 23 on channel 1 launches scene 3
- MIDI controller 24 on channel 1 launches scene 4
- MIDI controller 25 on channel 1 launches scene 5
- MIDI controller 26 on channel 1 launches scene 6
- MIDI controller 27 on channel 1 launches scene 7
- MIDI controller 30 on channel 1 launches scene 8
- MIDI controller 28 on channel 1 launches scene 9
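As a sketch, the trigger table above can be expressed as a lookup that builds raw Control Change messages. The function name and byte-level format are illustrative assumptions (the BCF2000 would normally be programmed to send these messages directly):

```python
# Scene number -> MIDI CC number, all on MIDI channel 1, as listed above.
# Note the irregularity: scene 8 uses controller 30 and scene 9 controller 28.
SCENE_TRIGGERS = {1: 21, 2: 22, 3: 23, 4: 24, 5: 25, 6: 26, 7: 27, 8: 30, 9: 28}

def trigger_bytes(scene, value=127):
    """Raw 3-byte Control Change message for one scene trigger.
    0xB0 is the Control Change status byte for MIDI channel 1."""
    return bytes([0xB0, SCENE_TRIGGERS[scene], value])
```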

Notice that the sound scenes are stored in grouped tracks - each playing one sole scene - while the DIRECT grouped track plays audio material in all scenes. The reason for this is that the DIRECT grouped track contains the recordings of the audio material which was originally played by the musicians and sent directly from their laptops to the P.A. The DIRECT group contains the right audio routing and sound distribution to the loudspeakers - since what was recorded was actually the outputs of both musicians' laptops. These routings shouldn't be modified. All other grouped tracks contain audio material that needs a dynamic spatialization process, which is performed in real time in the Live set.

------

MIDI mixing

While cues/scenes are triggered using some dedicated MIDI controllers, each scene needs to be faded in and out, in order to:

- smoothly interpolate between two successive scenes
- adjust the level of each scene independently
- enable and disable some per-track spatialization devices that eat CPU resources

Therefore one MIDI volume fader serves two tasks simultaneously: muting/un-muting tracks and devices, and adjusting fades and levels of several tracks together. This requires a special MIDI mapping, and some special Max For Live tools in each track. For instance, in scene 1 "Escalade", we have two audio tracks:

By default these tracks are muted, and have independent audio levels (relative mixing, which should never be changed).

In each track we have some specific Max For Live devices:

The Max Api SendsRand and Max Api Ctrl1LFO devices control the sound spatialization (referred to as "Spat" devices) - we'll talk about them later in this document. The "LeSuperMute" device automatically mutes the "Spat" devices (and thus cuts their CPU consumption) whenever the track is muted. The "MyGain" device simply controls the fade in/out and level of the track's audio.

Now let's have a look at the MIDI mapping of these tracks:

As you can see, both the track-mute buttons and the gain sliders are mapped to the same MIDI controller 7 (volume) on MIDI channel 1: the mapping is made so that when the MIDI fader rises above a threshold (MIDI value > 5), the tracks are un-muted and the fade-in starts, until the fader reaches the desired level.

Inversely, as soon as the MIDI value of the volume fader falls under the threshold (MIDI value < 5), the fade-out completes and the tracks are muted (as well as their devices, which then no longer consume any CPU). By default, all "Spat" (blue and white) tracks are muted in the set. DIRECT tracks are always on and don't contain any Spat engine.
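The threshold behaviour just described can be sketched as follows. The exact fade curve inside the Max devices isn't documented here, so the linear gain law is an assumption:

```python
MUTE_THRESHOLD = 5  # MIDI values run 0-127; below this the tracks mute

def track_state(cc7_value):
    """Return (muted, gain) for one CC7 fader value.
    Below the threshold the track mutes and its Spat devices stop
    consuming CPU; above it, the track un-mutes and fades toward the
    fader level (a simple linear law is assumed here)."""
    if cc7_value < MUTE_THRESHOLD:
        return True, 0.0
    return False, cc7_value / 127.0
```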

Note: the MIDI mapping on the Mute button of the Group track that contains the audio tracks is only present for display purposes - so when all tracks are collapsed, you can still see which internal tracks are muted or un-muted.

Volume MIDI mapping:

- MIDI controller 7 on channel 1 controls scene 1
- MIDI controller 7 on channel 2 controls scene 2
- MIDI controller 7 on channel 3 controls scene 3
- MIDI controller 7 on channel 4 controls scene 4
- MIDI controller 7 on channel 5 controls scene 5
- MIDI controller 7 on channel 6 controls scenes 6 and 8 (which are similar)
- MIDI controller 7 on channel 7 controls scene 7
- MIDI controller 7 on channel 8 controls scene 9
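Again as an illustrative lookup (not part of the Live set itself), the channel-to-scene assignment above is:

```python
# MIDI channel carrying CC7 -> the scene(s) whose volume it controls.
CC7_CHANNEL_TO_SCENES = {
    1: [1], 2: [2], 3: [3], 4: [4], 5: [5],
    6: [6, 8],  # scenes 6 and 8 share one fader (they are similar)
    7: [7],
    8: [9],
}

def scenes_for(channel):
    """Scenes faded by the CC7 fader on the given MIDI channel."""
    return CC7_CHANNEL_TO_SCENES[channel]
```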

There's one additional MIDI mixing controller used in scene 7 "Folk". At a certain point in the scene, where sound is played on the stage speakers using a Spat device, you need to increase the level of the first two Sends of the track using only one physical knob, in order to play the sound through the front speakers:

Therefore we're using one additional MIDI controller - control 10 on MIDI channel 7 - which can be mapped to a physical knob on the MIDI controller.

------

MIDI mapping summary

This represents the original mapping that is used on the BCF2000 in order to play the piece:

You can display the MIDI mappings in Live - press cmd-m and open the browser. If you double-click on a mapping line, Live will display the track that contains the mapping:

------

Switching live/tape versions

There's one more feature, originally implemented to handle different situations:

#1 both musicians are present at the performance
#2 only one or the other musician is present
#3 none of the musicians is present

The original piece is played live by the two musicians. Some of their audio is diffused directly from their laptop/soundboard to the P.A., and some is sent to the Spat machine. For each musician, we've reserved 4 stereo busses, sent through an ADAT connection between their machine and the Spat machine, each musician using one separate ADAT port. So our RME receives 2 × 8 ADAT inputs - or 4 stereo busses from each musician.

Musician 1 - referred to as "S" (Stephen) in the Live set - sends his audio into the RME ADAT inputs 13-20, and Musician 2 - referred to as "P" (Peter) - sends his audio into the RME ADAT inputs 21-28. Tracks are stereo and mention the bus number in their name: S1, S2, S3 and S4 (red) for Musician 1, and P1, P2, P3 and P4 (green) for Musician 2:

ADAT audio inputs are still present in this Tape version, which leaves open the possibility of switching from a "tape" situation to a live situation, with one or both musicians present.

In order to switch easily between live and tape audio, we've reserved 2 MIDI controllers that toggle the Monitor parameter of each track, in a global fashion. Live input would require that Monitor is set to "In":

In that case, the audio clip in the track is bypassed, and Live listens to and monitors the input of the track.

Tape source requires that Monitor is set to "Auto", in which case the clip is used as the audio source, and the input of the track is bypassed while the clip is playing.

The two MIDI controllers on the leftmost part of the BCF let you switch between the two situations, for each musician, and for all tracks in the Live set:

Here is the MIDI mapping of the Monitor parameter: controller 10 on channel 1 for Musician 1, controller 10 on channel 2 for Musician 2:

These controls should be set at the beginning of the show - by default, the Tape version is set (Monitor set to Auto).
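A minimal sketch of this global switch follows. The track names and the data structure are assumptions for illustration; in the real set the toggle is a MIDI mapping on each track's Monitor parameter:

```python
def set_monitor_mode(tracks, musician, live):
    """Set Monitor to 'In' (live input) or 'Auto' (tape clip) for every
    track belonging to one musician, identified by the 'S'/'P' prefix."""
    mode = "In" if live else "Auto"
    for name in tracks:
        if name.startswith(musician):
            tracks[name] = mode
    return tracks

# Default Tape situation: every track monitors 'Auto'
tracks = {"S1": "Auto", "S2": "Auto", "P1": "Auto", "P2": "Auto"}
set_monitor_mode(tracks, "S", live=True)  # Stephen is present and plays live
```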

Note: when one or both musicians are present, the corresponding DIRECT tracks should be muted accordingly.

------

Spatialization tools and techniques

We've already seen that all the output routing and sound diffusion can be easily controlled using the different Sends in each track. In order to create movements of the sound through the different speakers, we're using two special devices written in Max that allow the remote control of one track's Sends and Panning:

The Max Api SendsRand device allows a random automation of one track's Sends levels, providing volume fades between different pairs of speakers.

The Max Api Ctrl1LFO device allows a random automation of the Panning parameter of a track.

When both devices are combined, the result is a sensation of sound displacement between the different pairs of loudspeakers, and between the two speakers of one pair. These devices are widely used in the Live set. Together with the multichannel IRCAM reverb plugin, they're the heart of the techniques provided in this set for performing sound spatialization within Live.
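As a toy sketch of the two ideas combined - an LFO on Panning plus randomized Send levels - here is a minimal model. The rates and ranges are invented; the actual Max devices are not documented here:

```python
import math
import random

def lfo_pan(t_seconds, rate_hz=0.25):
    """Sine LFO driving the Panning parameter, -1.0 (left) to 1.0 (right)."""
    return math.sin(2 * math.pi * rate_hz * t_seconds)

def random_send_levels(n_sends=7, rng=random):
    """One random target level per Send; successive calls give the targets
    a SendsRand-style automation would fade between."""
    return [rng.random() for _ in range(n_sends)]
```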

------

Manuel Poletti - April 2014 - [email protected]