Orchestral Maneuvers in Surround
John Adams' composition commemorating the first anniversary of the September 11 attacks was recently performed at the Sydney Opera House in surround. Robert Austral discusses the details with sound designer Mark Grey.

When American composer John Adams was commissioned to create a work commemorating the first anniversary of the September 11, 2001, attacks on the World Trade Center and Pentagon, he knew from the outset that he wanted a setting with music, choir, recorded voice and streetscape sounds that felt 'otherworldly'. His goal was to convey the presence of many souls and their collected energy in order to create what he termed a 'memory space', where the listener could reflect on grieving and loss.

The resulting composition for orchestra and choir, On the Transmigration of Souls, received its debut performances at the Lincoln Center in New York under the baton of famed conductor Lorin Maazel. Since then, the piece has won a Pulitzer Prize and been performed in many of the world's most prestigious concert halls, such as the Royal Albert Hall in London. In January 2004, it was presented as part of the Sydney Festival program in the Concert Hall of the Sydney Opera House.

In conceiving how the composition's disparate elements would be woven together into a cohesive aural vision, Adams drew on the technical and artistic assistance of San Francisco-based sound designer and composer Mark Grey, a San Jose University graduate with degrees in both composition and electro-acoustics. AudioTechnology spoke with Mark Grey in Sydney about the artistic collaboration and technical process that unfolded to present this extraordinary work.

At the Sydney Opera House, Transmigration was presented with a 5.1 surround sound system, a new and unusual format for orchestral performance. Grey opened the interview with the reasons for that decision: "We were attempting to take a work for conventional orchestra and chorus and modify it in a way that has never been done before on a major stage. One of the most profound spaces for people to be with themselves and their thoughts is a cathedral, and I figured the best way to go about creating this kind of space for the music was to use surround speakers. John (Adams) already had the idea of using pre-recorded sounds with the orchestra, so what better way to approach it than with a surround environment?"

Robert Austral: How do you begin to frame the work? Is it a collaboration?

Mark Grey: It's collaborative, though John has the musical concept and then invites me to work on the project. He is very knowledgeable about electronic music – he started writing with technology when he was younger, before he began working with the San Francisco Symphony and moving into the large orchestral format. He knows what filters are and what envelope generators are. He understands synthesis technology, synthesisers and samplers and those architectural tools very well, so it's great – we can talk both with a musical language, because I have a compositional background, and a technical language. When it gets to a certain level with the technology, though, he doesn't want to go there, and that's where I come in.

RA: You are probably the first person to go into the Opera House hall and focus two line array systems at the stage. Can you take us through the sound system design for this piece and how it was conceived?

MG: The piece was always conceived to be performed with an LCR system at the stage, with mid-auditorium speakers and rear surrounds in a total of seven discrete zones. What changes is the physical nature of the spaces and what equipment may be available to me, but I always specify Meyer Sound products for this piece. It's a little daunting always walking into a space and having to explain why it should be set up this way, but I really enjoy the challenge of doing something different. Here, the system consists of main left and right arrays, each consisting of 10 M2D (compact curvilinear array loudspeakers) with an M3D-Sub directional subwoofer flown at the top of the array. The centre channel is eight M1D (ultra-compact curvilinear array loudspeakers), the mid-auditorium loudspeakers are two UPA-1P (compact wide coverage) per side, running in stereo, and, for the rears, we have stereo arrays of eight M1Ds each. The front fill is eight UPM-1P (ultra-compact wide coverage loudspeakers), also in a stereo configuration.
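(For readers who like to see the rig on paper, a minimal sketch of the zoning follows, written in Python purely as illustration. The box models and counts are taken from Grey's description above; the zone names, the reading of which groupings make up the "seven discrete zones", and the small helper function are editorial assumptions, not his actual show files.)

# Hypothetical sketch only: the loudspeaker layout described in the interview,
# captured as plain data. Box models and counts follow Grey's description; the
# grouping (one plausible reading of the seven zones, plus front fills) and the
# helper below are illustrative.

SPEAKER_ZONES = {
    "main_left":  [("M2D", 10), ("M3D-Sub", 1)],  # flown array, sub at the top
    "main_right": [("M2D", 10), ("M3D-Sub", 1)],
    "centre":     [("M1D", 8)],
    "mid_left":   [("UPA-1P", 2)],                # mid-auditorium, run in stereo
    "mid_right":  [("UPA-1P", 2)],
    "rear_left":  [("M1D", 8)],
    "rear_right": [("M1D", 8)],
    "front_fill": [("UPM-1P", 8)],                # stereo configuration
}

def box_count(zones):
    """Total number of loudspeaker boxes across every zone."""
    return sum(n for boxes in zones.values() for _, n in boxes)

if __name__ == "__main__":
    for name, boxes in SPEAKER_ZONES.items():
        print(f"{name:11s} {boxes}")
    print("total boxes:", box_count(SPEAKER_ZONES))

(Under this grouping the helper totals 58 boxes – a sense of the scale of the hang before any processing is considered.)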

RA: Can you compare using what is basically a 5.1 line array set up in a concert hall to using a conventional system?

MG: In the performance here, having the line array is so great, as it has a tight focused sound that can punch through the murkiness all large halls have, and reshape the room by tuning the system to not activate the nodal points. The M2D has a clarity that is exponential, and the similarity of sound between all the different line array elements helps me integrate the intensity coming from the stage and create an image of clarity that can be pushed quite hard, while still feeling acoustic.

RA: The transparency of the amplified sound seems central to the success of this design.

MG: Absolutely. Given the profile of this piece, I can afford to specify exactly what I need. When we're travelling we don't always get that luxury, but, to help with that, we travel with four UPM-1Ps so I can always maintain that integrity and transparency from the stage. Here, it's been fantastic, as the local crew have a good knowledge of all the products in their inventory, and the system has been set up and zoned in such a logical way that it just makes my job much easier.

RA: How did you learn and develop the techniques you are using?

MG: Jonathan Deans was sound designer for all of John Adams' early work, but in 1995, John had a new opera and Jonathan was too busy (to work on the project), so he had Francois Bergeron and myself work on it, and I learned so much from (Deans and Bergeron) about how to localise sound using loudspeaker placement and equalisation to create the illusion it is unamplified. Broadway musicals are, to my taste, always over-amplified, and the vocal quality is usually brittle. What I attempt to do with opera is use the sound system to clarify the transient qualities of the soloist and chorus voices, and orchestra instruments, if spot mics are used in the pit. As the singers are so good to begin with, you just let the room do the majority of the work and clean up diction with lavaliers, or sometimes Crown PCC area mics. With Jonathan and Francois, they both put a high priority on achieving natural vocal sounds and I learnt much about the importance of maintaining the vocal integrity. I apply this approach on the opera stage as well as the orchestra stage by using a localised sound source like front fills. UPMs and M1Ds are fantastic for this as they are capable of reproducing all kinds of information from light and airy sounds to darker tones.

RA: When I first heard that the piece would be mixed in surround, I envisaged you creating an ambience with a fixed orchestral balance that was pretty much left static for the duration, but in fact your mix is very dynamic and you are using multiple effects and panning.

MG: The idea is to thrust the energy of the orchestra from the stage, then use reverberation to soften the image so you can then throw it further, because the image from the reverb is coming in from a lot of different directions. Basically, I have the direct sound at the stage area, and 10 percent (effects/dry sound mix) in the LCR arrays, then the left and right mid-surround speakers are bled with, say, a 60 percent (effects/dry sound mix). Sometimes instruments are in the mid-auditorium speakers to give them a sense of being in a different space, but you get the clarity of the pitch. It depends on the shape that I am trying to create: in certain venues, like the Concertgebouw, I don't need to do this, but here we have these little alcoves for placing the mid-auditorium speakers and they actually work as a resonator box for the violins. I can bring the sound from a different side and you actually hear these lines almost like sound clouds. The strings are playing long pitches while the rest of the orchestra is playing much more complex passages, so you have these multiple layers going by with long sustain notes. By the time we get to the rear speakers it's 80 percent reverb and I am sending no direct microphone signal to the rear speakers at all.
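(To put numbers on that front-to-back gradient, here is a small illustrative sketch in Python. The 10, 60 and 80 percent figures and the rule of sending no direct microphone signal to the rears come from Grey's description; the zone names, the simple linear blend law and the function itself are assumptions made for the example, not his console settings.)

# Hypothetical illustration of the effects/dry gradient described above:
# mostly dry at the stage, progressively wetter towards the rear, and no
# direct microphone signal in the rear zone at all. Percentages follow the
# interview; the linear blend law is an assumption for illustration.

ZONE_WET_RATIO = {
    "lcr_mains":      0.10,  # 10 percent effects/dry mix in the LCR arrays
    "mid_auditorium": 0.60,  # mid-surrounds bled with roughly 60 percent
    "rears":          0.80,  # 80 percent reverb by the time it reaches the rears
}

DIRECT_ALLOWED = {
    "lcr_mains": True,
    "mid_auditorium": True,
    "rears": False,          # no direct mic signal is sent to the rear speakers
}

def zone_send_gains(zone):
    """Return (direct_gain, reverb_gain) for a zone under a simple linear blend."""
    wet = ZONE_WET_RATIO[zone]
    dry = (1.0 - wet) if DIRECT_ALLOWED[zone] else 0.0
    return dry, wet

for zone in ZONE_WET_RATIO:
    print(zone, zone_send_gains(zone))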

RA: So you are not really concerned with imaging the system to a given point using delays for this piece, as would be the standard approach?

MG: With the thrust that comes from the stage, I can get the LCR system to push the image, and I can open up two microphones that are down stage and send them to a reverb processor, which then gets the kind of general mix the conductor is hearing, and push that reverb out up the side of the house. I only delay the mid or rear speakers if there is a difficulty with the feel of the reverb in the room – if I can hear the reverb in the back then maybe I still need the SPL to push it out but I just need to delay it back a bit more. With the sound effects, voices and replay we are using, and with seven discrete zones, it doesn't really matter if it is 50 milliseconds late or something.

RA: What is the effects processing set up for the show?

MG: I am using two processors: a Lexicon PCM91 and Max/MSP from Cycling '74, which is a software package designed by computer scientists who work in music and acoustics, available for both Mac and PC now, and used with any standard Firewire audio interface. The program will allow you to do anything you could conceive of in the audio world so far. You can apply filters and delays, and from this create reverbs, flanging effects and chorus effects. These audio processes are then the basis of sound synthesis and manipulation, either processing the real-time computer audio input (multi-channel with very low latency), or by first creating soundfiles then processing the stored audio data. Control of all synthesis parameters can be done by other audio sources or any MIDI device.

RA: How is the software package integrated into the sound design?

MG: Max/MSP is driving playback levels of the pre-recorded spoken word and cityscape sounds for the entire piece, as well as reverb processing on select orchestra microphones. Eight outputs of the Firewire audio interface feed console input channels. There are two reverb outputs and six playback outputs. Through the FOH console's matrix I feed the respective playback zones to loudspeaker groupings, as well as where I locate the custom Max/MSP-tuned reverb we created. This custom reverb processing in Max/MSP is basically very long freeze-frame tails, made from selected orchestra and chorus microphones fed into the computer, then pushed through a multi-band vocoder, then pushed through long reverb tails, all done in Max/MSP. I can then harmonically tune and change all of the bands of the vocoder, in real time, as the orchestra performs. It's like a sequence of block chords moving along, tuning the vocoder in real time. The result is something we call 'Tuned Space'. Harmonically tuning the reverb to the music creates an image of the concert hall walls disappearing. The playback zones are a one-to-one 'zone to speaker' relationship, though to fill in the gaps I sometimes 'cross blend' zones, or Max/MSP does. The PCM91 is used as reverb processing only.
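(The 'Tuned Space' idea can be pictured with a heavily simplified offline sketch. The version below, in Python with NumPy and SciPy, stands in a feedback delay for the freeze-frame reverb tail and a bank of narrow bandpass filters for the harmonically tuned multi-band vocoder; the chord, filter widths and decay constants are invented for illustration, and none of this is Grey's actual Max/MSP patch.)

# Hypothetical sketch of the "Tuned Space" concept: take a short snapshot of an
# orchestra microphone, stretch it into a long decaying tail with a feedback
# delay (a crude stand-in for a freeze-frame reverb), then keep only narrow
# bands around the pitches of the current chord (a crude stand-in for a
# harmonically tuned multi-band vocoder).
import numpy as np
from scipy.signal import butter, sosfilt

SR = 48_000  # sample rate in Hz

def freeze_tail(snapshot, seconds, feedback=0.93, delay_ms=80.0):
    """Extend a short snapshot into a long decaying tail with a feedback delay."""
    out = np.zeros(int(seconds * SR))
    out[:len(snapshot)] = snapshot
    d = int(delay_ms * SR / 1000)
    for i in range(d, len(out)):
        out[i] += feedback * out[i - d]
    return out / (np.max(np.abs(out)) + 1e-12)

def tune_to_chord(tail, chord_hz, width=0.03):
    """Keep only narrow bands around each chord pitch, then sum the bands."""
    tuned = np.zeros_like(tail)
    for f in chord_hz:
        lo, hi = f * (1 - width), f * (1 + width)
        sos = butter(4, [lo, hi], btype="bandpass", fs=SR, output="sos")
        tuned += sosfilt(sos, tail)
    return tuned / (np.max(np.abs(tuned)) + 1e-12)

if __name__ == "__main__":
    # Stand-in for a one-second orchestra mic snapshot: random noise.
    rng = np.random.default_rng(0)
    snapshot = rng.standard_normal(SR)
    tail = freeze_tail(snapshot, seconds=20.0)   # long "frozen" tail
    chord = [220.0, 261.63, 329.63]              # example chord: A3, C4, E4
    tuned = tune_to_chord(tail, chord)           # constrain the tail to the harmony
    print(tuned.shape, float(np.max(np.abs(tuned))))

(The point of the sketch is only the ordering Grey describes: capture, freeze into a long tail, then tune the tail's spectrum to the passage being played.)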
RA: During the show I was hearing trails of sound moving up and down the hall. What was happening there?

MG: What you are hearing is 'Tuned Space' and how Max/MSP is controlling the reverb-to-speaker relationship. It's about balancing: sometimes I'll use the celli mics to capture something that is happening mid-stage in the woodwinds and feed that to the computer, then, at times, there are these very soft first violin notes and I send those directly to the computer. You then really hear this tail in the high strings and, as an audience member, you know something is happening, but you're not quite sure what. With the rest of the orchestra the same thing is happening but on a much more subtle level. The ear is amazing: you only have to give it the idea once and it knows.

RA: I noticed that the PCM91 had about 29 seconds of reverb decay dialled up on it. How do you control that?

MG: The closer the reverb gets to the stage from the main sound system, the more I will get feedback problems, so I spread that reverb to the mid-auditorium and rear speakers, and I selectively feed instruments to it, depending on the musical passage. It's how I create those long trails.

RA: Looking to the future, do you see this style of treatment and surround sound in the live environment becoming more accepted?

MG: I think it will, and it has been happening slowly. There are systems like the LARES system, but a lot of the places that have them are not using them much. Not that it's a bad conceptual design, it's just seemed like the wrong time for it. But the concept of creating an ambience for an audience tailored to a piece of music and the particular performance space is a good one. The impact of transient information, especially with an orchestra, is such that you really don't have to give much; the problem is more that, as soon as you put microphones on a stage with an orchestra in a concert hall, you open up all kinds of political and artistic issues.

RA: My observation, though, is that the expectations of the average listener are changing, and the classical and opera worlds will need to keep pace with that to maintain a fresh audience.

MG: Yes, the 20th century has created recorded music, and the onset of digital technology means that people have access to great sound in their homes, and their expectations are changing daily. People can listen to their favourite passages of music over and over, where, before, people would listen to a show as an event, which would finish, and then they may not have a chance to re-listen to that music for a couple of years. This is totally changing the approach to music and music-making for modern composers. John Adams, for example, tries to apply technology in his compositions because of that, and he is trying to push forward the concept of modernising the concert hall stage a little bit.

RA: When you are constantly touring and coming across new venues, how do you communicate with the staff at those venues and get a sense of what lies ahead?

MG: Typically, I will email out a suggested design for the hall, going on what information I have. Here (at the Sydney Opera House), it was easy, as they have a great virtual tour of the spaces on their website, so I was able to have a strong sense of how the design would best work. I then wait for the reply and, from that, I usually get a feeling for whether I am going to need more help or not. When specifying Meyer for these performances I know I will get great support from everyone, right through to John and Helen Meyer themselves, who seem to have a great understanding of the artistic vision and seeing it all come to fruition. Here at the Sydney Opera House it has been smooth from day one; the crew have a really strong ethic and good knowledge of the hall and the speakers they are using – it's a really tight ship. I've been really happy here.

[Photo caption: Mark Grey during setup of the Meyer system at the Sydney Opera House. The bottom left photo shows the full multimedia approach: live orchestra, surround and video.]
