Whatever Happened to Ambisonics?

by Richard Elen

Originally published in AudioMedia Magazine, November 1991.

The Ambisonic surround sound system was developed in Britain during the 1970s, hard on the heels of the so-called ‘quadraphonic’ techniques --- and became tarred with the same brush. For a number of reasons, more political than technical, it has up to now received only limited acceptance in the consumer and professional audio marketplaces. But now all that is changing. Major interest from Japanese hi-fi manufacturers, and the current interest in encode-only stereo enhancement, are bringing the system back into the limelight. But according to Richard Elen, who has been working with the system for over 15 years, it never went away.

“In nature, sounds come from all around our ears. Reproduced sounds come from only a few loudspeakers. Directional distortion results whenever our ears can hear the difference. As other distortions in the audio chain have been progressively lessened, so directional distortion has become more noticeable.

“The earliest widely used attempt to mitigate directional distortion is stereo, which however gives a directional illusion only over a frontal sound stage. The Ambisonic technology is the culmination of over two decades of systematic research into how directional distortion can be reduced as much as possible using any given number of audio channels and loudspeakers.

“Just as the accurate reproduction of performed music is the crucial test of audio fidelity, so the ability to reproduce correctly the directionality of natural sounds is the crucial test of a surround sound system. Unless it can do this, there will not be the correct disposition of indirect sound which provides the acoustic ambience of the performance and gives the position-dependent labelling of direct sounds by their wall reflections, which is an important aspect of the appreciation of music.

“If a system can cope with this difficult task, it should go without saying that it can easily deal with the relatively simple problems of synthetic source material. A system of surround sound which is able to reproduce the directionality of indirect reverberant sounds, as well as of direct sources, is termed ‘Ambisonic’.” --- NRDC Ambisonics brochure, 1979.

Ambisonics was the brainchild of a small group of British academics, notably Michael Gerzon of the Mathematical Institute in Oxford and Professor P B Fellgett of the University of Reading. From the beginning, it was designed as a surround sound system that would overcome the major problems of the so-called ‘quadraphonic’ systems that were its predecessors --- the main one being that they simply didn’t work very well. Research rapidly indicated, however, that in addition to providing full surround sound in an encode/decode environment (where the original recording is encoded into a stereo/mono-compatible form for transmission and later decoded by the listener into multiple speaker feeds), Ambisonics could also offer a significant ‘super stereo’ capability without decoding. With current interest in single-ended stereo enhancement techniques like RSS and QSound [see Audio Media April and Aug./Sept. 1991 respectively], it’s interesting to note that Ambisonic processing equipment has been used as a single-ended stereo enhancement device by radio stations, especially AM stereo stations in the States, for almost a decade.
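To give a flavour of what that encode/decode process involves, here is a minimal Python sketch. It is an illustration, not anything from the original article: it shows the directional encoding at the heart of the system (horizontal first-order B-format) and a deliberately naive decode to four speakers in a square. The stereo/mono-compatible transmission form mentioned above (UHJ) adds a phase-amplitude matrix on top of this and is not attempted here.

    import numpy as np

    def encode_bformat(mono, azimuth_deg):
        """Encode a mono source at a given azimuth into horizontal
        first-order B-format (W, X, Y). Convention assumed for this
        sketch: 0 degrees = straight ahead, 90 degrees = left."""
        az = np.radians(azimuth_deg)
        w = mono / np.sqrt(2.0)   # omnidirectional component, -3 dB by convention
        x = mono * np.cos(az)     # front-back figure-of-eight
        y = mono * np.sin(az)     # left-right figure-of-eight
        return w, x, y

    def decode_square(w, x, y):
        """A naive 'virtual cardioid' decode to a square of speakers.
        Real Ambisonic decoders add psychoacoustic shelf filtering
        and are matched to the actual speaker layout."""
        feeds = []
        for spk_deg in (45.0, 135.0, 225.0, 315.0):   # LF, LB, RB, RF
            phi = np.radians(spk_deg)
            feeds.append(0.5 * (np.sqrt(2.0) * w + x * np.cos(phi) + y * np.sin(phi)))
        return feeds

A source encoded at 45 degrees comes out loudest from the left-front speaker and silent from the right-back one, which is the essence of the approach: direction is carried by the ratios between a small, fixed set of signals, independent of how many speakers eventually reproduce them.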
Ambisonics built on the astonishing work on stereo recording and reproduction performed by Britain’s early audio genius, Alan Dower Blumlein. Blumlein was working on stereo recording and disc-cutting in the early Thirties, and as well as developing the stereo cutting system introduced some 25 years later for microgroove stereo LPs, he also invented what is at once the simplest and most accurate of all stereo recording systems: M-S coincident pair recording. At a time when Bell Laboratories in the States were also investigating stereo, but with omnidirectional spaced microphones (which left a hole in the middle that required a third, centre channel to fill it in -- a precursor of Dolby surround?), Blumlein realised that there was more to the ear/brain combination’s ability to position sound sources in space than merely the difference in level between the ears. The principle is easily illustrated by considering a conventional mixing console panpot being used to pan a mono signal between two speakers --- an illustration that indicates, too, how little Blumlein’s work is now remembered in the audio industry.

Panpotted Mono

Something we often forget when we mix a multitrack tape to ‘stereo’ is that what we’re doing really represents the spatial localisation of sound sources very poorly. Where we place a track in the space between the speakers is purely a matter of which speaker is louder than the other -- that’s what a panpot does. Just imagine that you’re listening to a sound that’s centre-stage. It has equal levels on both channels, and your meters will read identical values. But now move to the left and what happens? The sound follows you to the left, because now there’s more energy reaching you from the left than from the right. That’s the main drawback of this system, because it’s not ‘stereo’ at all: it’s ‘panpotted mono’. Send all the signal to the left speaker and it comes from the left. Send equal levels to both speakers and it’s in the middle -- or is it?

Figure 1. The standard stereo listening position, with the speakers at 60 degrees to each other. Further apart than this and stereo begins to develop a ‘hole in the middle’. Normal panpots operate with level only, so the listener in this example hears the sound coming from the left, because there is more level arriving from the left-hand speaker.

Figure 2. Now the panpot is central, but the listener has moved to the left. The sound still appears to be coming from the left, because there is still more level arriving from the left-hand speaker.
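The behaviour of a level-only panpot is easy to pin down in code. The sketch below is purely illustrative -- it assumes a constant-power (sine/cosine) law, one common choice, though console laws vary. Note that the only thing the function changes is the relative level of the two feeds; both channels stay perfectly aligned in time.

    import numpy as np

    def panpot(mono, position):
        """Pan a mono signal between two speakers using level only.
        position: -1.0 = hard left, 0.0 = centre, +1.0 = hard right.
        Constant-power (sine/cosine) law assumed for this sketch."""
        angle = (position + 1.0) * np.pi / 4.0   # map [-1, 1] onto [0, pi/2]
        return np.cos(angle) * mono, np.sin(angle) * mono

    # A centre-panned tone has identical levels on both channels...
    tone = np.sin(2 * np.pi * 440 * np.arange(48000) / 48000.0)
    left, right = panpot(tone, 0.0)
    print(np.max(np.abs(left)), np.max(np.abs(right)))   # equal peaks

...and since there is no timing or phase difference between the channels for the ear/brain to work with, the image stays central only for a listener seated exactly between the speakers. That is the ‘panpotted mono’ limitation the next section addresses.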
Phase-Shift Panning

When we listen to sounds in real life, they don’t behave like that. The reason is that level between our two ears is just one of the methods we use to localise sound sources in space. There are two others -- phase and the ‘Haas Effect’ -- and some researchers think they’re at least as important as level. If a sound is off to one side, we still hear it with both ears, but there is a difference between the signals arriving at the two ears. Apart from differences in level (and high-frequency content, for that matter), there’s another factor: phase. The wavefronts from the sound source don’t reach the ears at exactly the same time, and we interpret that phase difference as localisation information. It’s a very impressive effect if you try it yourself in the studio.

A simple method of experimenting with phase-based localisation is to set up a pair of delay lines, one variable and the other fixed. Send the same mono signal to both of them and pan the output of one hard left and the other hard right. Make sure that only the delayed signal is delivered to the output -- none of the input signal should be heard -- and that the levels from both DDLs are identical (e.g. by setting them up on your console metering). Set the delay on the fixed DDL to, say, 100 milliseconds. Set up the other delay to a basic length of 100 ms too, but with a knob to vary the delay equally above and below this figure -- say 100 ms +/-50 ms. Now vary the delay back and forth either side of the 100 ms position and you’ll hear that, without changing the levels at all, you can create a remarkable panning effect. You’ll notice that at some settings, sounds can even seem to go way beyond the speakers.

Switch your monitoring into mono while you do this, by the way, and you’ll hear a familiar sound -- the ‘swooshing’ effect of true ‘tape phasing’ or ‘flanging’. This was how, using tape machine record-play head delays instead of DDLs, George Chkiantz at Olympic Studios produced the original sound of ‘phasing’ on the Small Faces hit, Itchycoo Park -- possibly its first controlled use. (The effect had been heard earlier on the Fifties record The Big Hurt, but that was done by running two identical copies of a piece of music together and changing the speed of one of them to bring it into sync -- a rather haphazard way of creating the effect.)

Figure 3. Set-up for phase-shift panning. Use this in mono to obtain flanging.

Echoes And Delays

The third method of spatial localisation used by the ear/brain combination is called the Haas Effect, after the man who discovered it. The theory is simple: if we hear a sound directly, but we also hear it at the same time indirectly, say bouncing off a wall, two signals arrive at the ears. The direct sound arrives first, but the reflected sound turns up just a little later. The brain rightly interprets that second arrival as a reflection and doesn’t confuse it with the true direction of the sound. We’re talking here of significant delays, in the order of tens of milliseconds. Exactly what delay you can hear will vary between people -- try it with the same set-up as that described above, but set the delays to different values and listen without twiddling at the same time -- and you will notice how the delay is ignored as a localisation cue when it becomes longer than a certain amount.
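For anyone who would rather try the experiment in software than with a pair of DDLs, here is a rough Python simulation of the same patch. It is a sketch under the article’s assumptions (equal levels, delayed signal only), not studio code: small inter-channel offsets act as the phase cue that pans the image, the mono sum gives the comb-filtered ‘flanging’ sound, and offsets in the tens of milliseconds fall into Haas territory, where the later arrival reads as an echo rather than a direction.

    import numpy as np

    SR = 48000   # sample rate, an arbitrary choice for this sketch

    def delay(signal, seconds):
        """Delay a signal by prepending zeros."""
        pad = int(round(seconds * SR))
        return np.concatenate([np.zeros(pad), signal])

    def ddl_pair(mono, fixed_s, variable_s):
        """Two delay lines at equal level, panned hard left and right.
        As in the text, only the delayed signals reach the output."""
        left, right = delay(mono, fixed_s), delay(mono, variable_s)
        n = max(len(left), len(right))               # pad to equal length
        left = np.pad(left, (0, n - len(left)))
        right = np.pad(right, (0, n - len(right)))
        return left, right

    noise = np.random.randn(SR)   # one second of noise to listen to

    # A fraction of a millisecond of inter-channel offset shifts the
    # stereo image, with no level change at all.
    left, right = ddl_pair(noise, 0.100, 0.1003)

    # Summed to mono, the pair becomes a comb filter; sweeping the
    # variable delay moves the notches -- the 'swooshing' of flanging.
    mono_sum = left + right

    # At tens of milliseconds the second arrival is heard as an echo,
    # not a position cue: the Haas effect at work.
    left2, right2 = ddl_pair(noise, 0.100, 0.140)

Note that everything audible here depends only on the difference between the two delays; the common 100 ms simply gives the variable delay room to swing both ways, exactly as in the console set-up described above.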
