Resources for Performing Arts Technology Unit Standard 28007 – Select and apply a range of processes to enhance sound in a performance context

Level 3 6 Credits

By Duncan Ferguson

Learning Ideas Limited

Learning Ideas Limited PO Box 21-246 Christchurch New Zealand

www.learningideas.co.nz

Copyright © 2013 Learning Ideas Limited

All rights reserved. This digital resource may be reproduced only for students within the school that purchased it.

Notice of Liability

The information in this book is distributed on an “as is” basis, without warranty. Every effort has been made to make sure all aspects of the question and answer guides are consistent with the guidelines as set out by the New Zealand Qualifications Authority in February 2013. However, it is likely that these guidelines will change over time. Learning Ideas Limited accepts no responsibility for inconvenience caused as a result of any subsequent changes that may affect the accuracy of information in this book.

Introduction

Welcome to the Learning Ideas Limited workbook for the new SOND 3 unit standard – Select and apply a range of processes to enhance sound in a performance context. This unit standard is a continuation of the SOND 1 (26687) and SOND 2 (27703) unit standards, whose focus has been using live PA and recording systems for a variety of ‘performance contexts’. The main focus of this standard is for students to build upon that experience and knowledge and to bring it all together to produce a much higher quality ‘product’.

This standard starts with a lot of the theory behind how sound travels and how we perceive it. I also touch on how it is converted to digital. I believe this knowledge is essential, and how well students apply it to their decisions in the practical setup and operation of PA and recording systems is what will determine whether they get Achievement, Merit or Excellence in their assessments.

In addition to this workbook I have also produced a very detailed and extensive video tutorial (over two hours in length). In this video I have endeavoured to demonstrate many of the PA and recording setup and operation tasks required of students in their practical assessment. Use it as a guide in addition to the many other tutorials available online. You can download it from: http://tinyurl.com/sond3small or http://tinyurl.com/sond3large

In this workbook you will find recommendations that students revise material from SOND 1 and 2 as well as the MUSTEC 3 (27658) workbook from Learning Ideas Ltd. This is because in those books I went further than what was required for those particular standards, so rather than repeat too much here I have provided some direct download links should you not have your workbook handy.

Finally, depending on the nature of your assessment for this standard you may also find that it meets the requirements for Outcome One of unit standard 23730 (level 3, 8 credits). You probably just need to add in a track or two of a recorded MIDI performance and you will find that the one task can be used for assessment in two standards.

Duncan Ferguson, April 2013


Outcome 1 – Select and apply a range of processes to enhance sound in a performance context.
Range: processes used to – amplify, enhance tonal balance, control dynamic range, create ambience, filter, balance multiple sources, create effects, capture, edit. A minimum of six processes are used.

Before we start getting into the nitty gritty of the Evidence Requirements let us discuss the above outcome and ‘range’. Firstly, “select and apply a range of processes to enhance sound in a performance context” basically means: things you do on the various parts of a PA or recording system to make it sound good! “Performance context” can be anywhere there is a musical performance – such as a live concert (rock or jazz concert, kapa haka, etc), recording studio, music theatre production, etc. The “processes” or ‘things you do’ relate to:

• Amplification – this is perhaps the most obvious purpose of a PA system – it amplifies a performance so everyone in a venue can hear the music. We’ll get into what kinds of amplifiers and speakers you should use (how to match power ratings so you don’t blow anything up!) and also recommended ‘monitors’ for a recording studio.

• Enhance tonal balance – this is where you use EQ or equalisation on individual instruments and the overall ‘mix bus’. Another tool you can use is a multi-band compressor, which links nicely to the next process:

• Control dynamic range – this is where we’ll use compressors, limiters, gates and expanders to reduce the difference (as appropriate) between the soft and loud sections of a performance, individually and collectively – i.e. on an instrument-by-instrument and whole band or ‘mix’ basis.

• Create ambience – this is to do with manipulating the ‘space’ around a sound using tools like reverb and delay. Using these tools we can alter the environment that a performance sounds like it was in and provide ‘depth’ to a sound.

• Filter – basically filters are any kind of processing that you run audio through. Many DAWs and video editing applications (like Final Cut Pro) have filters for noise reduction, hum removal, de-essers, and noise and crackle removal. Usually these are just combinations of EQs, dynamics, etc with particular presets (which are usually adjustable).

• Balance multiple sources – this is perhaps the first thing mixing engineers tend to do in the mix process. This is when you are moving the faders and ‘gain’ knobs, as well as adjusting the panning knobs on tracks, to create a balanced or ‘good’ mix. Part of our journey in this area will be learning about maintaining ‘unity gain’.

• Create effects – this is when you are putting any ‘candy’ on the sound. It could be a specific distortion, chorus (a type of delay), autotune or other ‘plugin’ or FX unit, or it could be clever use of one of the above (such as a technique called parallel compression).

• Capture – this is when you are recording the performance. This is not only applicable to a recording context, as many ‘live’ digital mixing desks give you the option to record all the individual channels or tracks of the performance, allowing


you to produce a higher quality mix for release on CD, DVD or the internet (although it could also be as simple as using the ‘tape out’ function of an analogue desk to record to a CD recorder unit or, God forbid, cassette tape!).

• Edit – this is only really applicable to a recording studio context. It is where you edit the quality of recorded tracks prior to starting the mix (although, more and more, the line between editing and mixing is being blurred). It could involve deleting mouth noises and breaths on a vocal track, or muting a microphone when its instrument isn’t playing (or a host of other editing techniques).

In the assessment of this standard you are required to demonstrate and articulate your knowledge of all of these processes in a practical demonstration. You should not be tested on each isolated process individually, but on every process at once, over the course of a ‘performance’. Balancing, enhancing tonal balance, controlling dynamic range, creating effects… are all just part of mixing. And that is what you are being assessed on: how well you are able to mix, using the above processes.

Before we get started on the Evidence Requirements I encourage you to go back to SOND 2 (27703) - Demonstrate and apply knowledge of sound control and enhancement processes required for a performance context. That resource gives you a great outline of the mix process and goes into reasonable depth discussing all of the above ‘processes’. Also useful is the discussion on mixing in MUSTEC 3 (27658) - Demonstrate and apply knowledge of production and music notation application(s). You should ensure that you understand everything in those resources from Learning Ideas Ltd before proceeding.


1.1 Sound processes selected and applied to enhance the performance context are described using sound theory.
Range: frequency, phase, wavelength, amplitude, pitch, sound pressure level, fidelity, reflection, absorption, sampling rate. A minimum of six theories are required.

While the following are rather technical, you’ll soon see that to be a good sound engineer you need to have a firm knowledge of the theory behind these words. Don’t believe me? Here is one example:

A common technique for recording kick drums is to use two microphones – one inside the drum close to the beater head to capture ‘attack’, and another mic a couple of feet outside the drum to capture the ‘fullness’ of the tone (because it takes a few feet for some low frequencies to fully develop – another bit of theory we’ll cover later). One mic will have great attack that can cut through a mix while the other is nice and full, but will lack the ability to cut through the mix. The solution? Use them both, right? What sometimes happens (at first) is that when you combine them the overall kick drum sound is rubbish. To understand why this happens you need to understand the theory behind frequency, wavelength (of various frequencies), amplitude and how all these combine to contribute to the ‘phase’ of the combined sounds. Understanding all these will help guide your decisions around mic placement and fader balance (not to mention EQ of the individual channels) to create an awesome kick drum sound that can cut through a mix with the right amount of fullness, laying down a solid bottom end to your band’s performance.

See the SOND 3 video for a practical explanation regarding microphone placement and how it relates to issues of phase, frequency, wavelength, pitch, etc.

While working through and learning the following information it is paramount that you keep a couple of key words from the 1.1 Evidence Requirements in the forefront of your mind: select and apply. This unit standard is not just about learning the theory of sound, it is about taking this theoretical knowledge and applying it to make you a better mixer. I will be giving examples of each concept and how it applies to the process of mixing, but there will likely be many more. You are encouraged to research each concept as well as seek out professional sound mixers to come in and discuss how these concepts apply to their work (maybe your teacher could spend some STAR funds on this?).

Theory¹: Sound is a series of vibrations moving as waves through air or other gases, liquids, or solids. A plucked guitar string, for example, sets off vibrations in the air. Detection of these vibrations, or sound waves, is called hearing. The detection of vibrations passing through the ground or water is also called hearing. Some animals can detect only vibrations passing through the ground, and others can hear only vibrations passing through water.

¹ A useful video to accompany the following information may be downloaded from: http://tinyurl.com/introtodigitalaudio

Let’s first focus on vibrations in air. When a string is plucked, air molecules next to the string get pushed together (compressed) and pulled apart (rarefied). When they are pushed together air pressure is high and when they are pulled apart air pressure is low. You can see the same sort of movement when you throw a rock into a pool. The energy created by the rock causes the water particles to compress (high pressure) and pull apart (low pressure), creating a wave of energy. Once the wave has passed, the water particles move back to their original position. The same happens with air particles once the wave of energy has passed through. The following diagram shows what happens to air particles when sound energy passes through them. The air particles are disturbed by a guitar string.

The compression and rarefaction of air particles can be represented with a wave shape on a graph, with a vertical axis for pressure and a horizontal axis for time. Below you see the relationship between the pressure of air particles and how it would be represented on a time domain plot:

• Top of waveform – high pressure (air particles compressed).
• Bottom of waveform – low pressure (air particles rarefied).
• Centre line – no pressure change (undisturbed air particles).

Above we see the simplest type of waveform, which is a sine wave. This is known as a pure tone. Every musical instrument is a device for creating the minute pressure changes (or sound waves) that pass through the air and are interpreted by our ears as sound. When these pressure changes form an irregular pattern, the sound we hear is noise;


when the pattern is regular the sound is a note of definite pitch. Here is a flute playing a ‘C’. Notice how regular the waveform is.²

And here is a crash cymbal; notice the irregular nature of the waveform.

When the rate of change is rapid, the note has a high pitch; when the rate is slowed down the pitch of the note falls. We call this rate of change the 'frequency' of a note. Instruments set up pressure changes by making something vibrate - a string, reed, player's lips, or the entire body of the musical instrument such as a wood block or piece of metal.
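If you want to hear a pure tone for yourself, the sketch below generates one with nothing more than Python’s standard library. It is only an illustration: the 440 Hz frequency, one-second duration, amplitude and output file name are all arbitrary choices, not anything required by the standard.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100      # samples per second (CD quality)
FREQUENCY = 440.0        # Hz - the 'A' above middle C
DURATION = 1.0           # seconds
AMPLITUDE = 0.5          # fraction of full scale, leaving plenty of headroom

frames = bytearray()
for n in range(int(SAMPLE_RATE * DURATION)):
    # Instantaneous 'pressure' of the sine wave at sample n
    value = AMPLITUDE * math.sin(2 * math.pi * FREQUENCY * n / SAMPLE_RATE)
    frames += struct.pack('<h', int(value * 32767))   # one 16-bit signed sample

with wave.open('pure_tone_440.wav', 'w') as wav:
    wav.setnchannels(1)           # mono
    wav.setsampwidth(2)           # 2 bytes = 16 bit
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(bytes(frames))
```

Import the resulting file into a DAW and you will see the smooth, regular sine shape described above; replace the sine with random values and you get the irregular pattern we hear as noise.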

Frequency

In the context of sound and how we hear it, frequency is the number of times a sound wave goes from still, to high pressure, to low pressure and then back to the starting point, all within a second. For example, when a complete cycle of a sound wave (one cycle per second is also referred to as one hertz, or Hz) happens 440 times within the space of a second we get the note ‘A’ above ‘middle C’. That is why standard tuning is called “A440”. When there are 220 cycles of the waveform within a second (or 220 Hz) we also get an ‘A’, but this time an octave lower than the previous ‘A’. When there are 110 cycles per second we get an ‘A’ a further octave below. Similarly, when there are 880 cycles per second (or 880 Hz) we hear an ‘A’ an octave above the first ‘A’. In the following picture (over the page) we see a sine wave of 440 Hz and a sine wave of 220 Hz. We can see that when the 440 Hz sine wave has completed a full cycle, the 220 Hz sine wave is only half way through.

² The difference in height from the start of the waveform to the end is to do with volume – the flute player is playing louder over time.

Do you see the pattern here? Every time we go an octave above or below a note the cycles per second of that note doubles or halves respectively.

In summary: Frequency = cycles per second = Hertz = Hz

If we think of frequency as being the physical characteristic, we think of pitch as being the equivalent perceptual characteristic.
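As a quick numerical check of that doubling pattern, here is a small sketch in Python; the range of octaves printed is an arbitrary choice for illustration.

```python
# Each octave up doubles the frequency; each octave down halves it.
a4 = 440.0  # 'A' above middle C (A440)

for octave_shift in range(-3, 3):
    freq = a4 * (2 ** octave_shift)
    print(f"A shifted {octave_shift:+d} octave(s): {freq:.1f} Hz")

# Prints 55.0, 110.0, 220.0, 440.0, 880.0 and 1760.0 Hz
```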

So that’s all very interesting I hear you say (or perhaps not!). But how does this relate to the real world? We’ll see this developed further soon as we discuss phase and wavelength. However, one useful tidbit of information is that when you are chopping up audio files in a Digital Audio Workstation (DAW) such as Pro Tools, GarageBand, etc you should always aim to make your cuts at a point where the waveform crosses the ‘0’ line (the place that represents the air particles being in their natural undisturbed state). If you chop your audio anywhere else you will most likely create an audible ‘click’ or ‘pop’ in your tracks.

(Figure: a waveform with the zero crossing line and a cut point marked.)

Many clever DAWs automatically snap your cut point to the nearest ‘zero crossing’ but some do not. If you absolutely have to chop your audio at a point away from the ‘zero crossing’ then you will need to put a quick fade in or fade out on your audio (which in essence alters the audio waveform to start at ‘0’, fading up to where your cut point was inserted).
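For illustration only, here is a rough sketch of how a ‘snap to nearest zero crossing’ could work. It assumes the numpy library is available and that the audio is a single array of floating-point samples centred on zero; the function name and the 440 Hz test signal are made up for this example, and real DAWs will do something more sophisticated.

```python
import numpy as np

def nearest_zero_crossing(audio, cut_index):
    """Return the sample index of the zero crossing closest to cut_index."""
    signs = np.sign(audio)
    # Indices where the sign flips between one sample and the next
    crossings = np.where(np.diff(signs) != 0)[0]
    if crossings.size == 0:
        return cut_index                       # no crossing found: keep the cut as-is
    return int(crossings[np.argmin(np.abs(crossings - cut_index))])

# Example: a cut requested at sample 1000 gets snapped to the nearest crossing
audio = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
print(nearest_zero_crossing(audio, 1000))
```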


How else can frequency/pitch awareness be important in mixing? Well, if you hear brass instruments that sound particularly ‘honky’ in your mix, those of you with perfect pitch (or good relative pitch) might find that certain notes, particularly in the area 2-3 octaves above middle C, are worse than others. Knowing that the pitch of notes in this area equates to frequencies of approximately 1 kHz-3 kHz will help you use EQ much more effectively in fixing up the tonal balance of the brass instrument sound.
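If you don’t have perfect pitch, the standard equal-temperament formula will translate a note into an EQ target frequency for you. This is just a sketch – the function name is mine, and the example notes are chosen to match the 1-3 kHz region mentioned above.

```python
def note_to_freq(semitones_from_a440):
    """Equal-tempered frequency of a note, given its distance in semitones from A440."""
    return 440.0 * 2 ** (semitones_from_a440 / 12)

# 'C' two octaves above middle C (C6) is 15 semitones above A440;
# 'C' three octaves above (C7) is 27 semitones above it.
print(round(note_to_freq(15)))   # ~1047 Hz
print(round(note_to_freq(27)))   # ~2093 Hz
# Both sit in the roughly 1-3 kHz band where 'honky' brass tends to live.
```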

Amplitude

Amplitude is the characteristic of sound waves that we perceive as volume. The maximum distance a wave travels from the normal, or zero, position is the amplitude; this distance corresponds to the degree of motion in the air molecules of a wave. As the degree of motion in the molecules is increased, they strike the eardrum with progressively greater force. This causes the ear to perceive a louder sound. A comparison of samples at low, medium, and high amplitudes demonstrates the change in volume caused by altering amplitude. These three waves have the same frequency, and so should sound the same except for a perceptible volume difference.

If the amplitude of a waveform exceeds the amount of headroom available in a mixer or DAW channel/track then we see a waveform that looks like this:

This is known as clipping and will often show up on your mixer channel as a flashing red light and will sound terrible (usually very distorted). On a digital system you should never exceed 0 dB (the top of the track) – on an analogue mixing desk you can usually encroach a little on the clipping area as this can create a bit of warmth and a sense of the sound being slightly ‘overdriven’. However, as with most things, “less is more” so don’t overdo it!

In a digital recording system (like Pro Tools), if you have set up your recording session to record at 24-bit instead of 16-bit (more on what this means later) then you have no need to push the input gain of your tracks too ‘hot’. There is lots of ‘headroom’ when recording at 24-bit, so you don’t need to worry about your gain being too low in your tracks and getting close to the ‘noise floor’.
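As a rough illustration of how levels relate to digital full scale, here is a sketch of a very simple peak meter. It assumes numpy and floating-point samples scaled to ±1.0 (the usual internal representation in a DAW); the test signal and its level are arbitrary.

```python
import numpy as np

def peak_dbfs(samples):
    """Peak level in dBFS, assuming samples are floats scaled to +/-1.0.
    0 dBFS is digital full scale; anything that reaches it will clip."""
    peak = np.max(np.abs(samples))
    if peak == 0:
        return float('-inf')
    return 20 * np.log10(peak)

# A 24-bit recording can peak well below full scale and still sit far above
# the noise floor; e.g. a peak of 0.25 is only about -12 dBFS.
quiet_take = 0.25 * np.sin(2 * np.pi * 100 * np.arange(44100) / 44100)
print(round(peak_dbfs(quiet_take), 1))   # roughly -12.0
```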

Sound waves

Sound moves through the air like waves in water. Sound waves consist of pressure variations traveling through the air. When the sound wave travels, it compresses air molecules together at one point. This is called the high-pressure zone or positive component (+). After the compression, an expansion of molecules occurs. This is the low-pressure zone or negative component (-). This process continues along the path of the sound wave until its energy becomes too weak to hear. The sound wave of a pure tone traveling through air would appear as a smooth, regular variation of pressure that could be drawn as a sine wave.

The above picture illustrates that when the air molecules are compressed the line of the wave is in the positive, and when the air molecules are rarefied the waveform is in the negative. Remember that sound waves move in time, so the horizontal x-axis of the above graph is a representation of the point in time at which the wave is displacing the air particles. When the waveform crosses the x-axis the air pressure is at the normal undisturbed air pressure of the room (which for us would be complete silence).

Speed of sound – The speed of sound in air is about 343 metres per second (roughly 1,235 km/h or 770 mph). The speed of sound is fairly constant no matter what the frequency or pitch. The speed of sound varies with the type of medium, and conditions such as temperature. For example, sound waves travel faster through water than through air. For a list of the equations of the speed of sound through different media I recommend visiting:


http://library.thinkquest.org/19537/Physics4.html

Wavelength

The wavelength of a sound is the physical distance from the start of one cycle to the start of the next cycle – in other words, the distance covered by one complete cycle. Wavelength is related to frequency by the speed of sound. The wavelength of a sound wave of any frequency can be determined by this relationship:

Wavelength = speed of sound / frequency, or λ = ν / ƒ

where λ = wavelength, ν = velocity (the speed of sound) and ƒ = frequency.

Therefore to work out the wavelength of a frequency of 1000 Hz the equation would look like this: 343/1000 = 0.343 m or 34.3 cm. This means that the points of maximum rarefaction in the air are 34.3 cm apart at any time.

The wavelength of A440 would be worked out as follows: 343/440 = 0.78 m or 78 cm.

The wavelength of 10,000 Hz would be worked out as follows: 343/10000 = 0.034 m or 3.4 cm.

You will notice from these equations that the lower the frequency (or the lower the note) the longer the wavelength, and the higher the frequency (or the higher the note) the shorter the wavelength. This is an important point to remember as it is the beginning of understanding why some waveforms are reflected or absorbed by different materials (which has a big bearing on understanding acoustics in terms of live mixing and setting up recording studios). A great website for working out wavelengths is: http://www.sengpielaudio.com/calculator-wavelength.htm

Bringing back our double miking setup of a kick drum as mentioned above, we can now see the reason why the close mic gives us the ‘click’ part of the sound without any fullness, whereas the mic further outside the kick drum gives a fuller sound. It’s because the fundamental frequency of a kick drum hit of (for example) 100 Hz requires 3.43 metres to go fully through one cycle.
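The wavelength formula is easy to sanity-check in code. A minimal sketch (the 343 m/s figure is the usual room-temperature approximation, and the list of frequencies is just illustrative):

```python
SPEED_OF_SOUND = 343.0   # metres per second in air (approximate; varies with temperature)

def wavelength(frequency_hz):
    """Wavelength in metres: lambda = v / f."""
    return SPEED_OF_SOUND / frequency_hz

for f in (100, 440, 1000, 10000):
    print(f"{f} Hz -> {wavelength(f):.3f} m")

# 100 Hz (a typical kick drum fundamental) works out at about 3.43 m,
# which is why the 'full' part of the sound needs room to develop
# and why a close mic mostly captures the attack.
```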

We now have a lot of theory about how sound travels through the air. I encourage you to stop here, read over it and answer the following revision questions before moving on. We will get to some really relevant and useful information soon, but don’t move too fast with this knowledge – make sure it is secure before you try anything else.


REVISION TEST

1. What do we call the detection of sound waves? ______
2. When air pressure changes form a regular pattern what do we hear? ______
3. Define ‘frequency’ in the context of a musical note: ______
4. How many cycles per second does the ‘A’ note have one octave above A440? ______
5. What is the perceptual characteristic associated with the physical characteristic of frequency? ______
6. What is the perceptual characteristic associated with the physical characteristic of amplitude? ______
7. In the following graph draw a sine wave with the same frequency but roughly twice the amplitude:


Phase

When a sound wave is drawn on a graph, the section of the sound wave where the air pressure is above normal (compressed) is the positive phase. When the air pressure drops below normal (rarefied) it is said to be in the negative phase.

When you have two identical sound waves that differ slightly in time (such as a sound being picked up by two microphones that are not the same distance away from the sound source) you have a degree of phase shifting going on. If the two sounds were plotted on a graph they may look like this:

However, we don’t hear these two waveforms as distinct sounds but as one sound with the two waveforms added together. The following examples show what happens in the real world when waveforms get combined. In each picture the top two waveforms are being added together to produce the waveform at the bottom.

Example 1 – Here we see that two waveforms that are exactly the same end up producing a waveform twice as loud when combined.


Example 2 – Here we see that when the second waveform is moved almost 180° out of phase we get almost silence, as the two waveforms end up cancelling each other out.

Example 3 – Here we see that when the waveforms are slightly out of alignment we get a slightly distorted waveform.


Example 4 – Here we’ve added another waveform into the mix, making the combined waveform at the bottom even more complicated.
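Before moving on, the reinforcement and cancellation shown in the first three examples can be sketched numerically. This assumes numpy; the 440 Hz frequency and the particular phase offsets are arbitrary choices for illustration.

```python
import numpy as np

sample_rate = 44100
t = np.arange(sample_rate) / sample_rate
wave_a = np.sin(2 * np.pi * 440 * t)

for offset_degrees in (0, 90, 180):
    wave_b = np.sin(2 * np.pi * 440 * t + np.deg2rad(offset_degrees))
    combined = wave_a + wave_b
    print(f"{offset_degrees:3d} degrees out of phase -> peak {np.max(np.abs(combined)):.2f}")

# 0 degrees   -> peak ~2.0  (the two waves reinforce: twice as loud)
# 90 degrees  -> peak ~1.41 (partial reinforcement/cancellation)
# 180 degrees -> peak ~0.0  (almost complete cancellation)
```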

You can imagine how complicated the waveforms become when we add even more sounds. And remember, these examples were just using sine waves – reasonably pure tones of the same frequency (pitch) and amplitude (volume). When you start getting into the real world of music the waveforms can become very complex. Here is an example of a stereo waveform from a rock song (all the different instruments’ waveforms have been combined to create this stereo file):

When recording it is very important to remember how the phase relationships between sounds can ruin a recording by distorting which frequencies within a sound get emphasised or reduced. For instance, you may choose to record a certain type of cymbal on a drum kit because of its cool tone, but if you’ve incorrectly placed two microphones too close together over the cymbal the sound of this cymbal will


not be as you hoped, due to the frequencies being incorrectly summed in your recording console.

Or, to go back again to our example from above of two microphones on a kick drum: we will inevitably have some phase interaction as the close mic is picking up the kick drum before the far mic is. In theory this is bad. In practice, though, it may not be an issue. What will usually happen is that your kick drum may sound ‘thinner’, lacking the punch that you can get by just using one mic. However, by moving the microphones around you are adjusting the phase relationship between the two microphones, and in doing so you may create a sound that works really well in a mix.

There is a general rule you can follow to help minimise the negative aspects of phase interaction between microphones. It’s called the 3-to-1 rule. What this means is that if you have a pair of microphones on a sound source (such as two overhead mics on a drum kit) the distance between the microphones must be at least three times the distance each one is from the sound source. For example, if the mics are one foot above the cymbals then they must be three feet apart horizontally.

However, anyone who has experimented with miking up drums will know that sometimes (usually!) it is preferable to put close mics on the toms, snare and hi-hats as well. A common problem is that the drums may sound good in the overheads but when you add in the close mics they actually end up sounding worse! That is why many ‘famous’ drum sounds were created with only two overhead mics and one kick drum mic, or variations. Here is an excellent video that explains this drum miking technique: http://vimeo.com/58215617

Some excellent drum miking techniques and tips (as well as tips for miking up other instruments and ensembles) can be found in the excellent DVD video resource from Alan Parsons, The Art and Science of Sound. If your teacher has not purchased this yet tell them they have to! http://www.artandscienceofsound.com
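If you want to put rough numbers on why a particular pair of mic positions turns a kick drum thin, the arithmetic behind the comb-filter effect is straightforward. This is only a sketch using nothing beyond standard Python; the 0.6 m path difference is a made-up figure for illustration, not a recommendation.

```python
SPEED_OF_SOUND = 343.0   # m/s, approximate

def first_comb_notches(path_difference_m, how_many=4):
    """Frequencies most strongly cancelled when two mics pick up the same source
    with the given path-length difference: the delayed copy arrives 180 degrees
    out of phase at odd multiples of half the delay frequency."""
    delay = path_difference_m / SPEED_OF_SOUND           # seconds
    return [(2 * k + 1) / (2 * delay) for k in range(how_many)]

# e.g. a far kick mic roughly 0.6 m further from the beater than the close mic
for notch in first_comb_notches(0.6):
    print(f"notch near {notch:.0f} Hz")
# The first notch lands near 286 Hz - right in the 'body' of the kick drum,
# which is one reason the combined sound can suddenly turn thin.
```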

Harmonics

When we hear a musical note played we hear the fundamental note (which we often refer to by letter name, for example “middle C”) but also many faint overtones or harmonics. It is important to understand how overtones and the ‘harmonic series’ work, because this explains how we hear different timbre (i.e. why a ‘C’ on a piano sounds different to a ‘C’ of the same pitch on a guitar) and also how notes are produced on wind and string instruments. These overtones have a mathematical relationship to each other and to the fundamental note. A note that has a fundamental frequency of 100 Hz also contains harmonic frequencies at 200 Hz, 300 Hz, 400 Hz and so on – whole-number multiples of the fundamental. The reason these harmonic frequencies exist is because many vibrating sources (such as a column of air inside a clarinet or a guitar string) are capable of vibrating in a number of harmonic frequency modes at the same time. A guitar string, for example, not only vibrates along its full length (which creates the fundamental tone) but simultaneously in vibrations of shorter lengths that are precise divisions of the string’s


length – halves, thirds, quarters, fifths, sixths and so on. Each of these divisions creates its own faint harmonic.
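Because every harmonic is a whole-number multiple of the fundamental, the series is easy to print out. A small sketch (the 110 Hz fundamental is an arbitrary illustrative choice):

```python
fundamental = 110.0   # Hz - the 'A' an octave below A220

for n in range(1, 13):
    print(f"harmonic {n:2d}: {n * fundamental:7.1f} Hz")

# 110, 220, 330, 440, 550, 660 Hz ... - the octave, then the fifth above that,
# then the double octave, and so on up the harmonic series.
```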

So how do we find out the harmonics that are present within a fundamental note? Go to an acoustic piano and hold down a low ‘C’ note – press down the key softly so that the damper releases for that note but make sure the hammer doesn’t strike the string. Now firmly and sharply press down notes above this ‘C’ (release them quickly – play them like a staccato note). If the open low ‘C’ that you’re holding down resonates with the note that you quickly pressed down then that higher note is an overtone of the fundamental ‘C’. The harmonics present in the low ‘C’ are resonating in sympathy with the higher notes that are getting played. As you go up the keyboard you should find that the following notes resonate strongly with the low ‘C’:

What we see here is the first twelve notes of the harmonic series. We can also see the harmonic series in action in brass instruments. If we take the simplest brass instrument, the bugle, we will find that it can only play certain notes. To change the pitches the player must tighten or loosen their lips. However, the pitches that they get to choose from are determined by the harmonic series. If we get a bugle in Bb we will only be able to play the following notes (which are a part of the harmonic series in Bb).

In fact the low fundamental Bb is probably too hard to play, as the player won’t be able to get their lips loose enough. One of the most popular tunes that uses this series of notes is Taps:


We can also see the overtones or harmonics of notes using a spectrum analyser. The picture below shows an electric bass guitar playing a note. Using the spectrum analyser function built into the EQ of iZotope Alloy³ we can see the fundamental note as well as the overtones further up the spectrum.

(Figure: the fundamental frequency and the overtones above it, shown on the spectrum analyser.)

Harmonics and Timbre (or tone colour)

The relative strength or amplitude of the harmonics within notes primarily determines the timbre or tone colour of sounds and instruments. This is why we can tell the difference between the sound of a clarinet playing an A440 and the sound of a violin playing an A440. The notes of each instrument emphasise different parts of their overtones in different ways.

An interesting observation is that if our ears hear a series of overtones within a sound we will also perceive the fundamental (even if it is not actually present). Wikipedia explains it well: “Our ears tend to resolve harmonically-related frequency components into a single sensation. Rather than perceiving the individual harmonics of a musical tone, we perceive them together as a tone color or timbre, and we hear the overall pitch as the fundamental of the harmonic series being experienced. If we hear a sound that is made up of even just a few simultaneous tones, and if the intervals among those tones form part of a harmonic series, our brains tend to resolve this input into a sensation of the pitch of the fundamental of that series, even if the fundamental is not sounding. This phenomenon is used to advantage in music recording, especially with low bass tones that will be reproduced on small speakers.”⁴

An example of technology that uses this to its advantage is the amazing MaxxBass from Waves. Go to this link to watch a very interesting explanation of how they make this idea work in their technology: http://www.youtube.com/watch?v=Gjwrdlamw-U

³ http://www.izotope.com/products/audio/alloy/
⁴ http://en.wikipedia.org/wiki/Harmonic_series_(music)

Revision test 2

1. Explain what type of sound you get when a sound gets put 180° out of phase with itself. Explain your answer. ______
2. What is the perceptual characteristic associated with the physical characteristic of harmonics? ______
3. Why do harmonics exist within notes? ______
4. Write out the first twelve notes of the harmonic series in C: ______
5. Why is it that bugle players are limited in the number of keys of music that they can play in? ______


As a developing sound engineer it is important to understand how sound will interact with its environment.

Reflection

A surface or other object can reflect a sound wave if the object is physically as large as or larger than the wavelength of the sound. Because low frequency sounds have long wavelengths, only large objects can reflect them. Higher frequencies can be reflected by smaller objects and surfaces as well as large ones. The reflected sound will have a different frequency characteristic than the direct sound if all frequencies are not reflected equally. Reflection is also the source of echo, reverb and standing waves:
• Echo occurs when a reflected sound is delayed long enough (by a distant reflective surface) to be heard by the listener as a distinct repetition of the direct sound.
• Reverberation consists of many reflections of a sound, maintaining the sound in a reflective space for a time even after the direct sound has stopped.

Absorption

Some materials absorb sound rather than reflect it. Again, the efficiency of absorption is dependent on the wavelength. Thin absorbers like carpet and acoustic ceiling tiles can affect high frequencies only, while thick absorbers such as drapes, padded furniture and specially designed bass traps are required to attenuate low frequencies. Adding absorption can control reverberation in a room: the more absorption, the less reverberation.

Clothed humans absorb mid and high frequencies well, so the presence or absence of an audience has a significant effect on the sound in an otherwise reverberant venue. Therefore, when working in live sound don’t think that because you’ve got a great sound when you’re doing the sound check/rehearsal and mixing to an empty room you’ll automatically have a great sound in the concert. You must adapt your mix as people come into the room, as there will be fewer reflections and less reverberation. This is particularly important for music theatre sound engineers, who are generally mixing in smaller rooms on smaller systems than the typical rock concert.

Another thing…

If you keep in mind what you’ve just learned about wavelength (every concept/theory is interlinked!) you can see how much harder it is to absorb low sounds than high sounds. If an absorber (such as a curtain) is not thick enough, bass frequencies will just pass through it. This is also why it is hard to get good directional control over bass frequencies, but also why you can stick a subwoofer (almost) anywhere in the room and it’ll work OK (because people can’t discern where the bass is coming from). It’s also why when you hear a boy racer driving by with their stereo up loud the only thing you hear is the thumping bass!
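To put rough numbers on the “an object must be about as big as the wavelength to reflect it” idea, here is a small sketch; the object sizes are invented for illustration and the rule itself is only an approximation.

```python
SPEED_OF_SOUND = 343.0   # m/s, approximate

def lowest_reflected_frequency(object_size_m):
    """Roughly the lowest frequency an object of the given size can reflect:
    the frequency whose wavelength equals the object's size."""
    return SPEED_OF_SOUND / object_size_m

for name, size in [("0.3 m acoustic tile", 0.3), ("1.2 m panel", 1.2), ("6 m wall", 6.0)]:
    print(f"{name}: interacts down to roughly {lowest_reflected_frequency(size):.0f} Hz")

# The small tile only affects frequencies above about 1 kHz,
# which is why thin treatment does little about a boomy bottom end.
```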


We are now at the final section of the 1.1 evidence requirements ‘range’ – Sound Pressure Level. However, before we get into this we need to discuss units of measurement, particularly the decibel.

Units of Measurement

The first main unit of measurement to get our heads around is the decibel. The decibel is one tenth of a bel and is named after Alexander Graham Bell. There are a few things to get clear straight away:
• The decibel does NOT express an absolute value (in most cases).
• The decibel IS a ratio – the decibel is the ratio between two signals, which can be of actual sound or the electrical equivalent. It is like the term percent: meaningless unless related to some particular quantity.

Why use such a term instead of what may appear a more straightforward percentage or numerical value? The human ear does not respond to increases in sound level in a linear manner, but logarithmically. A sound with twice the intensity of another seems only slightly louder to the ear. The decibel takes account of this by following a logarithmic progression. (We’re about to get really technical so CONCENTRATE!) [Deep breath…]

The decibel is a logarithmic scale based on 10. If a sound is 10 dB louder than another sound, it has 10 times the intensity. If a sound is 20 dB louder than another sound, it has 100 times the intensity. If a sound is 50 dB more than another sound, it has 10⁵ or 100,000 times more intensity. The same kind of system is used for measuring earthquakes (anyone in Christchurch should be an expert on this now!). An 8.0 quake is 10,000 times more powerful than a 4.0 quake. An audio example is that a 100 W speaker will only sound twice as loud as a 10 W speaker.⁵ This is because it is ten times the power (intensity), but only 10 dB louder (twice as loud).

One point that must be understood is that 0 dB does not mean zero signal. It means that there is no difference between two signal levels, i.e. they are of the same level, or it establishes a reference point with which to compare others. For example, if a mic goes through a mixer and is mixed at 0 dB, it is neither boosted nor cut in the mix (this is very important to remember for when we get into discussing unity gain and setting up a mix). Hence 0 dB is fairly high on the sliders and VU meters. Moving a slider up 3 dB will cause a perceptible increase and going up 10 dB will be twice as loud.

Sound Pressure Level (SPL) is also measured in decibels. It is used to measure loudness in an absolute rather than a relative way. How do we perceive SPL? 0 dB SPL (notice we’re now using the term ‘dB SPL’) is defined as the threshold of hearing (although people with very good ears might hear sounds softer than 0 dB SPL). It turns out that 3 dB SPL is about the smallest perceptible change in volume (although some people can hear a 1 dB difference) even though it represents roughly twice the energy. It is generally accepted that a 10 dB SPL change is perceived as twice as loud, even though it has ten times as much energy.
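The intensity ratios in the paragraph above can be checked with the standard power-ratio formula, dB = 10 × log₁₀(A/B). A minimal sketch:

```python
import math

def db_from_power_ratio(power_a, power_b):
    """Level difference in decibels between two powers/intensities."""
    return 10 * math.log10(power_a / power_b)

print(db_from_power_ratio(100, 10))       # 10.0 -> a 100 W amp is 10 dB above a 10 W amp
print(db_from_power_ratio(1000, 10))      # 20.0 -> 100x the power is 20 dB
print(db_from_power_ratio(100000, 1))     # 50.0 -> 100,000x the intensity is 50 dB
```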

⁵ Hirsch, Scott, Pro Tools 7 Session Secrets, Wiley Publishing Inc, 2006, p. 210.

We can see how it all works out when we relate SPLs to common sources of sound, as is done in the following chart:

Situation⁶ – dB SPL
Threshold of hearing – 0 dB
Whisper – 20 dB
Normal conversation – 60 dB
City noise – 70-80 dB
Factories / jackhammer / football stadium at kickoff – 85-110 dB (note: 85 dB is the threshold at which hearing damage can start after eight hours of exposure)
Close to the stage of a rock concert – 110-120 dB (hearing damage starts after 10 minutes of exposure)
Civil defence siren at 100 ft – 130 dB (threshold of pain)
Jet engine at 30 m – 150 dB
M1 Garand being fired at 1 m – 160 dB
Krakatoa volcano eruption⁷ at 1 mile – 180 dB (loudest sound ever historically reported)

Sampling rate & Bit Depth

There are two key terms you need to know about when dealing with digital audio (well, more than two, but we’ll start with these): sample rate and bit depth. Basically, the higher each of these is, the better quality your audio will be in your computer. CD quality audio has a sample rate of 44,100 Hz (or cycles per second) and 16 bits. DVD or SACD quality audio is 96,000 Hz and 24 bit. The very best A/D converters used in high-end digital recording systems from companies like Avid (Pro Tools), Apogee and others generally go up to sample rates of 192,000 Hz (although a few systems now go up to 384 kHz). You should NEVER be recording at a lower quality than CD audio (even if you plan on compressing down to MP3 or AAC for easy web distribution), but as much as possible you should be trying to record at better than CD quality (I generally go for a combination of 44.1 kHz/24 bit or 96 kHz/24 bit). It does make it easier to mix due to increased fidelity.

So what do these terms actually mean? If you’ve completed MUSTEC 3 (27658) you should have a basic understanding of digital audio but I’ll recap here anyway. You can think of sample rate as taking snapshots of the audio at a particular moment in time. A sample rate of 44.1 kHz means that 44,100 snapshots of an analogue waveform are taken every second. You can think of it like frames of film. Generally there are around 24 frames of film per second. If you increase the

⁶ http://en.wikipedia.org/wiki/Decibel
⁷ http://en.wikipedia.org/wiki/Krakatoa

frame rate of film (like in The Hobbit, which was filmed at 48 frames per second) you get a much smoother, more realistic (and truer) viewing experience.⁸

As we discussed earlier in this resource, sound travels as waves (the compression and rarefaction of air molecules). It is the job of a transducer (such as a microphone or guitar pickup) to convert the sound energy (or acoustic energy) to electrical energy. The microphone does this in different ways depending on the type of microphone it is, but basically the vibrations of air are picked up by a diaphragm (or ribbon in ‘ribbon mics’). The vibrating diaphragm creates an electrical signal (voltage), which is passed out of the microphone, down a cable and on to a preamplifier (either in a mixing desk or a recording interface). It is the job of the recording interface (specifically its analogue-to-digital function) to convert the analogue electrical signal to discrete digital data. If we take an analogue audio signal and superimpose a ‘sample clock’ to represent those moments in time when we would take a sample, it could look like this:

In the above graph we see the sample clock represented by the evenly spaced vertical lines. In the next graph we can see that samples are taken each time the waveform intersects the sample clock.

⁸ I recently saw The Hobbit at 48 FPS and I found it really hard to like it – I prefer the lower quality 24 FPS as 48 FPS seemed almost too real. In fact, when I watched the video again on a plane with a small TV at low res, I enjoyed it much more! However, it occurs to me that my complaints about it are the same complaints people made about digital audio in its early days. Is the problem with me, or the format? It will be interesting to see how people adapt to higher FPS in the movie world in the next few years.

In the end we have a digital representation of the analogue waveform that looks like this:

So straight away you can see that the more samples per second, the more accurate the digital representation will be. However, sample rate is only one part of the situation. We also have to look into bit depth. Bit depth determines the number of possible ‘amplitudes’ (or loudness values) a sample can be captured at. As previously mentioned, CD quality audio is 16 bit. This means there are 2 to the power of 16 (65,536) possible values that each sample can be captured at. This may sound like a lot, but 24 bit audio is 2 to the power of 24, which is 16,777,216 possible values (I think you’ll agree that is a much higher fidelity!).
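A quick sketch of what those bit depths mean in practice. The ‘6 dB per bit’ figure used here is the usual rule-of-thumb approximation (20 × log₁₀ 2 per bit), not a property of any particular DAW.

```python
for bits in (8, 16, 24):
    levels = 2 ** bits
    dynamic_range_db = 6.02 * bits     # approx. 20*log10(2) dB per bit
    print(f"{bits}-bit: {levels:>10,} levels, ~{dynamic_range_db:.0f} dB dynamic range")

# 8-bit:          256 levels, ~48 dB
# 16-bit:      65,536 levels, ~96 dB
# 24-bit:  16,777,216 levels, ~144 dB
```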

In this graph each arrow-head represents a ‘sample’. The horizontal lines represent the amplitude of each sample. If this picture represents one second of time the sample rate is 12 (as there are twelve samples being taken) – a very low sample rate indeed! The bit depth is 4 (which yields 2 to the power of 4 or 16 possible values – also very low). If you look closely you may notice that many samples don’t exactly intersect with an amplitude line. If we zoom in we can see it a little clearer here:


The intersecting dotted lines show exactly where the sample should fall. What would happen in reality, though, is that this exact part of the waveform would end up being represented by the amplitude line below it (as that is the closest line), thus slightly distorting the signal and the audio we eventually hear. The solution? More bits (levels of amplitude) are needed, hence the reason why 24 bit audio is much better than 16 bit audio.

EXERCISE: Open up a DAW, import some audio and listen to it in its native form. Now export your audio with a bit depth of 8 (if you don’t have a DAW that can do this, download Reaper from www.reaper.fm and use that). You’ll notice the audio is familiar, but very low fidelity.
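If you would rather hear the effect from a script than export from a DAW, here is a rough sketch of reducing bit depth by brute force. It assumes numpy and floating-point samples in the range ±1.0, and it is only meant to make the ‘staircase’ error obvious – it is not a proper dither-aware converter.

```python
import numpy as np

def quantise(samples, bits):
    """Crudely reduce bit depth: round samples (assumed to lie between -1.0
    and 1.0) to steps of 1 / 2**(bits - 1)."""
    step = 1.0 / (2 ** (bits - 1))
    return np.round(samples / step) * step

t = np.arange(44100) / 44100
original = 0.8 * np.sin(2 * np.pi * 440 * t)
crunched = quantise(original, 4)              # 4-bit makes the error easy to hear/see
print(f"max quantisation error: {np.max(np.abs(original - crunched)):.4f}")
```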

CLOCKS

Above I mentioned a sample clock. The whole idea of clocking can be rather complicated, but from a practical perspective you need to be aware of the clocking functions of your DAW and interface, because if they’re not set up correctly you’ll end up getting pops and clicks in your recordings. If you are using multiple digital interfaces then you will probably need to make sure the “clock source” (i.e. the piece of equipment that is generating the sample clock) is set to an external piece of equipment (i.e. not your computer).

In addition to this, you need to double check that all your interfaces, and your computer DAW session, are all set to be working at the same sample rate, otherwise they won’t talk to each other and you’ll think something is broken (this happens to me a lot when I’m switching between sessions that have been recorded at 44.1 kHz or 48 kHz).
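A tiny sketch of the kind of sanity check this implies – reading the sample rate stamped in each WAV file before importing it into a session. It uses only Python’s standard-library wave module; the file names are placeholders, not real files.

```python
import wave

def report_sample_rates(paths):
    """Print each WAV file's sample rate so mismatches are obvious up front."""
    rates = {}
    for path in paths:
        with wave.open(path, 'rb') as wav:
            rates[path] = wav.getframerate()
    for path, rate in rates.items():
        print(f"{path}: {rate} Hz")
    if len(set(rates.values())) > 1:
        print("Warning: mixed sample rates - resample before combining these files.")

# report_sample_rates(['kick_close.wav', 'kick_far.wav'])
```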


1.2 Technical requirements are identified and documented according to the conventions of the performance context.
Range: documentation includes but is not limited to – annotated script, technical rider, stage plot, input list, cue sheet, track notes. A minimum of one piece of documentation in a form relevant to the performance context is required.

Prior to any major performance event it is vital that you do proper planning. Often events are running to a tight budget and, as we all know, ‘time is money’! Therefore it is essential that the sound engineer practises due diligence and makes sure they have all the equipment they need and they know in advance how they are going to set it all up. One way to assist you to do this is to use documentation such as a stage plot, input list, annotated script, technical rider, track notes, etc. Below are several examples of these. As proof that even the very best people use documentation, here is part of the track sheet for the Michael Jackson Thriller session.⁹


⁹ http://www.soundonsound.com/sos/nov09/articles/swedien.htm

Technical Rider – This is a list of all the equipment needed for an artist or group to be provided by the performance venue. In the case that an artist cannot afford to carry around all of their own sound/lighting equipment and backline they provide a ‘Rider’ to the promoter and venue to have everything provided for them. Examples of technical riders can be found here: http://www.uniqsystems.com/pacodelucia/html/technical_rider.html http://www.jochenroller.de/english/_pdf/TechRider_E_mnemonic.pdf

An excellent website for providing resources for writing your own technical rider (as well as creating stage plots, input lists and providing lists for Backline) can be found here: http://www.30daysout.com/band-tech-rider/band-tech-rider.htm

Stage Plot – This is a diagram of your stage area for a live performance context and where you will place everything on it.

These can be very simple, drawn up with pen and paper. The main use of it is to get other people to do all the heavy lifting and hard work for you while you sit behind the mixing desk and have a coffee!

Input List – This is very useful for making sure you have enough channels/inputs on your mixing desk/recording interface before you get to setting up for your concert/recording session/show etc. An excellent template which I have used can be purchased from here for $1USD: http://www.ultimatetracksheet.com This is a very helpful Excel spreadsheet with microphone/preamps/etc already listed for you to choose from.


In the past when I was working in my own recording studio I made up my own input list as my patching and routing was pretty consistent from session to session. When combined with a DAW template, which had all the tracks premade with all the aux sends (for musician headphones and reverb, etc), this made for a very efficient way of working with new bands on a daily basis.

Track Notes – If you are recording a piece of music that you intend to do multiple takes of (so that you can edit together a perfect ‘comp’ of the best parts of each take) it is essential that you keep good track notes for each instrument. If you don’t note down the good and bad parts of each take as you record them you could end up with a lot of takes to work through later on (which could take a very long time). If your musicians are playing off sheet music a good tip is to have your producer (or other qualified person) follow through with the score and put a minus (-) sign on any areas that are less than perfect. Next to the minus sign they should note what number take it is. Then on subsequent takes, if any areas that have a minus sign are played well, put a plus (+) above it with the take number. Once all the minus signs are accompanied by a plus sign then you know you have enough takes to edit together the perfect comp.

Annotated Script – This is useful for theatre performance contexts (such as a musical or play). This is not to be confused with the technical rider or input list. Basically this is a script of the show in which the sound engineer has written next to each line of dialogue/actor which microphone is to be turned on.


If the show is using a digital mixing desk capable of saving presets/scenes/snapshots then each scene of the performance can have all its microphone levels preset. In this case the script may just say when to change from ‘scene’ to ‘scene’ on the mixing desk.

Here are some thoughts from one of Christchurch’s most experienced Music Theatre sound engineers, Stephen Compton:

A script for one season might have quite different requirements than for another depending on changes to text, type of console, make up of band, number of radio mics involved etc. The Court Theatre Grease script has been hacked around significantly with lines put in and out and songs in different order. I have a stack of old scripts but would still re-annotate for the new season. Are there radio pack swaps between cast? Have you sound effects to play? The type of console and the degree of automating groups, VCAs and scenes will impact greatly on the script annotating process.

Side by Side by Sondheim had three cast and two pianos so the script prep really was only who was singing what line and any SFX.

Anything requiring eight or more usually needs a mic plot as a starting point which details who is part of a scene and which cast are singing in a chorus number.

There is a finger on the fader of each line spoken or sung or on a group of males or females. This can get quite complicated to manage individual lines and digital consoles make this manageable by using scenes with VCAs and programmable mutes.

A script will identify the Scene programmed into the desk, who is saying each line, SFX cues and may have a volume guide for the backing eg -12dB. If the console is one that doesn't show the names of the character programmed in then the script may need to display the name or number of the VCA that controls a channel. Depending on those involved musical notes may be required to accent certain parts. Sometimes a SFX cue is actioned at a point on a score so this score, or part of a score, might need to be part of a script. Maybe somebody shouts or there is a loud noise and a note should be made to lower vol on mics at that point. The more the operator gets to know a show the less of the script he might need. I usually colour the name of the top 5-6 cast with a different colour highlighter for easy and quick identification and tracking through the script.

Preparing a script on an analogue console may mean writing in cue points to put channels in and out of groups for solo or chorus numbers. If there are radio pack swaps eq cues may need to be added.

Here is a part of the Grease track sheet/annotated script as discussed above:


More examples of annotated scripts are available on request from Learning Ideas Ltd.


1.3 Equipment required to process sound is selected, set-up, operated and packed down according to the documented requirements of the performance context, safe working practices and the equipment specifications. Range hardware and/or virtual equipment may be used; set-up may include but is not limited to – tuning a sound system to a room, rigging equipment, sound check, pre-production recording activities.

If you have done SOND 1 (26687) and SOND 2 (27703) then I’m sure much of what is above will be familiar to you. Rather than go into great depth here on each part of a PA or recording system, I’ll provide a 1–2 page summary with links to other Learning Ideas and web-based resources to learn more. What will probably be most useful to you in this section is to watch the demonstration videos (see the end of the book for download links). These will walk you through everything you need to do for your live PA or recording session assessment activities.

Two excellent resources I highly recommend you view, in addition to working through this resource book and viewing my demonstration videos, are:
• Live Audio Basics DVD from Down 2 Earth Resources for live PA setup. http://down2earthaudio.com
• Art & Science of Sound DVD series from Alan Parsons for recording setups. http://www.artandscienceofsound.com
While both are cheesy in different ways, they do a great job of giving you an overview of every part of Live PA and Recording systems. After watching these, watch my videos, which are specifically focused on applying the knowledge from those videos to this unit standard.


MICROPHONES
A microphone is a transducer: it converts one form of energy into another, in this case acoustic energy (i.e. sound, the movement/vibration of air molecules) into equivalent fluctuations in electrical energy. How well it does this determines the quality of a microphone. Three common types of mics:
• Dynamic – doesn’t require any power, hardy, good for high SPL (Sound Pressure Level) instruments (such as amps and drums), but less sensitive at higher frequencies so not great for cymbals (drum overheads), acoustic instruments, vocals, etc.
• Condenser – requires power (battery or phantom power), very fragile, often expensive, but gives an extremely high quality capture of sound.
• Ribbon – extremely fragile and expensive, but with a very ‘warm’, appealing tone.

Generally, mics pick up sound using one or more of the following patterns: cardioid, omni-directional or figure 8.

On mics you will often find a few switches:
• Rolloff – eliminates frequencies below a certain point (usually 60, 80, 100 or 150 Hz)
• Cut/Pad – reduces the output of the mic (typically –10 dB)
• Pattern select – some mics have multiple pickup patterns (such as cardioid, super-cardioid and omni)
A few factors to be aware of:
• Proximity effect – the closer a unidirectional mic (such as a cardioid) is to its sound source (within about 1 foot), the more the bass frequencies are boosted. This does not occur with omni patterns.
• 3-to-1 rule – when using multiple mics, the distance between each mic should be at least three times the distance from each microphone to its intended sound source. This helps to prevent phasing issues (see the short sketch after this list).
• Critical Distance (Dc) – in every room there is a distance (measured from the talker) at which the direct sound and the reflected (reverberant) sound are equal in intensity. In acoustics this is known as the Critical Distance, abbreviated Dc.
• Reading a frequency plot
Stereo mic techniques – These are excellent to experiment with, especially for choirs, orchestras and chamber groups, but can also be excellent for recording live concerts.
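To make the 3-to-1 rule concrete, here is a tiny Python sketch of the arithmetic (purely illustrative; the distances are example values, not part of the standard):

def minimum_mic_spacing(source_distance):
    """3-to-1 rule: mics should be at least three times as far from each
    other as each mic is from its own sound source."""
    return 3.0 * source_distance

# Example: two vocal mics each placed 0.2 m from their singers should be
# kept at least 0.6 m apart to reduce phase cancellation between them.
print(minimum_mic_spacing(0.2))  # 0.6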


MICROPHONE ACCESSORIES
Microphone clips – While something like a microphone clip may seem small and insignificant, when you look at what it has to look after you’ll soon realize it is anything but. A microphone clip costs only a few dollars, but it is charged with looking after a microphone that in some cases may be worth several thousand dollars10. Therefore it is vitally important that you have the correct size clip for the microphone. If the clip is too big, your microphone will be in danger of crashing to the ground (and a condenser or ribbon mic can break after a single drop). If the clip is too small and you force a mic into it, you’ll end up stretching the clip; the next time you come to use it with the mic it was intended for, you’ll find it no longer fits – it is now too big. So take care with your clips and always use the correct clip for each microphone.
Shockmounts – Related to clips are specialized shockmounts for (usually) higher-end condenser and ribbon microphones. These keep your expensive microphones nice and snug on their stand as well as protecting the body of the mic from vibrations that can travel up the mic stand from other instruments and extraneous noise.

Pop Filter – Some mics have these built into their body, but usually these are circular frames of mesh hung in front of the microphone. Usually associated with side-address vocal condenser microphones, these protect the mic from the ‘plosive’ bursts of air produced by ‘p’ and ‘b’ sounds. An essential investment for recording vocals.

Reflexion Filter11 – Not traditionally seen as an essential microphone accessory, but over the last 5–10 years, with the rise of the ‘project studio’ where recordings are made in less than ideal acoustic environments, this particular product has found its way into many thousands of vocal recordings. It does an amazing job of taking the ‘acoustic picture’ of a room out of play, delivering a vocal performance that is ‘dry’ and allowing you to put your own reverb or ambience on the track with effects (hardware or plugins).

For more information on microphones and accessories please see SOND 1 26687 Outcome 2 section.

10 I once spent over $200 on a mic clip. When I complained to the importer that I could buy a whole SM57 for that, he made the point that the mic I was purchasing was worth over $4000, so a specialized clip was a very worthwhile investment at around 5% of the mic price.
11 http://www.seelectronics.com/se-reflexion-filters/

MIXING DESKS
As you should well know by now (as I assume you’ve already completed SOND 1 & 2), the function of a mixing desk is to take multiple signals (from microphones and instruments) and balance them down to a stereo left and right signal (or in some cases mono, such as for signals going to a sub) for delivery to amplifiers and speakers. All (decent) mixing desks (analogue or digital) should have all of these functions on each channel:
• Mic preamp with a gain or trim control
• High pass filter
• EQ
• Aux sends
• Panning
• Channel fader (a potentiometer, used to balance levels)
The main difference between an analogue desk and a digital desk is that on an analogue desk each of these functions is duplicated across every channel, for however many channels you have (for instance, on a 24-channel desk each of the above functions will be duplicated 24 times), whereas on a digital desk there is one group of controls which is applied to each channel by ‘selecting’ that channel (see my live mixing demonstration videos for my use of a digital desk). On a digital desk you are also likely to find compressors and digital FX such as delay and reverb built in.
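To visualise what gets duplicated, here is a rough Python sketch of one channel strip’s worth of controls (the field names are hypothetical, not any real desk’s terminology): an analogue desk physically repeats this set for every channel, while a digital desk shares one set of surface controls across whichever channel is selected.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ChannelStrip:
    """One channel's worth of controls (illustrative names only)."""
    gain_db: float = 0.0                 # mic preamp gain/trim
    high_pass_hz: Optional[float] = None # high pass filter, None = off
    eq: dict = field(default_factory=dict)        # e.g. {"low": +2.0, "mid": -3.0}
    aux_sends: dict = field(default_factory=dict) # e.g. {"monitor 1": -6.0}
    pan: float = 0.0                     # -1.0 = hard left, +1.0 = hard right
    fader_db: float = 0.0                # unity gain at 0 dB

# A 24-channel analogue desk is, in effect, 24 of these sitting side by side:
desk = [ChannelStrip() for _ in range(24)]
desk[0].gain_db = 30.0        # e.g. lead vocal channel
desk[0].high_pass_hz = 100.0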

A 16 channel Mackie mixer with all gain, EQ, Aux, Pan & Faders duplicated 16 times

The ‘Fat Channel’ on a Presonus Studiolive – one set of controls which can be selectively applied to any of the 16 channels on the desk (note the addition of compression controls and a comprehensive EQ section). For detailed guides on using a mixing desk see SOND 1 (26687) Outcome 2 and SOND 2 (27703), including the demonstration mixing videos of the “Brew Remix”.

One vital aspect of operating a mixing desk is being aware of your gain structure, so that you can maintain unity gain through the system.

Setting the gain of your inputs – The gain or trim knob at the top of your mixer is one of the most important controls on the mixer. If you don’t get this right your channels will either be constantly distorting (clipping) or they will not give a clear, clean signal.
1. To set the gain, get the musician/vocalist playing or singing. Make sure they are performing at a volume similar to what they will use in the performance.
2. To start, make sure the gain knob is turned all the way down (fully anticlockwise) and the channel’s fader is set to 0 (note that some mixing engineers will set their faders to around +3 dB here so that after they’ve set the gain control they can pull the fader back and have a little more headroom before clipping). Keep the master faders relatively low.
3. Press the PFL (pre-fade listen) button on the channel and watch the meters for this channel. Make sure all the other PFL buttons on the mixer are off.
4. Now bring the gain for the channel up so that the meters register around –6 to –3 dB. If they go over 0 on the meters the channel will start to distort.
5. Return the channel’s fader to off and repeat this process for the other channels.
6. When you have set the gain for all channels, go ahead and mix the faders to the appropriate levels.
Be careful: if you add any effects through the insert point or boost the EQ on the channel strip, this will alter the gain structure and could push the channel into distortion. If you need to boost some EQ bands you might need to back off the gain a little. You will also need to watch out for instrumentalists who suddenly play much louder in the performance than they did in the sound check. And if the drummer gets louder, the rest of the musicians will probably turn themselves up, upsetting your gain structure. This is why I advised you to set your gain at around –6 to –3 dB, but you may need to back off more depending on the situation.
Gain Structure – Now that you’ve seen how to do this, it is important to understand what gain structure is and why we need to be aware of it. When multiple pieces of electronic audio equipment are used together, the gain structure of the system becomes an important consideration for overall sound quality. It basically refers to how much each piece is amplifying or reducing the signal. A properly set up gain structure takes maximum advantage of the dynamic range and signal-to-noise ratio of each piece in the chain (the signal-to-noise ratio is simply a measurement of the noise level in a device compared to the level of the signal). No one piece does a disproportionate amount of the amplification unless it is designed for that function (such as a mic preamp). An example of poor gain structure would be a setup where a mixer’s master fader is near the bottom while all of the individual channel faders are near the top. The resulting level out of the mixer is the same as it would be if all faders were at some mid setting, but the chances of distortion are much higher because of the limited headroom available in the circuits preceding the master fader, and the signal-to-noise ratio of the final output isn’t as good as it could be if the master fader were at a more appropriate level12. When setting up the gain structure it is important to try to maintain Unity Gain, which basically means keeping levels at 0 dB through the signal path (a simple sketch of the dB arithmetic follows below).

12 http://www.sweetwater.com/insync/word.php?find=gainstructure
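If it helps, here is a small Python sketch of the dB arithmetic behind gain structure (the chain and its numbers are hypothetical, chosen only to illustrate the point): gains in dB simply add along the chain, so two chains can produce the same overall level while stressing the stages in between very differently.

def db_to_ratio(db):
    """Convert a gain in dB to a linear voltage ratio."""
    return 10 ** (db / 20.0)

# Hypothetical chains: [preamp, channel fader, master fader] gains in dB.
good_chain = [40.0, 0.0, 0.0]     # preamp does the lift, everything else near unity
poor_chain = [40.0, 10.0, -10.0]  # channel pushed hard, master pulled right down

for name, chain in [("good", good_chain), ("poor", poor_chain)]:
    total_db = sum(chain)  # dB gains simply add along the chain
    print(name, "chain:", total_db, "dB overall,",
          round(db_to_ratio(total_db)), "x voltage gain")

# Both chains end up at 40 dB overall, but the 'poor' chain hits the circuitry
# before the master fader 10 dB harder, eating headroom and hurting the
# signal-to-noise ratio of the final output.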


AMPLIFIERS, LOUDSPEAKERS & CROSSOVERS
For background on the way speakers work, please refer to SOND 1 (26687) pages 19–22. The following information, while not strictly related to your operation in the assessment activity, is useful to know (especially if you end up blowing up a speaker – now you will know a bit about what went wrong!).
Direct radiating loudspeaker – This type of speaker has a cone attached to the basket around its edges and a voice coil at its centre (described below). The voice coil is basically a length of wound wire, which becomes an electromagnet as electricity is pumped into it. As positive and negative charges are delivered to the voice coil, the coil and the permanent magnet repel and attract each other, forcing the cone to move in and out. Acoustic energy (sound) is the result.
On the left is a close-up of the magnet inside the base of the driver. Notice the hole in the middle for air to travel through. The first ring outside the middle hole is where the voice coil slots in.

Here is a picture of the cone with voice coil and spider as they look when taken out of the basket. The voice coil can be seen on the bottom.

This picture is of the cone with voice coil showing (the dust cap has been taken off). Notice how the voice coil slots in to the magnet.

Here is a picture of the complete driver removed from an enclosure (note that the dust cap is missing).


Crossovers – To produce good quality sound over a wide frequency range, a speaker enclosure needs to contain specialised drivers (usually a tweeter, a mid-range driver and a woofer). To dedicate each driver to a particular frequency range, a speaker system needs to divide the signal into high, mid and low frequency bands. This is the job of a crossover. There are two types of crossover: passive and active. The most common type is the passive crossover. These do not need power and are usually built into the speaker enclosure. A passive crossover uses capacitors and inductors to split the signal from the amplifier into frequency bands, which are then delivered to the appropriate drivers. An active crossover does require power, as it is an electronic device placed before the amplifiers in the signal chain. With an active crossover you can easily adjust which frequencies are handled by which drivers. Once you set the desired crossover frequencies, each band of the signal is passed on to its own amplifier, which powers the corresponding driver. Each driver therefore has its own amplifier channel. This gives higher quality sound, but requires individual amplifiers and cabling for each driver.

The above diagram shows the audio signal flow from the mixer to the crossover, where the signal is split into three separate signals (high, mid and low) which are then fed to their dedicated amplifiers. The signals are then amplified and passed on to their dedicated speakers.
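The same idea can be sketched in software. Below is a rough Python illustration of a three-way split using Butterworth filters (it assumes NumPy and SciPy are installed, and the crossover points of 250 Hz and 2.5 kHz are example values only, not a recommendation for any particular speaker system):

import numpy as np
from scipy import signal

def three_way_crossover(audio, fs, low_xover=250.0, high_xover=2500.0, order=4):
    """Split one signal into low, mid and high bands, as an active crossover
    would before each band is sent to its own amplifier and driver."""
    sos_low = signal.butter(order, low_xover, btype="lowpass", fs=fs, output="sos")
    sos_mid = signal.butter(order, [low_xover, high_xover], btype="bandpass", fs=fs, output="sos")
    sos_high = signal.butter(order, high_xover, btype="highpass", fs=fs, output="sos")
    low = signal.sosfilt(sos_low, audio)
    mid = signal.sosfilt(sos_mid, audio)
    high = signal.sosfilt(sos_high, audio)
    return low, mid, high  # each band would feed its own amplifier/driver

# Example: split one second of white noise sampled at 48 kHz into three bands.
fs = 48000
noise = np.random.default_rng(0).standard_normal(fs)
low, mid, high = three_way_crossover(noise, fs)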

Using speakers & amps
One area that consistently causes issues for inexperienced sound technicians is using speakers and amps incorrectly and causing damage to them. It is VITAL that you follow the operation manuals for your speakers and amplifiers to ensure that they are correctly matched and correctly operated. If you’re unsure, a quick phone call to your local sound hire company could clear up any confusion. If you have active speakers (speakers with the amplifier built into them), as I do, then you have little to worry about – generally the speakers are smart enough to shut themselves down before any damage happens. However, if you’re pairing passive (unpowered) speakers with amplifiers there are a few rules you need to follow.


1. As I said above, carefully consult the manuals for your equipment. If you haven’t yet bought any speakers or amps, consult a knowledgeable salesperson at a well-recognised PA shop.
2. Make sure you get an amp that is at least 50% more powerful than your speakers. For example, if you’re getting a pair of 500 W speakers (@ 4 ohms), your amp should be capable of at least 750 W @ 4 ohms (see the quick calculation below).
3. Never use an underpowered amp. If you were to pair a 250 W amp with 500 W speakers, you would end up driving the amp into clipping, which can damage the amp and, very often, the speakers as well.
There is much more to know about matching speakers and amps (including four-speaker setups), so I recommend watching this video to start: http://www.uniquesquared.com/blog/The+Ultimate+Amp+Matching+Guide or http://www.youtube.com/watch?v=pUou_noD1Gc
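The 50% headroom rule of thumb from point 2 is easy to express as a quick calculation (a sketch only; the function name is made up, and you should always defer to the manufacturer’s manuals and a matching guide like the videos above):

def minimum_amp_power(speaker_rms_watts, headroom=0.5):
    """Rule of thumb: choose an amplifier at least ~50% more powerful
    (at the same impedance) than the speaker's rated power."""
    return speaker_rms_watts * (1.0 + headroom)

print(minimum_amp_power(500))  # 750.0 – a 500 W (4 ohm) speaker wants at least 750 W @ 4 ohms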

In a recording studio situation we generally refer to speakers as ‘monitors’. While in some high-end studios you’ll find passive speakers matched to amplifiers, in the huge majority of studios (including home, educational and project studios) you’ll find ‘near-field’ active monitors. The price and quality vary hugely, but you can now purchase 2- and 3-way monitors for relatively little money. Be prepared to spend at least $1000 on a semi-decent pair. For $2k–3k you can now get a very good pair, but if you want the best in near-field studio monitors be prepared to spend $7k–$10k for something like the Focal SM9s.
BACKLINE
These are the musical instruments provided by hire companies for visiting performing groups. It is usually the gear that is literally in a line at the back of the band! It can include drum kits, amplifiers, keyboards, etc. If a band asks what backline you can provide, they usually mean amps, drums, keyboards and sometimes guitars.


OUTBOARD EQUIPMENT INCLUDING COMPRESSORS, NOISE GATES & EFFECTS UNITS

If you operate in a Live PA environment using a digital mixing desk (as I do), such as the Presonus Studiolive range, then this section is a bit irrelevant, as all your ‘outboard’ equipment needs are met inside the mixing desk (unless of course you want to use a particular type of reverb or compressor not found inside the digital desk – in which case it is relevant!). If you are using an analogue desk, then yes, pay attention! Connecting external equipment to a desk is covered in SOND 2 (27703) in Outcome 2.1 (pages 33–36), so I suggest you read that now. But to recap… there are two main ways of connecting external equipment: using the ‘Insert’ points on each channel, and using ‘Aux’ sends.
Inserts are typically used for processing that is specific to a certain channel. Compression settings, for example, shouldn’t be applied globally: the compression setting for your bass guitar is likely to be wildly different from what you’d use for vocals. Therefore, each channel needs its own compressor connected via the channel’s Insert point13. See the SOND 2 resource for a step-by-step guide to using your Insert points (see below).
Auxiliary sends are typically used for providing effects that are not instrument/channel specific. Reverb is a good example. You generally don’t want separate reverb/ambience settings for each instrument, as the goal of reverb is to provide a sense of consistent space to ‘glue’ the instruments together. You want them to share the same settings (although to varying degrees of ‘wetness’). Therefore, setting up a reverb outboard unit on an auxiliary send (and bringing its processed signal back through another channel or the ‘Aux returns’ on your desk) is a good way to go. Once again, see the SOND 2 resource for a step-by-step guide to using your auxiliary sends.
It is worth noting that the above principles work the same way in DAW recording systems. You shouldn’t set up an individual reverb plugin for each track; instead, have one or two on their own Aux (or Bus) channels, fed by the aux sends from each track.
If you don’t have your SOND 2 workbook handy you can download the relevant pages from here: http://tinyurl.com/sond2insertsandaux
For guidelines and tips on using compression, noise gates and other effects please see SOND 2 (27703) pages 15–30 and watch the tutorial videos linked at the end of this section. You can download these from here: http://tinyurl.com/sond2dynamics

13 So if you have 16 channels running and you want to use compression on each channel you’ll need 16 external compressors! The same is true for noise gates. A digital desk on the other hand has it all built in – I know what I’d rather use!
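Here is a small Python sketch of the routing principle just described (the structure is made up for illustration and does not represent any particular desk or DAW): channel-specific processing sits on inserts, while one shared reverb is fed by aux sends from every channel.

# Hypothetical channel setup: per-channel inserts, plus an aux send level (dB)
# that determines how 'wet' each channel sounds in the shared reverb.
channels = {
    "bass":  {"insert": "compressor (slow attack, 4:1)", "aux_send_db": -20},
    "vocal": {"insert": "compressor (fast attack, 3:1)", "aux_send_db": -6},
    "drums": {"insert": None,                            "aux_send_db": -12},
}

aux_buses = {
    # one reverb shared by everything, returned to the master mix
    "reverb": {"unit": "hall reverb", "return_to": "master"},
}

for name, ch in channels.items():
    print(f"{name}: insert = {ch['insert']}, send to reverb at {ch['aux_send_db']} dB")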

RECORDING DEVICES & AUDIO INTERFACES
This is all covered quite extensively in SOND 1 (26687). If you are doing a Live PA system as the performance context for your assessment you may not need to include a ‘capturing’ aspect (only six of the nine processes are required), but if you are using a recording studio as your context then this section is very important. If you are doing a live PA context and you are using a digital desk, you’ll find it very easy to also capture (record) the performance – see the tutorial video later on to see how this works with my system. If you are using an analogue mixing desk and you want to do a ‘multi-track’ recording you need to make sure that:
1. You have ‘Direct Outs’ on each channel of your desk, or you have insert points (but you’ll need to modify cables for this to work properly – there are guides for this on the internet)
2. You have a recording interface with as many inputs as you have direct outs on your mixing console
3. You have a computer system to record on to
Alternatively, you can record the stereo ‘Master Fader’ output signal to a tape recorder, CD recorder14 or a laptop that has a stereo input (but this means you have to live with the final Front of House mix, which may not be ideal). However, over the last year or two some interesting new recording devices for Live PA systems have appeared. These are rack-mountable systems that can record all the inputs to a USB drive. They offer the peace of mind of a stable system (more stable than your laptop might be!), high quality inputs and A/D conversion, and they make all your audio easy to transfer to a computer. A good example is the JoeCo Black Box15. However, we must not forget the non-computer-based, all-in-one recording device. What they lack in the ability to edit audio they make up for in convenience and reliability. Companies like Roland and Tascam are still producing fantastic multi-track/input systems like the TASCAM DP-2416. While it is difficult (but not impossible) to edit captured audio on these systems, they can still be used to meet the requirements of the SOND 3 unit standard.

COMPUTERS AND DAWS (DIGITAL AUDIO WORKSTATIONS)
In probably 95% of cases in NZ high schools, people are going to be recording in the same way that the majority of the professional recording industry does – onto a computer, using an interface (analogue-to-digital converter) and a DAW such as Pro Tools, Logic, Studio One, Reaper, Cubase, Reason, FL Studio, Garageband or Mixcraft. This has become such a good option because:

14 http://www.sweetwater.com/store/detail/CDRW900SL
15 http://www.sweetwater.com/store/detail/BlackBoxB/
16 http://www.sweetwater.com/store/detail/DP24/

1. Computer power is so cheap now that you can work with many tracks
2. Many great DAWs are free (Garageband, Studio One FREE, etc.)
3. You can upgrade your system piece by piece as your skills outgrow the equipment
What DAW should you use? Initially it doesn’t matter. The important thing is that you’re learning how to record and mix – and you can do this on any DAW. Eventually, however, if you have aspirations of working in a recording studio or in the film industry, you’ll have to learn Pro Tools from AVID. This is without doubt the industry standard. You’ll also see a lot of producers using Logic, Nuendo and Cubase, but Pro Tools is the big one. If you haven’t already, watch the SOND 3 tutorial video from Learning Ideas Ltd, which demonstrates Live PA setup and operation as well as recording, editing and mixing on a DAW (in this case, Pro Tools). A low-res version of this video (1.16 GB) can be downloaded here: http://tinyurl.com/sond3small A high-res version (10 GB) can be downloaded here: http://tinyurl.com/sond3large

SOND 3 VIDEO OUTLINE:
• Setup/rigging of equipment, including microphone choice and positioning
• Tuning the room
• Sound checking a band – including setting the gain (unity gain) and a quick EQ
• Mixing a band for a live performance (while also using the ‘firewire’ outputs of a digital mixing desk to record to Pro Tools on a laptop computer). The mixing process involves balancing the faders; using panning to create a stereo spread; using high pass filters (or low cut filters) to remove unwanted low frequencies from instruments that shouldn’t have them (like vocal microphones); using EQ bands set as peak or shelving filters to cut unwanted frequencies and boost desirable ones; using compression and limiting to reduce the dynamic range; using gating on microphones to reduce spillage/leakage between sound sources; and possibly applying FX such as reverb, chorus and delay to create a sense of ambience or other spatial effects.
• Editing the recorded tracks in Pro Tools (using techniques that can be used in any recording/editing software)
• Mixing the recorded tracks in Pro Tools
• Bouncing down a final stereo mix for burning to CD or exporting as MP3
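As a footnote to the compression/limiting step mentioned in the mixing bullet above, here is a tiny Python sketch of the static gain curve of a simple hard-knee compressor (the threshold and ratio are illustrative values only, not settings from the video): signal above the threshold is pulled down by the ratio, which is exactly what “reducing the dynamic range” means.

def compressor_gain_db(input_db, threshold_db=-18.0, ratio=4.0):
    """Static gain curve of a simple compressor (hard knee, no attack/release):
    levels above the threshold are reduced by the ratio, levels below pass
    through unchanged."""
    if input_db <= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) / ratio

for level in (-30, -18, -6, 0):
    print(f"in {level:>4} dB -> out {compressor_gain_db(level):6.1f} dB")
# in  -30 dB -> out  -30.0 dB   (below threshold: untouched)
# in    0 dB -> out  -13.5 dB   (loud peaks pulled down: dynamic range reduced)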
