The Kino-Eye.com handout collection
Some notes on sound recording, v.3
David Tamés, March 6, 2015

This document is in perpetual beta; please send suggestions and/or corrections to the author at [email protected]

What is sound?

Sound is vibrations in air. A movement, like those of human vocal cords, or perhaps hitting the head of a nail with a hammer, creates waves in the air, much like the waves caused by a rock thrown into water or a duck swimming in a pond. These waves go in all directions, bounce off walls, travel around corners; it's very messy. It's not like camera and light: there is no edge of the frame. Sound from all over mixes in with the specific sound you want to record.

Acoustic energy travels from the source of the disturbance outward as a wave. The air molecules vibrate, yet the air itself is not moving from the source to our ears or to the microphone; instead, the wave energy is carried through the medium by compressions and rarefactions of the medium (usually air, but sound travels through water and other materials too).

When you play back the sound you recorded, the reverse process takes place: the sound you recorded, stored as a digital representation (created through a process known as sampling), is converted back into an analog signal, which is amplified and, in turn, causes the vibration of the speaker diaphragm, which creates sound waves in the air. The vibrations in air don't become sound until we perceive them. They are never exactly like the original; something always changes in the process, and noise and distortion are always added along the way. We try to minimize that using good techniques and good gear.

Characteristics of sound

The unique patterns of air pressure variations or sound reflections produced by a particular disturbance are known as waveforms. The simplest waveform is a perfect sine wave; actual sounds, however, have much more complex waveforms made up of a combination of the fundamental frequency and a series of overtones (a.k.a. harmonics). More on this later in this handout.

Sound is often measured and described using two distinct reference points: 1. our psychophysical perception of sound (as perceived by our ear and brain), and 2. quantification of sound phenomena with measuring devices. Psychophysical terms such as loudness, pitch, and timbre describe the human perception of the same acoustical phenomena as amplitude, frequency, and everything else that can't be quantified as amplitude, frequency, or harmonics. We'll use the following terms to describe our perception of sounds:

Pitch is our psychophysical perception of high and low frequency sounds corresponding to frequency, an acoustic measure of the number of cycles per second of a waveform.

Loudness is our psychophysical perception of the air pressure in sound waves corresponding to amplitude, an acoustic measure of the air pressure of sound waves quantified in decibels (dB).

Timbre (a.k.a. tone color or tone quality) is the psychoacoustic phenomenon that distinguishes different types of sound, such as musical instruments, voices, and sound effects. We use timbre to describe what makes a particular sound different from another sound, even when they have the same loudness and pitch. For instance, it is the difference between a piano, guitar, and synthesizer playing the same note at the same loudness. Audiophiles often use the term to describe the subjective characteristics of one loudspeaker in comparison to another. The objective acoustic measures corresponding to timbre are quite complex and include the following: the spectral envelope, the time envelope (attack, decay, sustain, and release), changes in the spectral envelope (formant-glide), changes in the fundamental frequency (micro-intonation), and several other factors.

Harmonics (acoustic) are the mixture of a fundamental frequency and multiples of that frequency that, when combined, contribute to a particular musical instrument or sound effect having a unique aural signature. Harmonics are just one of many factors that contribute to timbre.
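To make the relationship between pitch, harmonics, and loudness a little more concrete, here is a small numerical sketch (not part of the original handout; the sample rate, fundamental frequency, and harmonic amplitudes are arbitrary choices). It builds one second of a 440 Hz tone from a fundamental plus two overtones, then reports its peak level using the standard decibel relationship, dB = 20 × log10(amplitude ÷ reference).

    import math

    SAMPLE_RATE = 48000           # samples per second (arbitrary choice for this sketch)
    FUNDAMENTAL = 440.0           # Hz; the frequency we perceive as the pitch
    HARMONICS = [1.0, 0.5, 0.25]  # relative amplitudes of the 1st, 2nd, and 3rd harmonics

    def sample(t):
        """Sum the fundamental and its overtones at time t (in seconds)."""
        return sum(a * math.sin(2 * math.pi * FUNDAMENTAL * (i + 1) * t)
                   for i, a in enumerate(HARMONICS))

    # One second of the waveform, sampled SAMPLE_RATE times
    waveform = [sample(n / SAMPLE_RATE) for n in range(SAMPLE_RATE)]

    # Express the peak amplitude in decibels relative to the largest possible value:
    # dB = 20 * log10(amplitude / reference)
    peak = max(abs(s) for s in waveform)
    reference = sum(HARMONICS)
    print(f"peak amplitude: {peak:.3f}")
    print(f"peak level: {20 * math.log10(peak / reference):.1f} dB relative to full scale")

Changing the relative values in HARMONICS changes the timbre of the tone without changing its pitch; scaling them all up or down together changes only the loudness.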
Phase refers to the relative displacement in time between two sound waves. Multiple sound waves will amplify each other; the amount of reinforcement depends on their relative phase (at what point in the cycle the peaks occur). Two identical waves half a wavelength out of phase (180˚) will cancel each other. Sound waves in the real world are more complex than ideal sine waves, so the cancellation and reinforcement occur at different frequencies and thus change the characteristics of the sound as well as the perceived volume. Phase issues play an important role in room acoustics, microphone placement, and microphone design.

The books, websites, and manuals, as well as the professionals you encounter as you study sound, will use psychophysical and acoustic terminology interchangeably, which can create confusion when you're first starting out. Knowing the relationships between the terms can help. Robert Erickson (1975) published a table of subjective experiences and their related physical phenomena.

Some of these terms are beyond the scope of this handout, though if you're interested in the science of sound they are definitely worth looking up. We will focus on the terms most commonly used in sound recording and sound postproduction for film and video.

The audio recording signal chain

The vibrating air molecules create a wave through which the sound energy moves through the air, then:

1. the wave of sound energy leads the air molecules surrounding the diaphragm in the microphone to move, and these movements are converted into an analog electrical signal by the microphone,

2. the analog electrical signal (an audio signal) is fed through a microphone cable into the camera,

3. the camera's audio input circuits convert the analog audio signal into a stream of digital bits through a process of analog-to-digital conversion, and

4. the stream of bits is stored in a file using a particular format (e.g. WAV) on a storage medium (an SD card when using the camcorder or portable audio recorders).
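As a rough illustration of steps 3 and 4, the sketch below (not from the handout) uses Python's standard wave module to quantize a synthetic signal to 16-bit samples at 48 kHz and store it in a WAV file. It is a simplified stand-in for what the camera's input circuitry and file format actually do; the test-tone frequency and the file name are arbitrary.

    import math
    import struct
    import wave

    SAMPLE_RATE = 48000   # 48 kHz, the video production standard
    BIT_DEPTH = 16        # bits per sample
    DURATION = 1.0        # seconds
    FREQ = 440.0          # Hz; an arbitrary test tone standing in for the mic signal

    frames = bytearray()
    for n in range(int(SAMPLE_RATE * DURATION)):
        # "Analog" value in the range -1.0 .. 1.0 (here a synthetic sine wave)
        analog = math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE)
        # Quantize to a signed 16-bit integer: the analog-to-digital conversion step
        digital = int(analog * 32767)
        frames += struct.pack("<h", digital)

    # Store the stream of bits in a WAV container (step 4 of the signal chain)
    with wave.open("tone_48k_16bit.wav", "wb") as wav_file:
        wav_file.setnchannels(1)               # mono
        wav_file.setsampwidth(BIT_DEPTH // 8)  # 2 bytes = 16 bits
        wav_file.setframerate(SAMPLE_RATE)
        wav_file.writeframes(bytes(frames))

At 48,000 samples per second and two bytes per sample, one minute of mono audio comes to roughly 5.8 MB; that storage cost is what compressed formats like AAC and MP3 trade against quality.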
With most video cameras the audio and video are encoded into distinct formats, but in most cases they are stored in a single file combining the audio and video. Most prosumer camcorders record audio in the video production standard (16-bit/48 kHz) and don't offer a choice of audio formats, while the Canon VIXIA we're using uses the AAC compressed audio format. Digital audio recorders can be set to record to many different formats. I suggest choosing uncompressed WAV ("wave") with 16-bit sampling at 48 kHz. You should avoid compressed formats like MP3 for original recordings.

Playback reverses the process of recording, sending the signal through a media player to an amplifier and then to speakers, headphones, or earbuds, which re-create the original vibrations, eventually translated into the perception of sound by our nervous system.

We need not go deep into the physics or neuroscience to appreciate this. What's important is to understand the essence of sound: think of it as one of the materials you work with; you shape and mold it to create your work. It's a medium just as paint is a medium, but the cleanup is usually simpler when you spill it.

Microphone form factors

There are three microphone form factors commonly used in video production: handheld, lavalier, and boom-mounted microphones (which may also be used hand-held in a pistol grip or mounted directly on the camera).

Microphone capsule technologies

A microphone may employ one of several technologies for translating vibrating air particles into electrical energy that in turn is fed into your camcorder or audio recorder through the microphone cable. The two most widely used microphone capsule technologies in video production are dynamic and condenser.

Dynamic microphones

Sound pressure levels move a diaphragm connected to a moving coil; the movement of the coil within a magnetic field translates the sound into a voltage, and no additional circuitry is required. Physics: the movement of a coil within a magnetic field generates electricity. This is the inverse of a moving coil and magnet in a speaker moving the cone when an electrical signal is applied; a speaker works the same way in reverse.

Advantages:
• Inexpensive when compared to condenser microphones
• More rugged than condenser microphones

Disadvantages:
• Limited to close-proximity vocal recording applications; for example, reporters' microphones and vocalists' microphones are most often dynamic in design, though there are some condenser models available.

Condenser microphones

In a condenser microphone, sound pressure levels move a diaphragm, and this movement is translated into capacitance (the diaphragm is placed between charged plates). Electronic circuitry and a power supply are required, often provided by phantom power, which a mixer or camera supplies to the mic over the same wires as the signal.

Advantages:
• More sensitive and more accurate than dynamic microphones

Disadvantages:
• More expensive than dynamic microphones,
• More sensitive to extreme environmental conditions, and
• More prone to handling noise (requires the use of a shock mount).

Handheld microphones

Handheld microphones are typically dynamic, but condenser models do exist. These are good for use in high-noise environments when close placement is possible, and are typically omnidirectional (e.g. Electro-Voice RE50) or cardioid (e.g. Sennheiser MD46) in terms of their pick-up pattern (see the pick-up pattern sketch below). Some are designed to reduce the effect of handling noise. The Media Studio has a variety of handheld microphones you can check out.

Lavalier microphones

Lavalier microphones are typically condenser and are good for use on subjects or actors when boom, handheld, or camera microphones will not do the trick in terms of proper placement.
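Pick-up pattern, mentioned in the handheld microphone paragraph above, describes how a microphone's sensitivity varies with the direction a sound arrives from. As a rough, idealized illustration (not from the handout, and not a measurement of the specific mics named above): a textbook omnidirectional capsule responds equally from every direction, while a textbook cardioid's sensitivity falls off with the angle from the front of the mic as (1 + cos θ) / 2, reaching a null directly behind it. The sketch below tabulates the two patterns; real microphones only approximate these ideals, and their patterns vary with frequency.

    import math

    def omni_sensitivity(angle_deg):
        """Ideal omnidirectional pattern: the same response from every direction."""
        return 1.0

    def cardioid_sensitivity(angle_deg):
        """Ideal cardioid pattern: full response on-axis, a null at the rear (180 degrees)."""
        theta = math.radians(angle_deg)
        return (1.0 + math.cos(theta)) / 2.0

    print(f"{'angle':>6} {'omni':>6} {'cardioid':>9}")
    for angle in (0, 45, 90, 135, 180):
        print(f"{angle:>5}° {omni_sensitivity(angle):>6.2f} {cardioid_sensitivity(angle):>9.2f}")

The table this prints shows why a cardioid handheld rejects roughly half the sound arriving from the sides and nearly all of it from behind, while an omnidirectional mic like the RE50 picks up the room more evenly.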