
Performative Listening

A Cultural Anatomy of Studio Sound Enterprise

Name: Steven Vrouwenvelder

Student number: 6049400

Thesis rMA Cultural Analysis

Supervisor: Dr. Timothy F. Yaczo

Second reader: Prof. Dr. Julia J.E. Kursell

Date: 15-06-2016


Contents

Acknowledgments
Introduction
    Listening
    Cultural anatomy
    Performativity
    Translation
The External Ear: Creation
    Tuning
    Modes of listening as ideality
    Forward listening
    Verstehen and forelistening
    Construction of roles
    Directed listening
    Ascoltando
    Guide
The Middle Ear: Recording
    The Microphone
    Assistive media
    Resistive media
    The Room
    Construction of the self through listening
    Construction of SSE-Noord
The Inner Ear: Processing
    Imitative Devices
    Echo, reverb and delay
    Is imitation listening?
    Reflexive listening
    Non-human listening
The Nervous System: Mixing
    Arrangement
    Psychoacoustics
The Brain: Beyond SSE-Noord
    Reduced listening
    Entendre and Reduced Listening
    Reduced Listening at SSE-Noord
    Detached listening
Conclusion
Works cited

Acknowledgments

For the realization of this thesis, I was dependent upon the cooperation and assistance of many people. First of all, I would like to thank Frans Hagenaars, because without his permission I could not have observed and analyzed my object. I would like to thank all of the band members and musicians who I was allowed to observe and interview: Aart Schroevers, Annita Langereis, Arnold Lasseur, Bart van Strien, Ben Bakker, Berend Dubbe, Brian Pots, Danny Vera, Erik Kriek, Josephine van Schaik, Peter Peskens, Pyke Pasman, Reyer Zwart, Robert-Jan Kanis, Sonny Groeneveld, and Sophie ter Schure. I am very grateful to M.D. Emil den Bakker for introducing me to the basics of the human ear and for lending me his textbooks. Of course, I also wish to thank my supervisor, Dr. Tim Yaczo, and my teachers at the cultural analysis and musicology departments, especially Prof. Dr. Julia Kursell, for inspiring me and stimulating me to keep improving my thesis. Lastly, I would like to thank my girlfriend, Iris Gadellaa, for her inexhaustible support.


Introduction

It is cold—February cold—outside Studio Sound Enterprise, but inside a “desert” song is about to be recorded. Frans Hagenaars, Ben Bakker, and Reyer Zwart listen to the demo which today’s artist, Danny Vera, has recorded. When thinking of this song, Danny imagines a cowboy on a horse in the desert. In the future, he wishes to perform this song with three people live for the radio.

Frans, who is directing this session, reminds the musicians that they are currently recording. He means that Danny can always strip down the song for live performances. Frans suggests that the synthesizer heard in the demo can be replaced with high strings. He reckons further that they should add double trumpets to attain the stereotypical Mexican sound.

Danny plays a soundtrack by Ennio Morricone and tells Ben that he is looking for a similar sound. Ben, as a regular session drummer, knows how to get that result. He explains, but the others do not fully understand. Ben gets the chance to record so the others can hear what he means. They decide to record a basic take. The musicians record with a “click track”1 to maintain their tempo. In this way, they can replace every recorded track with another later. After recording, they listen to the result, but no one is enthusiastic about what they hear. Danny tells Ben that the result is too “military,” so Danny does not want to continue in this direction. They decide to record a new basic take. This time, Ben plays with brushes and Danny trades his acoustic guitar for an electric one.

Listening to this new basic take makes Danny happy. This is the take he wants to work with—this take evokes the picture in Danny’s imagination.

When I first walked into Studio Sound Enterprise (SSE-Noord) in Amsterdam, I was surprised by the construction of the building. When I looked down the hall I saw several rooms with microphones in them, but I needed to go upstairs to see the control room: the space with the recording equipment and the mixing console. In the SSE-Noord control room, one can only aurally communicate with the musicians, who are located downstairs. These musicians, for their part, each record in separate rooms. SSE-Noord does not match the popular preconceptions about a music studio’s layout: a producer sitting behind a window telling musicians to record the song. In contrast to this popular image, I started to perceive the studio as a site of listening instead of documenting.

1 The click track is a metronome that makes an audible click on each beat to indicate the tempo.

With my limited understanding of the human ear, I conceived the recording room as the pinna and the control room as the processing brain. The connection between rooms only exists in electronic and digital media, just as the eardrums connect the outside world with the internal receptor apparatus. In this thesis, I expand and elaborate on this analogy between human perception of sound via the ear and the perception of musical sound in the studio.

I asked Frans Hagenaars, who owns and operates the studio, what “studio” meant for him. He explained that a studio should be a place where an artist can easily record music, and where the result should instantly sound good (Hagenaars, 3 Mar.). Surprisingly, this resonated with my preconceptions, but SSE-Noord showed me more, as the anecdote above exemplifies. I observed more listening to music than simply the playing and recording of it. Many modes of listening were employed: listening to the demo, to Morricone, to each other’s playing, to the click track, and finally to the recorded take. These modes of listening are not clear-cut or ready-made. The band’s inability to understand the drum part Danny wanted shows that listening can fail. People practice and test their listening at the studio. This goes beyond the “easily recording” and “instantly sound good” that Hagenaars mentioned.

The people and equipment involved at the studio listen to each other in order to come to a final and definitive listening. “Listening” is a gerund—a verb that takes the form of a noun. It is the holistic and subjective sound reception experience with a strong connection to its verb “to listen”: the action that results in this experience. The result of a recording session is the construction of an ultimate listening—the piece of music in its recorded state. This challenges the concepts of song and composition, because both presume a fixed identity: a stable unity between performances. This limited view reduces the operations at a studio to mere documentation. I examine the multitude of listenings that precede the ultimate listening. This approach emphasizes that music changes between listenings and that its identity is absent, or at least unstable. The human auditory sense has a special agency that is called listening, because it translates air pressure into mechanical movement and finally into electric neural information in a way that is unique to every individual. This function is mirrored by the different instances of listening during the process of recording. Listening performatively establishes a specific situation that shapes and directs the recording of music. In this thesis I investigate these situations through what I call a cultural anatomy, because I divide the listening that happens in a studio into stages just as an anatomist dissects the ear. Only after such an operation can the different functions that the parts perform be investigated. This thesis analyzes how listening functions as a performative agency at Studio Sound Enterprise.

Listening

Sounds can be perceived because they make the eardrums vibrate. In other words, sound only sounds when it is heard. Hearing is often understood as passively receiving sound, while listening is oriented or focused towards a particular sound source (Herbert 1-3). Musicologist Ruth Herbert points out that people conceive of this dichotomy as a hierarchy in which active perception is deemed the proper engagement with music. This hierarchy implies that sound conceals a universal understanding only to be grasped through listening. I rather see listening as a performative engagement with music: the listener attributes meaning to music by listening to it in different ways.

One generally does not listen to a click track for inspiration, just as one does not listen to an Ennio Morricone composition simply to keep the right tempo. However unlikely, one could listen in these ways. One changes the Morricone piece by listening to it in different ways. I call this a mode of listening. Sound comes into existence through reception, whether active or passive. The comprehension of sound and the realization of a listening, on the other hand, vary according to modes of listening. This thesis, then, is an investigation into modes of listening at SSE-Noord.

An increasing amount of scholarly work on listening has been published in the past decade (Carlyle and Lane; Herbert; Kane; Nancy; Sterne; Szendy; Tuuri and Eerola; Voegelin; Wolvin).2 The literature comprises listening-based ontologies and phenomenologies and also explores everyday encounters with listening. Communication scholar Jonathan Sterne, for example, explores how the MP3 format and people’s distracted listening fuel each other. Others (e.g., Nancy and Voegelin) explore the way listening provides a perception of the world that is markedly different from seeing.3 In both cases, scholars figure the listener as a universal subject.

These elaborations on listening focus mainly on the receiving end of perception. Although they enhance understanding about human listening, they largely fail to explore how listening enhances and alters one’s understanding of the world. An exception is the French philosopher Peter Szendy, who investigates listening agency throughout European history as he examines several specialist listeners.4 He thereby deviates from the assumption that listening is universal, but rather sees it as a subjective appropriation of a musical work (Szendy 8). The listeners that Szendy discusses are specialists because they have the agency (i.e., potential) to communicate their listening. In other words, their modes of listening create a new listening for others. Szendy explores when and where these listenings were conveyed as they went beyond the individual experience. For example, the arranger (Szendy 35-68) is the one who interprets (i.e., listens to) a musical work and rewrites it for another (set of) instrument(s). In other words, the arranger can write down a listening, allowing others to listen to her or his listening.

2 This list contains several books and articles that I address in this thesis or that are exemplary for the general tendency of listening research. Tuuri and Eerola as well as Wolvin are included because they reviewed much of the pertinent research. They show that it is mainly psychological(ly informed) and formulated in universal terms.
3 I elaborate on this in chapter 2.
4 He discusses judges, composers, arrangers, and DJs as specialist listeners.

In the introduction, Szendy wonders, “Can one only make one’s listening heard by rewriting, by radically crossing out the work to be heard? Can one adapt, transcribe, orchestrate, in short arrange in the name of the work?” (Szendy 7).5 Surprisingly, Szendy does not investigate the production of music at a recording studio.6 For many artists, a studio is the place to carry out a listening and to preserve it for eternity. In contrast to the individual specialists Szendy discusses, the listening established in the studio is the work of multiple actors. The studio is a unique situation that provides this opportunity. This is evident from listenings done outside the studio; for instance, live concert music is different from studio recorded music. These divergent perceptions, even or especially with the same song, are the result of different modes of listening that direct the music at these locations. This thesis intervenes to hear what was previously silent in Szendy, by examining how SSE-Noord provides the space to develop a communicable listening. This thesis contributes to an understanding of listening as specialists at a recording studio practice it. It also enhances the general concept of listening, because I emphasize the non-universality of listening.

Cultural anatomy

During a conversation with a friend, who is a medical doctor, it occurred to me how many functions of the human ear resemble operations that take place at SSE-Noord. These functions of the ear are optimized for proper hearing and enhance the audibility of the auditory world. This perspective revealed that the diverse actors at SSE-Noord were connected, because they were all oriented towards the audibility of music. This realization stimulated me to investigate the analogy between the ear and SSE-Noord. The auditory organ of the ear and the recording studio are both widely perceived as a unity. Hagenaars states that an artist comes to the studio to crystallize a song that the artist rehearsed before entering the studio (5 Oct.). This view obscures the creative process that music undergoes during a recording session. I look at the studio as a listening space, instead of just a site to record and document songs. In order to understand the creative processes that take place at SSE-Noord, I will perform a cultural anatomy. Like the anatomist who dissects the ear, I will cut the studio into parts. Sound travels from the external ear through the middle and inner ear, via nerves into the brain. Each of these stages performs a crucial part of the process of hearing and listening. Although the functions of each part are described, they only function together. The process may appear unidirectional, but in fact, the parts send information back and forth within the ear.

5 Emphasis in original.
6 Maybe it is his predilection for so-called European classical music that causes him to neglect the recording studio, because there still seems to be a dubious relation between recording and this musical tradition.

Ear dissection allows the parts, distinguished on the basis of a connected function, to be exhibited. Similarly, at SSE-Noord, modes of listening with related functions can also be separated from one another. My cultural anatomy distinguishes five components that correspond to the parts of the auditory organ: creation, recording, processing, mixing, and beyond the studio, respectively. Although it may appear so, the order of chapters here does not represent the progression of recording, but correlates to clustered modes of listening. The modes of listening I examine are active throughout the recording process, but are emphasized in the separate stages that I distinguish.

Performativity

The cuts of this cultural anatomy are “agential cuts,” as formulated by Karen Barad (815), because actors (musicians, equipment, etc.) perform these cuts. Barad contributes to the discussion between linguistic performativity (an entity is because it is called accordingly) and material performativity (an entity is because it acts accordingly), while seeking to emphasize the importance of the latter. She argues that performativity is “linked not only to the formation of the subject but also to the production of the matter of bodies” (808). She relates this to Judith Butler’s gender performativity: “a set of free-floating attributes, not as an essence—but rather as a ‘doing’” (808n8). In other words, a perceived entity exists as an entity because it acts accordingly, not because it is. Actions divide subjects from objects. This division is what she calls an agential cut, because it separates matter through the agency of the subject. Similarly, modes of listening separate the greater process of recording into stages.

The cutting subject is the thing that acts and is thus called the actor. The object is the thing influenced by this action. I identify constellations of such actions as the role. For example, in the relation between me (my physical body, my notebook, and my background) and SSE-Noord (Frans Hagenaars, its equipment, and the musicians involved), I perform the role of the researcher, and the studio is the object of study. Performativity is acting according to a role and thereby establishing a relation between a subject and an object. Agency is the potentiality of actors to perform or act. I investigate listening’s agency: actors distinguish stages in the recording process through different modes of listening.

Caleb Stuart researched performativity in relation to live performances of so-called “laptop music.” He claims that audiences can hardly relate to performances where only a laptop is present (Stuart 59). However, they could relate if they focused on listening instead of watching the live performance (Stuart 64). He argues that these performances blur the line between documentation and performance. In this thesis, I am arguing for a similar shift, although I concentrate on the recording process. I argue that the recording process is performative rather than documentary. With this shift, one’s engagement with the recording changes. When recording is conceived as a performance, those involved in the recording are actors and their operations are performative.

The main actor in this research is Frans Hagenaars, the owner and operator of the studio. He acts according to the roles of sound engineer, mixer, mix engineer, and producer (though he prefers the word “director”) simultaneously. In Dutch, one can distinguish between a producer and a producent, but both translate as “producer” in English. I use director to indicate the latter, as it refers to the actions related to organizing, enabling, and directing the recording sessions. According to Hagenaars, a producer is associated with a virtuoso multi-instrumentalist who has a strong opinion on sound and production, while a director collaborates with the artist (3 Mar.). One can detect from this last statement that these roles are not identities but rather temporary relations. Hagenaars acts as a director, or rather steps into a directing role, only at specific moments.

I elaborate on these relations among actors throughout this thesis, as these are not fixed at a certain stage. The relationality of events and actors is inherent both to the creation of music and to the process of doing fieldwork. This thesis reflects a research period (November 2015 – February 2016) during which I listened to and at SSE-Noord. Over this time, I listened to Frans Hagenaars, Danny Vera, Ben Bakker, Reyer Zwart, The Mysterons, Blue Grass Boogiemen, and Berend Dubbe. This thesis only pertains to the relation between the events, the actors, and myself in that segment of time.

Translation

Translation is switching from one mode of comprehension to another. If someone is unable to understand Dutch, I switch to English to express myself. Similarly, I translate my listening into written words in this thesis. Media theorist Marshall McLuhan stressed that translation is the primary force of knowledge: “… when we say that we daily know more and more about man. We mean that we can translate more and more of ourselves into other forms of expression that exceeds ourselves” (63).7 McLuhan uses the phrase “extensions of man” to denote media (7). This phrase performatively constitutes the human body as an acting subject and the media as objects which are acted upon.

Media extend human organs—for instance, the telescope extends the eyes. This extension is a type of translation, as sight is translated from lens to eye. In this process, human sensory organs become perceived unities. As I mentioned above, this is only a perception; by examining the ear more closely, one could discover the process of translation inside the organ. The sensory organs work so well that one often forgets they are made up of tiny parts which have developed to work in harmony. The ideas of media as translating machines and media as extensions of human beings clash with each other, because translation happens inside the human body. The studio is also perceived as a unity, but inside the studio are several machines for translation. I use the ear as an active metaphor alongside my examination of the studio. I translate between them in order to enhance the understanding of the studio, the human ear and, ultimately, the concept of listening.

7 Emphasis mine.

Sounds are recorded by translation from air pressure, to electric current, to digital code, back to electric current, back to air pressure, and, finally, to the sounds that knock at human eardrums where they, again, set a chain of different media in motion. In this procedure, translation is another word for listening. Rather than the passive reception of sound, listening is a performative agency, because it potentially affects information as it translates from one mode of comprehension to another. Listening agency establishes a relation between the listener and the entity that is listened to. Different modes of listening invoke different relations. The studio is the specialist listening space, because it provides the possibility to perform listening agency and ultimately to establish a listening.

In contrast to Szendy’s individual specialists, the studio is an assemblage of specialists that employ modes of listening. Translation takes place in order to communicate between the specialists. During the recording process, translation changes and arranges sounds. The process only exists because of listening. This thesis is an investigation into modes of listening and how they affect and enable the process of creating recorded music. This analysis deviates from common assumptions, showing the inherent multiplicity both of the studio and of the ear.


The External Ear: Creation

I sit downstairs in the main recording room of SSE-Noord, where the musicians rehearse the song that they are about to record. On the opposite side of the room, drummer Ben Bakker and bassist Reyer Zwart discuss the order of recording with an absent third person. This person could be Danny Vera, who is in the adjacent room, or Frans Hagenaars, who is upstairs. I think about the session that I witnessed last week. Without the use of headphones, the Blue Grass Boogiemen recorded their songs with four musicians and two singers in this same room. At that time, I sat behind banjo player Bart van Strien. Despite the penetrating sound of his instrument, I could hear everyone and thus get an idea of the song. This time I hear only drums, for the bass is directly plugged into the system. Luckily, Frans brings me a set of headphones so that I can listen in. Now I can confirm that Ben and Reyer are talking with Danny. I hear the guitar, the bass, and each person’s voice because of my personal headphones. The sounds are interrupted by a telephone-like voice that asks whether the musicians are ready to record. A loud metronome turns on and the musicians start to play.

Danny Vera and the Blue Grass Boogiemen exemplify the variety of sounds that enter the recording system of SSE-Noord. I consider the process by which these sounds are directed into the studio’s recording system. First, the musicians bring their ideas and capacities from outside; ideas can only enter the recording system when played in front of a microphone. The microphone represents the eardrums of the studio. The ear consists of different compartments through which sound is transmitted and transformed. This transmission starts at the eardrum: the membrane that separates the inside from the outside. The external ear directs sound to the eardrum (Widmaier, Hershel and Strang 240-41). Before the eardrums can vibrate along with the sounds played by the musicians, the sounds have to be created. In this chapter, I discuss several modes of listening that perform the primary stage of recording: tuning, forward listening and directed listening. These are all modes that must first occur before airwaves hit the microphone membrane, causing sounds to enter the recording system.

Tuning

Before musicians can start playing, they have to tune their instruments. This means that listening begins before playing and thus before recording. Before they enter the studio to record their songs, artists exchange pre-recorded material (i.e., demos) and ideas with Hagenaars. In a subsequent meeting, Hagenaars and the artists discuss what kind of songs are to be recorded, how many musicians will participate, how well they know the music, and how many days they plan to work together. Though it appears to be a formal arrangement, the primary aim of such a meeting is to feel whether there is a good connection, or rather what kind of connection is appropriate. This stage, preceding the recording, is necessary for all actors to attune to one another.

If two strings are well attuned, one string, when struck, can cause the other to vibrate. The unstruck string sympathizes with the other, resulting in a richer sound. A recording can have a similarly rich sound if the actors sympathize with each other. When musicians are in tune with each other, both literally and metaphorically, they perform well together. Berend Dubbe, for example, first recorded his album at home and later reached out to Hagenaars to mix it. They were able to start mixing immediately because Hagenaars was in the right mindset and well prepared for this project.

Hagenaars seems able to adjust his mindset to every musician he works with. Artists praise his ability to make everyone feel comfortable and to set the right atmosphere for the specific session of the moment. Tuning is necessary for sympathizing—it is a primary condition for making music together.


Modes of listening as ideality

From the preceding paragraphs, one might conclude that Hagenaars is able to fully sympathize with an artist. While in practice this may work, it is theoretically impossible. The French philosopher Jean-Luc Nancy pointed out that in order to transfer information undisturbed there must be a commonality between receiver and sender (50). Between two strings of equal length and thickness, there is a commonality. Therefore, the struck string communicates undisturbed with the unstruck. The latter sympathizes with the former. This indicates that in every other situation, there is always some level of disturbance or noise in communication. Consequently, undisturbed communication is an ideal form of listening. By “ideal” I do not indicate that which is highest in a hierarchical order, but rather a state which can never be met and only strived for. Hence, listening is not a univocal essence, but a relation between practice and an ideal put into action. This relation differs from mode to mode, because the actual listening is practice-dependent and not the ideal of undisturbed communication. Hagenaars’s listening, at this earliest stage of the recording process, aspires to feel (sympathize) with the recording artist in order to collaborate with them.

Forward listening

When an artist enters the studio, she or he cannot record immediately. The artist, the director, and the session musician have to start sympathizing again, this time specifically for the music of the day. Session musicians Bakker and Zwart frequently mentioned “vooruit luisteren,” or “forward listening” (as I will call it henceforth) as one of their primary qualities (23 Feb.). In forward listening, musicians listen to grasp the idea of the piece of music. Through this technique, musicians claim to hear more than is already there and especially to hear what is possibly still to be recorded. I distinguish two different modes of forward listening—in one, actors can hear from a musical gesture what an artist wants to convey, while in the other, they hear how to get there. Musicians need forward listening before they start recording, because everyone needs to be on the same page.

Verstehen and forelistening

“Een goede verstaander heeft genoeg aan een half woord” is a Dutch proverb that expresses the first mode of forward listening. It translates as “a word is enough to the wise,” but the English translation does not convey the same message. Just “half of a word” is enough for the Dutch listener, because she or he is a “good listener”: good listeners can grasp the intention or message from an incomplete gesture. The stem verstaan, like the German word verstehen, can be translated as both “hearing” and “understanding.” German philosopher Hans-Georg Gadamer, who extensively elaborated on verstehen, states that a text is like a question (352-53). In order to understand a text, the interpreter has to understand the question and thus to understand the one who poses the question. Forward listening is a practical verstehen, because it is related to an incomplete work, while Gadamer’s hermeneutic verstehen implies a completed work and total understanding of meaning. Forward listening is therefore twofold: first a practical verstehen, followed by a performative, explanatory or technical listening.

The performative listening that Bakker and Zwart practice concerns a work in progress. They do not hear “the future” as it will come, but they hear a future that can come. Their listening is not disclosing a truth but constructs a relation between the heard and the potential future. This performative future listening can be explained with the figurative expression “voor ogen zien” (“seeing in front of the eyes”). In Dutch, this means that one can imagine something added to the reality that is already optically perceived. Zwart proposed “voor oren horen” (“hearing in front of the ears”), which is analogous to the Dutch expression (Bakker and Zwart, 23 Feb.). I suggest calling the session musicians’ practice forelistening, as a derivative of foreseeing. As musicians forelisten, they strive to understand the technical means that they will need to realize the added potential that they hear in actual music. Because it is guided by an ideal, the practice of forelistening can continue endlessly as musicians keep hearing possible improvements to the sound. Therefore, musicians practice forward listening during the whole process of recording and not only in this initial stage.

During a day of recording, one’s perspective on the song can change, so the ideal sound also changes and forward listening continues. Forelistening’s agential qualities manifest these situations of potential sound realization. Forward listening performatively separates versions or manifestations of the artists’ musical ideas.

Construction of roles

As a separate stage of the recording process, forward listening occurs mainly at sessions with session musicians positioned next to the artist. Artists eventually have the final word in decisions as they provide the first musical and textual gestures, and own the copyrights of the recorded music.

The artist cannot listen forward, because the musical gestures are what have to be understood (verstehen). In Gadamer’s terms, the artist is the one who poses the question to be understood by others. Nonetheless, artist is a role that can be adopted by anyone at any time and does not always encompass all of the qualities I have described.

The Blue Grass Boogiemen exemplify this flexibility in roles. The songs they recorded are a supplement to In The Pines: a comic book by Erik Kriek. The comic artist translated five murder ballads into comic strips and invited the Blue Grass Boogiemen to record the songs. In this case, the Blue Grass Boogiemen act as session musicians. The band practices forward listening in relation to existing ballads; their forelistening concerns their own idiom. For example, they interpret “Where the Wild Roses Grow,” initially recorded by Nick Cave, by adapting it to their instrumentation: mandolin, banjo, bass and guitar. The Blue Grass Boogiemen also usually perform their own songs alongside interpretations of traditional ones, so they inhabit the role of artists on other occasions. Kriek also sang and played guitar on some songs and thereby trusted in the opinions and authority of the Blue Grass Boogiemen. These sessions show that artist and session musician are performed roles and not fixed identities; they change according to the mode of listening employed. Forward listening is the agential cut which separates the artist from the session musician.

Forward listening shows that producer is also a performed role. According to Hagenaars, the producer is a virtuoso multi-instrumentalist who knows how to achieve a specific sound and imposes his preferences on the sessions (3 Mar.). The “virtuoso multi-instrumentalist” quality of the producer is mainly carried out by the musicians, while Hagenaars’s forelistening concerns musical interpretation as well as the operation of the recording devices.8 This leads Hagenaars to say that the role of producer is shared between him and the musicians (3 Mar.). Together they verstehen what an artist wants; the musicians know how to play what they forelisten and Hagenaars knows how to record it. This necessarily happens before recording can commence.

Directed listening

Forward listening is a creative and communicative listening which, in some cases, is skipped as a separate stage. This is typical for recording projects that do not involve session musicians. The Mysterons did such a project when they recorded their single “Mellow Guru.” Hagenaars still practices forward listening, but only during the day of recording and not as a separate stage. From the moment they set up their gear, the Mysterons started practicing music to get in the right mood for playing the song. Often, musicians do this to master the structure of the song and to set the direction for the actual recording. I call this mode of listening directed listening, because it involves getting everyone in sync and ready to record the song properly.

8 I expand on these devices and their operation in chapters 2, 3, and 4.

Ascoltando

I distinguish two modes of directed listening—the first is formulated in Szendy’s book with Nancy, who wrote the foreword. Nancy proposes ascoltando (“listening” in Italian) as an instruction for musicians to play while listening (ix). A musician always has to listen, of course, but the instruction “ascoltando” indicates a kind of playing that is characterized by listening. Szendy adds that “ascoltando” evokes “following” (105-10); therefore, it implies one listener’s submission to the expression of another. The musicians start playing before they intend to record, not only in order to follow one another, but also to let a mutual submission establish itself.

When musicians play ascoltando, recording commences. The first element that has to be recorded is the basic take. A take is the totality of the parts that are recorded at once. The basic take is this totality and it functions as a basis for the recorded music. The musicians play their main parts for this take, causing the structure (form, harmonies, rhythms) to take shape. During this recording process, musicians never reach complete submission, because this is an ideal that cannot be attained. When they are finished playing, Hagenaars invites the musicians into the control room to listen to the recording. The musicians, for their part, often refuse to listen to the take. This can occur either when they already feel that the take was lacking in quality or when they judge that they have come close to collective submission and therefore want to continue recording to maintain the harmonious state. Musicians will allow the interruption only when they feel that the basic take was good enough to review.

Guide

The second mode of directed listening has practical implications for recording the basic take. For instance, both Danny Vera and Josephine van Schaik, singer of the Mysterons, prefer to record the vocal track later. Nevertheless, songs are often oriented towards singing. In the case where a vocalist prefers to record their part later, they will sing a “guide” to direct the others. This is a practical mode of directed listening. “Guides” are parts (instrumental and non-instrumental) that are not recorded but function as a direction for the others. Guide tracks should not interfere with the actual recording and therefore are made with the assistance of multiple recording rooms and headphones. Each musician has a personal headphone remote mixing station. With this device, one can individually adjust the volume of the different signals. A musician can, for example, raise the volume of the bass to guide her or his own playing. This does not affect the input signal of the respective track into the control room’s mixing console. Similarly, Hagenaars can make a click track to direct the tempo of the music. This is necessary when the artists wish to later add many more tracks to the basic take, and of course, to avoid unwanted sloppiness.

To sum up, directed listening has two modes: one pertaining to an ideal of complete submission and the other, a practical mode. I indicate the former with ascoltando, while the latter is organized around the guide deployed by the actors at SSE-Noord. The two modes interrelate because the former can implicate the latter. The Mysterons stated that they can only play well (ascoltando) when Josephine sings along, but this is only possible through the use of guides (14 Jan.). Directed listening happens in the instant before recording and is necessary for proper recording. Directed listening involves devices that encompass the complete recording system.

Before recording commences, actors at SSE-Noord perform different modes of listening. This practice challenges theories on listening, including Gadamer’s verstehen and Nancy and Szendy’s ascoltando, because it shows that listening always pertains to an ideal of undisturbed communication. The modes of listening discussed in this chapter resemble the concepts developed by these theorists, but relate to a practical situation; they focus on partial understanding, because complete understanding is an unattainable ideal. These modes of listening reach partial understanding specified by their particular functions: attunement for sympathizing between actors, forward listening for realization of the musical idea, and directed listening to submit to each other’s playing. SSE-Noord reveals the performativity that was previously unheard in the theories I discussed in this chapter.

The performativity of these three modes of listening occurs in the translation between the temporal manifestations of the musical idea preceding the studio work (i.e., live performances or demo) and the planning of the recording process. That is, the three modes make clear for each human actor what the recording entails and which steps should be taken to realize it. The agency of these three modes of listening signifies a simultaneous departure from the musical idea and the commencement of the recording process. The musical idea exists only because the modes of listening at the studio recognize it as such, each working towards its realization. These modes comprise the primary stage of recording, because they separate idea from recording, while preparing the actors for the recording process. These modes of listening necessarily happen before recording can commence.


The Middle Ear: Recording

Danny Vera and his band are working on the drum part for “Wrong,” after recording the basic take earlier in the day. Because it is one of the more up-tempo and aggressive songs on the album, Danny wishes to expand the sound and feels that the drum part needs body. Ben records the clatter of a cutlery tray on every first beat to give it a sharper unfocused metallic sound. He also suggests stamping on the floor to thicken the sound. Danny is enthusiastic about this idea and immediately jumps around the main recording room to find a good sound. Frans reminds the others that the room is not suited for this activity. He has built extra thick walls and windows, put dampers on the walls, and added carpets to the floor to avoid unwanted vibrations. Through this design, he intends to capture a dry sound which he can manipulate later.

While Frans explains this to Ben and Reyer, Danny continues his search for a way to make the right deep-wooden sound. There is no sound insulation in the toilet and the kitchen, but the former is too small and the latter has stone tiles. The stairs are too dangerous and the control room upstairs has also been designed to reduce vibrations. Danny finally finds satisfaction in the office: a small messy room. However, Frans foresees problems with the office, because it is the only room in the studio that is not connected to the mixing console. There is no output for headphones and no input for microphones. Nevertheless, Danny insists on using the room’s floor. Improvising, Frans builds an apparatus with a very sensitive microphone, which is placed on a stand and directed through the doorpost. The stand is placed in the hallway so that the wire can be connected with an input located in another room. Then, the basic take is softly replayed in the control room at the other side of the hall, so that the musicians can follow the tempo and rhythm. The plan succeeds and the sounds of feet stamping on a wooden floor are added to the drum part.

Frans Hagenaars is proud of his studio because there are no “mistakes” in the rooms, so that the subsequent recordings “do not lie.” Hagenaars can record easily and in a relaxed manner because of the well-designed sound insulation in the recording rooms. Recording begins as sound enters the recording system through microphones. Sounds are amplified between the microphone and the mixing console. Without amplification, sound will not be audible; without sound, recording is pointless. In this chapter, I investigate the specific modes of listening that establish the studio as a recording space. I examine two actors that enable sounds to be recorded: the microphone and the room. These nonhuman actors perform modes of listening so that the ideas of an artist can become audible for the recording system.

The Microphone

While the musicians set up their gear, tune their instruments, and begin to rehearse, Hagenaars is setting up the microphones and amplifiers, connecting everything to the control room. He performs the role of sound engineer: the person who enables the music to be recorded and is responsible for the overall sound. With “sound,” I mean both the material and aesthetic properties of the sound, because he makes sounds audible but also creates a particular aesthetic with them. Hagenaars owns a range of different microphones, each with a characteristic sound. He decides how many microphones to use, for which purposes, and at what distance from their sources. He creates the connection between the musicians and the control room. Similarly, the bones in the middle ear are the medium. They connect incoming sounds from the external ear to the processing center in the inner ear. The microphone is another medium that connects the sounds produced by instruments with the control room. This medium can either offer assistance or resistance.

Assistive media

If there were no bones in the middle ear, then incoming sounds would immediately knock against the oval window (the oval-shaped membrane that separates the middle and inner ear). These sounds would either not be audible or at least be very weakly heard. Bones assist hearing as they amplify sound by fifteen to twenty times (Widmaier, Hershel and Strang 241). This is possible because there is a translation: air pressure waves are received by a membrane which sets a bone lever into motion. Compression and rarefaction of air are translated into bone movement. Similarly, microphones need amplification to translate air pressure into electric current.

According to McLuhan, with translation from one medium to another comes a redistribution of the senses; every medium appeals to some senses more than others in a unique way (19, 49). This “alteration of the sense ratios” seems anthropocentric because it views the human as an inseparable unit. McLuhan’s formulation thus deploys the linguistic performativity that Barad (803) is arguing against. To take up her critique, people perceive the ear as a unity because people describe it with a singular noun. Viewed from a functional perspective, however, one can still come to the same conclusion: the ear is perceived as a unity, because the ear acts as such. When media are figured as “extensions of the body,” as in McLuhan’s conceptualization, the implied body is a human body and thus humans are at the center of the worldview. A close study of the sense of hearing reveals a process of translation inside the hearing organ. By examining a dissected ear, one can see that the organ does not consist of one material but is instead composed of a combination of media. The claim that media are extensions of the senses is anthropocentric, because it is oriented towards human perception.9 This view neglects the heterogeneity of the media in the ear and also obscures the fact that the flow of information within it is bidirectional. In other words, the performativity of the organ goes unnoticed.

Reviewing McLuhan’s terminology, however, makes his theory useful for developing the concept of performativity. For instance, the word “sense” indicates more than just the organs of perception, but also refers to comprehension. Reviewing McLuhan’s theory with this broader definition allows for a non-anthropocentric perspective. Sense can be understood as a mode of comprehension. To communicate information from one mode to another, one has to translate—with comprehension comes translation. By its nature, translation alters the information, because every mode of comprehension has its characteristic limitations. When one views the ear as a heterogeneous assemblage of modes of comprehension, one can better understand that sound is changed through its perception. So the auditory sense is not unidirectional and anthropocentric, but instead performs a bidirectional translational agency.

9 I do not argue that McLuhan intended for his theory to be anthropocentric, because he also states that “man is the sex organs of the machine world, just as bees are for the plant world” (51).

Microphones have a function similar to that of the bones in the middle ear. They translate air into a mode of comprehension that is completely focused on sound transmission. This is an alteration of the sense ratios, because air has many qualities and is relatively bad at transporting sound, while bones and microphones are well-suited to the task. In McLuhan’s terms, air is a “cool” medium because it serves many senses, while the other two are hot, because they are solely focused on one sense (24-35). Of course, this is still too simplistic. Each person is unique and each ear is like a fingerprint; indeed, every set of middle ear bones differs in shape and size. Similarly, every microphone is unique and there are, of course, different types of microphones with characteristic modes of comprehension. Microphones are designed to listen to specific sounds in specific environments for particular purposes. While a person cannot change her or his middle ear bones, microphones can be changed according to the demands of human actors. Different microphones thus assist the recording of sound in their own particular ways.

To take one example, The Mysterons reserved the second day of recording for the vocal track. Van Schaik, their singer, started the day by choosing a microphone. Hagenaars suggested that she test the U47 and the M49.10 She sang a test part with both microphones to decide which one to use for the actual recording. The band was amazed by the sounds of both but also perceived clear differences. The U47 appeared to emphasize high-frequency peaks while the M49 was judged to have a “rounder” sound. Opinions differed because the thinner sound of the U47 better matched the song, while the M49 was appropriate for her voice. In the end, Van Schaik chose the latter, suggesting that she performed better when she heard her own voice through this microphone.

10 These two are often considered among the best microphones ever produced (Garner 30-51; Howlett 1).

Expressing this in terms of sense ratios, one could say that the U47 listens more accurately to high frequencies while the M49 treats the totality of frequencies equally. In other words, each microphone performs a different mode of listening, thereby communicating information differently. This example shows that microphones perform a listening agency, because they alter the registered sound according to their technical specifications. These specifications differ from model to model but also between microphones of the same model, because every single microphone is affected by corrosion and other natural processes. All microphones translate air pressure into electric current, but each produces a unique sound and each listens differently.

Resistive media

One can view a microphone as an aid for recording the voice, but one can also understand it as an obstacle to ideal audibility; any media placed between the message and the receiver can also be seen as creating resistance.11 In the end, the studio aspires to deliver the ideas of the artist in the form of a song or an album. Recording is the process to make these ideas audible. This is the paradox of recording—for the ideas to be audible, they have to be translated from medium to medium and handled by different actors. As I mentioned in the section entitled “Tuning,” it is impossible to transfer ideas from one person to another without disturbance. Hagenaars acknowledges this, stating that ideally the studio is transparent, meaning that it is “not to be heard” (3 Mar.). If the studio were conceived otherwise, it would mean that he imposes his own opinion and his own sound onto the artist. This approach is one he despises—Hagenaars intends to think with (and not for) an artist, hence his preference for the role of director over producer. Nevertheless, he admits that his studio inevitably has its own sound (Hagenaars, 3 Mar.). Media as resistance corresponds to the concept of the parasite as theorized by French philosopher Michel Serres.

11 In engineering discourse, disturbance of communication is expressed by a “signal-to-noise ratio.”

The French word parasite refers both to a biological parasite and to electrical static. The parasite interrupts a stream and gives it another purpose or destination; the tapeworm interrupts the stream of food for the host and makes a source of food for itself. The operator of a medium is always subordinate to the laws of the medium, claims Serres (38). Microphones interrupt the stream of air pressure, transforming it into electric current. The sound engineer is subject to these laws of sound transmission. This explains why Hagenaars has so many different microphones. Microphones parasitize him, but at least he can still choose which microphone will act as parasite. From Serres’s perspective, one could say that microphones have agency in their relation with the people who use them. They act as parasites on the idealized sound that the engineer is trying to realize.

I mention the parasite here as a contrasting view to McLuhan’s media theory. Both concepts explain the variety of studio recording devices and both stress the translating functions of media. McLuhan defines media as extensions of the body, thereby figuring a center (the body) with extensions (media). In this view, each medium has a different sense ratio, emphasizing some senses, while de-emphasizing others. Media differ from each other precisely because they emphasize different senses or modes of comprehension. Serres, on the other hand, rejects the idea of a center, showing that everything is mediate and that audibility is an ideal. The two theories seem to complement each other. Following McLuhan, we can see that every microphone performs a unique mode of listening. The agency of the microphones, in this perspective, concerns their potential to affect the sound signal. Following Serres, on the other hand, all media create the conditions for communication, because without them, streams would not be interrupted and redirected towards a receiver. The agency, in this perspective, is manifested in the constitution of subject and object, or, phrased differently, of parasite and parasitized.


The Room

When microphones listen, they register not only the direct input of an instrument or voice, but also a delayed input of the same sound. This is caused by reflection of sound against the walls and furniture of the studio. Because not every sound reflector is placed at the same distance or made of the same material, sound reflects at different times, speeds, and volumes, diffusing into a unique ambience. Just as Hagenaars owns multiple microphones, SSE-Noord has several recording rooms. Adjacent to the main recording room is a smaller room that allows for the drums to be recorded separately from the other instruments. This is intended to prevent sound leakage, but it also creates the possibility to open the door and extend the acoustics into the uninsulated hall.

Reverberation is a word that describes the ambient totality of sound. SSE-Noord is designed to diminish unwanted reverberation; the less there is, the easier it will be to process and manipulate the sounds later. Nevertheless, reverberations assist and amplify sounds, promoting their audibility.

Construction of the self through listening

Vibration of the eardrum and the consequential motion of media inside the auditory organ is commonly called “hearing.” One’s own voice is heard through the bones of the skull and through the Eustachian tube that connects the middle ear cavity with the pharynx. No one can hear your voice in the way that you do. This hearing results from the connection between your intention (voice) and your physical body (vibrating bones and eardrum). This process therefore constructs a self-image, or rather a self-audition. Jean-Luc Nancy points out that one hears oneself inside the body and constructs a self as a subject, because it is immediately clear that sound needs to be sent and to be received (16-17). It is the immediate referral to the self from the self that makes the self a subject of itself.

A similar process of referral also happens in non-human bodies. For example, electric guitars make almost no sound when they are not electrically amplified, because they have a very limited soundbox for sounds to reverberate within. By contrast, acoustic guitars can be easily heard because of their larger soundboxes. For a guitar to function as a musical instrument, it has to resound itself; in other words, it must listen to itself. The soundbox’s reverberations are the immediate referral of a guitar’s sound. Although an electric guitar needs other media to be heard clearly, both electric and acoustic guitars feature a relation of something (the strings) that brings airwaves into motion and something (the sound box) that reverberates these airwaves. This relation is what constitutes it as a guitar and allows it to be handled as such by human actors.

The specific construction of form and material makes each guitar sound unique. Each guitar is a self in its own right, because it is a unique combination of sound source and reverberation. The difference between reverberation and vibration is a matter of scope determined by the body. Both phenomena refer to the reception of sound on a surface and a consequential resounding, affected by the shape and fabric of that surface. While reverberations mainly concern an outside or an inside of a body, vibrations connect the outside with the inside of a body. Walls are perceived as reverberating because a listener usually hears only the reflection of sounds that come from the same side (interior or exterior) as the listener. However, one could perceive sound from the other side of a wall (for instance, imagine standing outside a discotheque) and know that walls also vibrate. In both situations, the wall performs the same material agency, although humans perceive its effect differently. Reverberation shows listening’s performative agency as it constitutes a relation between sound source and reverberating surface, thereby constituting a material unity. This example shows that listening is also an activity performed by non-human agents and it thereby extends Nancy’s conceptualization of listening.

Construction of SSE-Noord

The acoustic connection between source and body makes every sound unique, as every surface reflects sound differently. This is also what gives a particular room its unique sound. Hagenaars limited the reflection of sound by putting soft materials on the walls and floor. Soft fabric absorbs rather than reflects airwaves. Hagenaars aspires to control the studio's ambience, so that ideally, the sounds that are created are also the sounds that are recorded. Nevertheless, the walls resound in their own way, creating the particular sound of SSE-Noord. The sound engineer manipulates sound by moving microphones closer to or further from the source, thereby recording fewer or more reverberations. Ambience is welcome as long as it is controllable, because it shapes the uniqueness of the recorded sound.

Two sets of actors—rooms and microphones—perform the reverberation and registration of sound and thereby establish the studio: a space to record sound. This seems to resemble the commonly held view that the studio is simply a place where artists come to document their songs.

However, I have added nuance to this view by showing that the human and non-human actors in a studio practice performative listening. Through their listening agency, these actors add ambience to the produced airwaves and translate these into electric current. This changes the relation between human actors and the recording process, because the non-human actors perform a disturbance of the musical idea that enters the recording system. The recorded music features the unique sound of SSE-Noord's rooms and microphones, although this is probably only unconsciously perceived by the passive listener.

Nancy’s notion of self construction through listening can be extended to the assemblage of human and non-human actors that constitute the recording studio, because the studio, like human listening, involves a self-referential chain of translations. In this perspective on the studio, the theories of McLuhan and Serres paradoxically invigorate each other, because the latter stresses media’s fundamental necessity for noise while the former emphasizes that every medium deploys a unique sense ratio which performs that noise. The studio is not a place of documentation, but performs a listening that transforms both the sound and its producers.


The Inner Ear: Processing

Walking up and down the stairs between the recording room and the control room is a weird experience, because the music sounds different on each floor. I stand in the doorway of the recording room listening to the Blue Grass Boogiemen. I mainly hear the loud banjo, despite the fact that Bart, the banjo player, sits with his back to the door. Meanwhile, although Aart Schroevers and his bass are facing me, I cannot really hear him play. Upstairs, by contrast, I can hear everyone together. Frans is operating his equipment; as he turns the knobs on his mixing board, the sounds of individual instruments change. The banjo is still loud here, but Frans adjusts this. Compression lessens the volume of the instrument without diminishing its intensity. The mandolin, on the other hand, is enhanced by a low-pass filter, giving the lower frequencies more prominence. Annita's voice comes into the mix via the reverb plate, causing heavy reverberation. Erik's voice, meanwhile, is passed through the tape recorder to give it a delay effect. Gradually the combined music sounds like a band. The sounds are balanced and clear—each part is audible on its own but everything also coheres into a unity. In the control room, I finally hear the song "Caleb Meyer" and all of its parts together, rather than the banjo-heavy sound which I encountered in the recording room.

At the end of the electrical circuit of sound is the A/D converter: a device that translates analogue signal into digital code. Digital code is interpreted by the computer which in turn directs the mixing console. Similarly, at the end of the trajectory of sound within the ear, a small organ called the organ of Corti translates mechanical power into electrical signals and sends them into the nervous system. Before sound enters the mixing console, it passes through several analogue devices that manipulate it. These devices can be divided into two kinds: imitative and reflexive. The first kind imitates sounds and thereby imitates space, because reverberation consists of multiple sound imitations and the brain perceives reverberation as space. The second kind of device reflexively responds to sound according to preset parameters. These devices, together with the microphones, make up the chain of media that sound passes through before it enters the mixing console. In the previous chapter, I examined non-human listening by microphones and rooms. In this chapter, I investigate to what extent these analogue devices also perform non-human listening.
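To make the A/D converter's translation concrete, the minimal sketch below, in Python, quantizes a sampled signal into integer code. It assumes a signal normalized to the range [-1, 1] and a 16-bit depth; the thesis does not specify the actual resolution of the converter at SSE-Noord.

import numpy as np

def analogue_to_digital(signal, bits=16):
    """Map a sampled signal in the range [-1, 1] onto signed integers.
    The bit depth sets how finely amplitude differences are resolved;
    anything finer than one quantization step is lost in translation."""
    levels = 2 ** (bits - 1) - 1                      # e.g. 32767 for 16-bit audio
    clipped = np.clip(signal, -1.0, 1.0)              # a converter cannot exceed full scale
    return np.round(clipped * levels).astype(np.int32)

# analogue_to_digital(np.sin(2 * np.pi * 440 * np.arange(44100) / 44100))
# yields the integer code a computer would store for one second of a 440 Hz tone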

Imitative Devices

The human auditory system does not feature an imitative function. Regions in the brainstem do connect sounds that are registered with similar properties within a time interval, and they determine location by measuring the time interval (which can be translated into distance) between these sounds that are perceived as similar. The brain associates different kinds of repetitions with different spaces and thereby helps to orient towards a sound source. Because the auditory sense only registers imitation and does not create it, one can wonder whether the imitative devices of the studio are "listeners" or not.

Echo, reverb and delay

Echo, reverb, and delay are the three terms used to indicate sound imitation in sound engineering. At SSE-Noord as well as on the internet,12 these terms are perceived as referring to different effects. Although they are distinct, they all stem from the same sound imitation principle.

One can conceive of them on a three-dimensional coordinate system. Point zero in this space represents a single iteration of sound, with no diffusion (first axis) and no decay (second axis). This represents delay, which is the distinct iteration of a sound within a fixed time interval. The iterated sound has exactly the same pitch, duration, amplitude, and characteristic timbre. Sound imitation is called "delay" when there is little to no diffusion or decay. The third axis, which refers to the number of iterations, does not require a change in terminology. Sound imitation is called "reverb" (a shortened form of reverberation) when diffusion and decay are added to such an extent that one does not perceive any distinct reflections. The amount of diffusion and length of decay can theoretically be extended to infinity, and the effect will still be called "reverb." Like delay, echo is a distinct imitation of a sound with multiple copies, but, like reverb, it also has a certain amount of diffusion and decay.

12 I examined the first hits on Google and YouTube (Academy AV; Ben; crazyfiltertweaker; Hunter). These sources discuss the differences between the terms, yet they all make different claims. Scrolling through other hits shows even more variety in explanations.

Only machines can imitate sound as delay. In an acoustic situation, a sound is imitated as it is reflected by a surface, but this will always be a diffuse sound. In addition, any given sound will most likely reflect from multiple surfaces at the same time. Delay thus only exists via machines, because only machines can precisely copy a given sound. When the effect of an imitative device tends towards delay, the input sound is returned as a rhythmic output. A single tone contains pitch, volume, duration, and timbre, but the repetition of this tone adds a rhythmic dimension, because the original sound is perceived as on the beat while the imitation is perceived as off-beat. Hence, delay is not a function of hearing, but a specific tool from sound engineering with an aesthetic purpose.

Contrary to what one might assume, imitation as reverb, too, can only be achieved by machines. SSE-Noord has an EMT140 plate reverb specifically designed to create reverberation. The steel plate reflects sound and, because these reflections are diffuse, reverberation is perceived. This reverb sounds distinct from spatial reverberations, because the device's reverberation is created by one surface rather than a multiplicity of surfaces. When the effect of an imitative device tends towards reverb, it nevertheless adds spatiality to the sound. Our auditory sense is trained to infer the spatial dimensions and material of a surface from its reverberations. One's mind perceives an imaginary space for these sounds, because they are notably different from those created in an acoustic situation with its multiplicity of surfaces, fabrics and materials. Reverb, like delay and echo, thus only exists as a modifiable device.
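Because all three effects stem from the same imitation principle, a toy sketch in Python can make the distinction concrete: discrete repeats whose decay and diffusion decide whether the result reads as delay, echo, or reverb. The parameter names and default values are illustrative assumptions, not settings used at SSE-Noord.

import numpy as np

def imitate(signal, sr, delay_ms=350.0, repeats=4, decay=0.5, diffusion_ms=0.0):
    """Toy sound-imitation effect. With no diffusion and little decay the
    repeats stay clean and rhythmic (delay); adding decay and a smearing
    window shades the result towards echo and, at the extreme, towards a
    reverb-like wash in which no single reflection stands out."""
    spacing = int(sr * delay_ms / 1000.0)              # gap between repeats, in samples
    width = max(1, int(sr * diffusion_ms / 1000.0))    # width of the smearing (diffusion) window
    kernel = np.ones(width) / width                    # simple moving-average "diffuser"
    out = np.zeros(len(signal) + repeats * spacing)
    out[:len(signal)] += signal
    copy = np.asarray(signal, dtype=float)
    for n in range(1, repeats + 1):
        copy = np.convolve(copy, kernel, mode="same") * decay   # each repeat is quieter and more smeared
        out[n * spacing : n * spacing + len(signal)] += copy
    return out

# diffusion_ms=0.0 gives a clean delay; decay=0.7 with diffusion_ms=30.0
# blurs the repeats into something closer to echo or reverb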

Is imitation listening?

These imitative devices are like mirrors. Mirrors do not watch but nevertheless change what can be seen. Imitative devices resist being understood from a listening perspective. It is hard to call imitation "listening," because effects like delay, echo, and reverb are added to the input sound rather than perceived within it. One could argue that the forelistening of human actors, which I conceptualize in chapter 2, is similar. However, the difference is that the human listeners really hear things that are not present in the music, because they have a wide frame of reference. In addition, the outcome is never exactly what they hear, because forelistening is oriented toward an ideal. These imitative devices are the opposite—they do not have a frame of reference, yet they perform accurately. Thus, forelistening is a mode of listening while imitation is not.

In McLuhan's terms, imitative devices are extensions of the tactile sense or the hand, rather than extensions of the auditory sense or the ear. An echo can be produced manually by cutting and splicing the audio track several times. Hagenaars, for example, manually created an echo by copying a part of Berend Dubbe's voice track and pasting several copies, at set time intervals and with decreasing amplitude, away from the copied part.13 I found the resulting sound alienating, because while I perceived it as an echo with depth, I could not relate it to a space in the physical world. This is exactly what imitative devices accomplish as well. The work, when it concerns a normal echo, is outsourced to imitative devices, which replace (or extend) the manual manipulation of music. Unlike rooms and microphones, these imitative devices are not non-human listeners; they are more like the plectrum, a tool used by guitarists to extend the human fingers and create a particular sound. However, because they change the relation between listener and sound, these imitative devices do perform non-human agency, just as a plectrum augments a guitarist's style of playing. Therefore, imitative devices do not listen, but rather change what can be heard.

13 Hagenaars did this by hand because Dubbe wanted the echo to precede the initial sound.

Reflexive listening

Music cognition scholars Kai Tuuri and Tuomas Eerola claim that reflexive listening is the first mode of listening, or initial response, to sound. This includes responses like shock or attempts to locate the sound source. Hence, they also state that it is not a "pure" mode of listening because it is not conscious, or at least is characterized by a lack of focus (141). These scholars describe an unconscious process resulting in a clear bodily response (for example: startling). Similarly, the inside of the ear also contains reflexive mechanisms. These have more ambiguous bodily effects, although they are significant for hearing. The first reflexive mechanism is located in the middle ear and acts to protect the inner ear from damage. The other is located in the inner ear and enhances audibility. In contrast to Tuuri and Eerola's conceptualization, I argue that reflexive listening is made conscious in the studio. Both reflexive mechanisms are present in the recording studio, embodied respectively in the compressor and the equalizer, and each can be intentionally modified by human actors.

Compressors reflexively diminish the volume by a preset ratio whenever the input exceeds a preset threshold, so that sounds are audible (i.e., not overloading the system) and mixable in the later stages of recording. A similar mechanism is also at work in the ear. The eardrum's muscles can be tightened or slackened to control the input level of the ear, because the inner ear needs to be protected from too much energy (Widmaier, Hershel and Strang 241). An expander is the opposite of a compressor; this device diminishes the sound below a certain threshold. This makes a signal cleaner, because ambient noise is filtered out. Hagenaars does not use expanders, because the recording room at SSE-Noord already adds little ambient noise and also because he prefers a slight rumble. Although expanders and compressors have similar reflexive mechanisms, the former is not present in the ear while the latter is.
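As a rough illustration of the threshold-and-ratio behaviour described above, the sketch below applies a static compression curve to a signal. The threshold and ratio values are arbitrary examples, and the attack and release smoothing of a real compressor is left out.

import numpy as np

def compress(signal, threshold_db=-18.0, ratio=4.0):
    """Static compression curve: any level above the threshold is scaled
    down so that it exceeds the threshold by only 1/ratio as much. Real
    compressors also smooth this gain over time (attack and release)."""
    eps = 1e-12                                          # avoids taking the log of silent samples
    level_db = 20 * np.log10(np.abs(signal) + eps)       # instantaneous level in dB
    excess = np.maximum(level_db - threshold_db, 0.0)    # how far each sample exceeds the threshold
    gain_db = -excess * (1.0 - 1.0 / ratio)              # reduce the excess down to excess/ratio
    return signal * 10 ** (gain_db / 20.0)

# with ratio=4, a peak 12 dB over the threshold comes out only 3 dB over it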

Equalizers are devices which boost or weaken specific frequency bands within a signal, resembling a function of the ear, which can also discriminate among frequencies. The inner ear is a spiral tube that is divided by a membrane. This membrane is smallest and most tense at the base of the spiral. It progressively widens and slackens, allowing high frequencies to be received at the base and low ones at the top. This spatial mapping of frequencies is called tonotopic organization (Békésy). The organ of Corti is located on the membrane and reflexively ignores specific frequencies in order to focus one's listening (Wolters and Groenewegen 190). Hence, one can discriminate between a conversation and background noise, by focusing on the specific frequencies of one's conversational partner. Equalizers perform an improved function of this reflex, because they enable manipulation of the sound through the precise (de)emphasizing of particular frequencies.
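For readers unfamiliar with equalization, a minimal frequency-domain sketch follows; it boosts or cuts one band by scaling the corresponding frequency bins, whereas studio equalizers achieve the equivalent with analogue filter circuits. The band edges and gain in the example are placeholder values.

import numpy as np

def eq_band(signal, sr, low_hz, high_hz, gain_db):
    """Boost or cut a single frequency band: transform to the frequency
    domain, scale only the bins inside the band, and transform back."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    band = (freqs >= low_hz) & (freqs <= high_hz)
    spectrum[band] *= 10 ** (gain_db / 20.0)             # +6 dB roughly doubles the band's amplitude
    return np.fft.irfft(spectrum, n=len(signal))

# e.g. eq_band(track, 44100, 2000, 5000, -3.0) gently tames a harsh upper-midrange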

Compressors and equalizers are devices which perform functions analogous to those found in the human ear. Expanders, by contrast, do not mimic a mechanism in the ear. Nevertheless, I consider all of these reflexive devices to be non-human listeners. The compressor, for example, diminishes the amplitude of loud sounds a little too late and continues diminishing the amplitude even when it is no longer necessary. In other words, compression strives toward an ideal. This ideal is determined by the specific limitations of the ear. For example, the ear cannot hear all frequencies with equal attention. In this case, listening is thus modeled on the differences in attention determined by the organ of Corti. One should measure whether something "listens" or not by looking at similar functions within the ear and not by whether some specific mechanism is similarly physically present in the ear.

Non-human listening

All of these analogue devices appear as tools in the studio, both because they are seen as necessary to the recording process and because all enable the manipulation of sounds according to preset parameters. However, whatever a machine does, it can only be called “listening” if its functions (rather than its constructions) are modeled after the ear. This in turn foregrounds the fact that linguistic performativity hovers over academic writing, because it shows that my observations are shaped by the vocabulary I have at my disposal. I therefore identify reflexive (but not imitative) devices as non-human listeners.

Equalizers and compressors perform specialist modes of listening, because they focus on and improve two reflexes of the ear. These machines enhance the technical audibility of the tracks, because they manipulate them in preparation for mixing. When Hagenaars sets up such devices, he performs the role of sound engineer, getting the signal ready for the mixing process. Imitative devices help in this process, but they do not listen as Hagenaars does. Applying reverb by itself does not enhance a track's audibility, for it only does so when other tracks are assigned different degrees of reverb. On the other hand, reflexive devices do enhance the audibility of the track itself, by adjusting for when an input signal is too loud, whether in its totality (pertaining to compression) or only in some frequency bands (pertaining to equalization). In this way, they prevent distortion, while providing greater control over the range of frequencies and the sound's overall audibility. Hence, reflexive devices practice performative modes of listening while imitative devices do not. While the rooms and microphones in the previous chapter make sounds audible for the recording system, the non-human actors in this chapter enhance the modifiability of sounds for the succeeding steps in the recording process. These actors perform an agency that can only take place between microphone and mixer, because before entering the mixer, the characteristic sounds of the microphones have to be optimally modifiable.


The Nervous System: Mixing

Having recorded the vocal parts and second instrumental parts (dubs) in the previous week, Frans Hagenaars and The Mysterons today begin mixing their song "Mellow Guru." Frans starts by re-recording the guitar track. The guitar was recorded "direct-in,"14 so by rerecording they seek to add the warmth of amplifiers to the mix. On these extra tracks, they experiment with different effects. Guided by the tempo of the music (144bpm), Frans assigns a slower (140bpm) and a faster (148bpm) delay to the left and right channels of the guitar, respectively. The resulting sound is wide and spacey. Guitarist Brian Pots and the other band members think that the sound has become too spacey. Frans, admitting that the music press' description of The Mysterons' sound as "psychedelic" has misled him, removes the effect.

The band thinks that the sound of the direct-in signal from the guitar is still too "dry," so Frans rerecords the tracks with microphones placed at different distances from the amplifier. Through this combination, they seek a more "dreamy" signal from the distant microphone and more "body" from the closer microphone. They next decide to gradually mix these two sounds together. The sound of the closer microphone diminishes in volume as the other slowly fades in until they are both at the same volume. The sound of the guitar becomes more spacey over the course of the song. After this, Frans joins all of the separate tracks of the guitar dubs into one unified track. This track is then "panned," meaning that different gestures are spatially distributed between the left and right channels. The Mysterons are enthusiastic about the result, as the guitar part now adds enough spaciness and strangeness to "Mellow Guru" to attract the listener's attention without drifting into psychedelia.

14 The guitar was directly connected with the mixing console. This is called "direct-in" recording and is opposed to acoustic recording with microphones.
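For readers who want to see what assigning a delay "in bpm" and panning amount to numerically, here is a small sketch. It assumes the delays are set to one beat at the stated tempos, which the anecdote does not specify, and it uses a standard equal-power pan law purely as an illustration, not as a description of the console's circuitry.

import numpy as np

def bpm_to_delay_ms(bpm):
    """Delay time of one beat at a given tempo: 60,000 ms per minute / bpm."""
    return 60000.0 / bpm

def pan(mono, position):
    """Equal-power panning: position runs from -1 (hard left) to +1 (hard right)."""
    angle = (position + 1) * np.pi / 4                  # map [-1, 1] onto [0, pi/2]
    return np.stack([mono * np.cos(angle), mono * np.sin(angle)])

# against a 144bpm song (one beat is about 417 ms), delays of 60000/140 (about 429 ms, left)
# and 60000/148 (about 405 ms, right) drift slightly around the beat, widening the stereo image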

Thus far, I have only taken the process of one ear of the human auditory sense into account, but humans register the input of two ears. Each ear separately translates mechanical pressure into electrical signal, but the two signals are joined in the internal auditory canal (Pusz and Littlefield 2159). Different regions in the brainstem send signals back and forth between the ear and the auditory cortex: the part of the brain that registers sound. The superior olivary complex, a region in the brainstem, directs the reflexes in the ear and thus interprets amplitude and pitch. It also localizes sound sources as it measures the differences between the two ears in relation to the position of the vestibular system (the organ of balance within the inner ear) (Wolters and Groenewegen 189-90). In the studio, the separate tracks are joined at the mixing console, which at SSE-Noord is a Trident Series 80b.
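As an aside on how "measuring the differences between the two ears" can localize a source, a simplified interaural-time-difference calculation follows; the ear spacing and the straight-line geometry are textbook idealizations of my own, not figures taken from the sources cited here.

import numpy as np

def interaural_time_difference(angle_deg, ear_distance_m=0.2, speed_of_sound=343.0):
    """Simplified estimate of how much later a sound arrives at the far ear
    for a source at a given angle off centre (ignores the head's curvature)."""
    return ear_distance_m * np.sin(np.radians(angle_deg)) / speed_of_sound

# a source 90 degrees to one side arrives roughly 0.2 / 343, or about 0.6 ms,
# later at the far ear; smaller angles give proportionally smaller differences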

The mixing console is a large desk that assembles all of the tracks. Every track can still be individually modified, because the console features 32 separate channels. SSE-Noord is a hybrid studio, which means that it utilizes analogue as well as digital techniques. The analogue mixing console cooperates with the computer, which stores the separate tracks and assigns them to channels in the mixing console. The computer, because it is a digital medium, performs a different sense ratio from that of the analogue devices. The human operators can "listen" visually, as the Pro Tools mixing software translates the digital information into waveforms on the screen of the monitor. The mixing console and the computer therefore extend the possibilities of the studio. The sounds can be stored, replayed, combined or separated, and effects can easily be added and undone. The mixing console thus allows for every stage of recording to be repeated and for the tracks to be listened to in new ways. Additionally, the results from the recording stage are given meaning, because the mixing console interprets them.

As I mentioned in the introduction to this thesis, the studio is a specialist listening space, and this is due to the performative listening carried out by the mixing console. In this chapter, I investigate how mixing enables a definitive listening. At the studio, the actors work toward the construction and realization of an ultimate listening. I have discussed many different modes of listening that precede the finalization of studio work. In the end, all of these modes of listening are aimed at establishing a final mix that leaves the studio to be mastered, pressed, and released to the public. I call this final mix "the ultimate listening," because, taking Szendy's perspective, it represents the combined materialized, assembled, and definitive listening of all of the actors (human and non-human) involved. This idea also contrasts with the concept of a singular listening which can differ from actor to actor. For example, I can still "hear" an extra blues harp during the introduction to one of the Blue Grass Boogiemen recordings, because they initially recorded it in my presence. While this is part of my own listening, it does not exist in the ultimate listening, because it did not end up on the released record. In order to research the completion of a recording project and the establishment of an ultimate listening, I examine two concepts: arrangement and psychoacoustics.

Arrangement

Usually, mixing and recording happen on separate days, devoted to these specific activities. However, while recording and mixing might seem like separate stages of a project, in practice, they are not clearly divided. For example, on recording days, Frans Hagenaars typically finishes by making a so-called "rough mix" of the day's recorded music. This rough mix provides a sketch of the aesthetic idea and communicates the creative atmosphere of the recording to the mixing day. The rough mix primarily contains the dynamic contours of the track and a sketch of the spatial divide. In other words, Hagenaars prepares tracks for the mix during what are otherwise understood as "recording days." This flexibility also applies to mixing days. While mixing, many tracks are rerecorded and, although this does not often happen, new tracks can be recorded and added to the mix. Despite these overlapping working methods and for the sake of clarity, I will examine mixing as a separate stage. To investigate the impact of mixing on the realization of a listening, I take up Szendy's concept of arrangement.

In everyday discourse, arranging indicates assembling parts in a particular configuration to form a unity. Szendy uses the notion of arrangement from musical discourse, claiming that "[arrangers] may even be the only listeners in the history of music to write down their listenings, rather than describe them (as critics do)"15 (36). The arranger translates a piece of music so that it can be played by another set of instruments. In other words, arrangers communicate their listening of a piece by rewriting it. The original work is present in the arrangement, but with certain passages emphasized, ornamented or modified in another way. For Szendy, the arrangement constitutes the original, because the arrangement communicates the initial impression and recognizes it as worth communicating (39-40). At SSE-Noord, the computer arranges (or, plainly, writes down in digital code) the listening and enables others to listen to its listening.

15 Emphasis in the original.

To understand how mixing establishes a listening, one has to depart from the notion of an original. According to Szendy, because the arrangement hints at (yet is not) the original, it creates a degree of desire for the original within the listener (36-37). In other words, the arrangement ideally aspires to be the original, but this is impossible because it is a translation. The notion of the ideal can be extended to the initial piece, which thereby loses its initial-ness. The performed piece, just like the notated piece, is already a translation of the idea of the artist, but the idea is not yet a piece of music. In the studio, the idea passes through many manifestations and none of these can lay claim to being the initial or original.

At every stage, the piece is translated from one medium to another. Translating, like arranging, emphasizes and changes aspects of the input sound. A recorded track hints at the input sound but is not the same, because it is routed through a chain of media. The separate tracks are assembled in the mixing console where they can be arranged. The console preserves the temporary manifestations of the performed idea of the artist. I described my own listenings in the anecdotes at the beginning of each chapter, but there is no better way to listen to the temporary manifestations than through the mixing console. This demonstrates the relative power of listening agency over linguistic agency. Like a human arranger, the non-human mixing console allows its listening to be listened to.

While listening, operators of the mixing console can continually change the tracks and work towards the ultimate listening. Through the use of the console, the mix engineer ensures that every track has the desired volume and sound (i.e., manipulable and mixable). The mixer, embodied by Hagenaars and the individual musicians, starts mixing by balancing the tracks' volume and stereo division according to the artistic idea. In this respect, the role of mixer differs from the role of arranger. At SSE-Noord, the same person performs both the role of artist and mixer (except for Hagenaars, who never acts as artist). An arranger, on the other hand, is usually not the same person (and, in any case, not at the same time) as the composer of the initial music. While mixing, the console creates a version of the piece that could not take place otherwise. The console listens simultaneously to tracks that have been recorded at different times and at different places. The balanced assemblage of these various tracks constitutes a listening that has never happened in reality. The Mysterons, for example, recorded several organ and vocal parts and mixed them into their song. Van Schaik sang the parts of a choir for several successive tracks, while using headphones to listen both to the band and to her own voice. In the recording room, a human actor could only hear her voice, while the mixing console listened to the song and all of Van Schaik's voices together. Mixing is an integral process in the realization of an artist's musical ideas.

In contrast to subjective and ephemeral listening experiences, a studio listening can be preserved, thanks to the mixing console and the computer software. These devices have the agency to “hear out” multiple combinations of the artist’s idea. Mixing is a non-human mode of listening that enables humans to constitute a listening. It presents existing material in a new way and is therefore similar to the process of arranging which Szendy conceptualized.


Psychoacoustics

When mixing, one does not simply balance tracks to adjust audibility, but also processes and manipulates tracks through the implied principles of psychoacoustics, a scientific field which investigates how humans perceive sound. Since understanding of the human brain is limited, psychoacoustic knowledge is obtained through empirical research (Fastl and Zwicker 59-60). This discipline not only enhances the understanding of sound perception but also provides models for manipulating it. At SSE-Noord, Hagenaars makes use of these insights; he frequently refers to research on psychoacoustics when he uses the mixer to augment and shape sound. Mixing is therefore a practical application of psychoacoustics.

The physical capacities and limitations of the ear are fundamental to the utility of psychoacoustics. The ear is not capable of registering every sound in its totality. For example, humans can only hear frequencies between 20Hz and 20kHz, because the inner ear has a limited length. This insight led to the development of equalization, which was later deployed to make telephony more efficient (Sterne 2-3). Equalization emphasizes frequencies found within the human speech range and cuts the redundant (according to the human ear) sounds. This affects the timbre of transmitted voices, while preserving the clarity of their linguistic content. In the end, equalization reduces the amount of electricity necessary for transmitting sound via telecommunication cables. Research on psychoacoustics thus led to the focused manipulation of sound for specific aims.
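As a concrete illustration of cutting "redundant" frequencies, the sketch below reuses the frequency-domain approach from the earlier equalization example to keep only a speech-like band. The 300-3400 Hz range is the classic narrowband telephone channel and is my own illustrative choice, not a figure taken from Sterne.

import numpy as np

def keep_speech_band(signal, sr, low_hz=300.0, high_hz=3400.0):
    """Discard everything outside a speech-like band, in the spirit of
    early telephone equalization: timbre changes, but the words stay clear."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    spectrum[(freqs < low_hz) | (freqs > high_hz)] = 0.0  # zero the "redundant" bins
    return np.fft.irfft(spectrum, n=len(signal))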

Equalization has aesthetic implications (i.e., its effect on timbre), besides the mere practical removal of redundant data. In a studio, these aesthetic possibilities only become meaningful during mixing, which brings together all of the media involved in a recording. A single track can be manipulated, but the result is only audible in relation to other tracks. In psychoacoustics, this relation is called musical consonance and dissonance (Fastl and Zwicker 362-64). Musical consonance is the sum of functional harmonic consonance and a timbre of "high sensory pleasantness" (Fastl and Zwicker 362). Session musicians Bakker and Zwart mentioned that they always search for contrasts within the music, an activity which they identify as "the game of musical consonance and dissonance" (23 Feb.). Bakker, for example, records a snare roll with a microphone positioned close to the membrane. The microphone registers a loud input which is distorted by severe compression. However, by adding a large amount of reverb to the sound and lowering the volume in the mix, the instrument sounds like a distant military drum. This is the very sound Bakker imagined in the anecdote I relayed at the start of this thesis. The snare drum could not have sounded distant and martial if the other tracks were mixed in consonance with it. In other words, the snare drum achieves this particular effect because it is relatively dissonant in comparison to the other tracks, which are instead consonant with each other. The mixing console, because it listens to multiple tracks, creates meaning by alternating and juxtaposing sounds with varying degrees of musical consonance and dissonance.

In the previous chapters, I investigated the relation between actors and stages of a single track as temporary manifestations of a musical idea. During a mixing session, many of the stages I have identified—creation, recording, and processing—reappear in the form of recorded tracks in the mix. Human operators repeat different modes of performative listening of the recorded tracks if necessary for mixing. The mixing console performs a new mode of listening, because it listens to all (or any number the operator wishes) of the recorded tracks together. At this stage, the unique characteristics of the instruments, musical content, rooms, microphones, and analogue devices on each track are combined. Mixing constructs and conveys meaning by listening to these tracks together. In the end, mixing assists in the construction of an ultimate listening in two distinct ways. First, mixing (like arranging) assembles and balances tracks that have been recorded separately from each other. Second, mixing involves the processing and manipulation of these tracks, guided by psychoacoustic knowledge. After mixing, the actors have together established a final, material manifestation of the artist's idea through the various modes of listening that they have employed.


The Brain: Beyond SSE-Noord

At the end of a day of mixing, The Mysterons seem satisfied with their new single "Mellow Guru." Frans reminds them that there are still some things to do before they can go home. First, they have to listen "like twelve-year-old kids do." So he sends the final mix to his cellphone in order to play the song over its tiny speakers. Frans instantly hears that the cymbals sound too shrill and that another part is too loud. Organist Pyke Pasman adds that he cannot hear enough of himself in the mix. Frans stops playing the song from the phone and returns to his computer. There, he fixes these mistakes with equalization and a little boost to the volume of the organ.

On their second listening via the cellphone, the band is satisfied. Frans still thinks that the balance between the voice and instrumental tracks is not equal in every part, but says that this will be fixed during mastering. Darius van Helfteren, the mastering engineer, masters songs in stages so that the voice is prominent throughout the song. For this reason, Frans "bounces" (i.e., makes a final mix of) the vocal tracks so that they can be treated separately from the rest. Frans also bounces a reference mix, so that Darius can understand how the voice and accompaniment should sound together. After mastering, "Mellow Guru" will be added to the Spotify library and eventually released as a vinyl single.

Mastering is the final stage a song passes through before it is publicly released. Hagenaars outsources mastering, because he believes that one needs distance from the music to produce a quality master (3 Mar.). Darius van Helfteren, who is Hagenaars' regular mastering engineer, makes the song ready for release. Each medium (vinyl, CD, etc.) has different technical standards, so Van Helfteren needs to master the track for each medium individually. According to Blue Grass Boogiemen's banjo player Van Strien, Hagenaars delivers high-quality mixes and therefore makes mastering easy. Hagenaars confirms that he indeed takes mastering into account, by focusing on the production of a superior mix.

While an integral part of the recording process, mastering does not take place at SSE-Noord. In this chapter, my analogy between the recording studio and the auditory sense therefore reaches its limit. The brain is an integral and important part of the auditory sense. By interpreting sounds that are established in the ear and brainstem, the brain is involved in learning and behaving in relation to perceived sound. I regard the studio as a place where a listening is established; however, further interpretation of this listening happens beyond the studio. In this chapter, I investigate how listening to a recording after it leaves SSE-Noord influences the processes at the studio. Here I address the two most evident influences: mastering and the consumer. I do this by examining two modes of listening: reduced listening and detached listening.

Reduced listening

During the stages of recording, Hagenaars thinks of mixing, and while mixing, he takes mastering into account. Each stage anticipates the next. In order to think about and prepare for the next stage, Hagenaars practices reduced listening, a concept theorized16 by French composer Pierre Schaeffer and grounded in the phenomenological methods of Husserl and Merleau-Ponty (Kane 17). In this section, I investigate how Hagenaars uses reduced listening to achieve a mix that is easily accessible for mastering.

16 Schaeffer notated his methods in diaries in relation to his own compositions. Brian Kane and Michel Chion, among others, interpreted the composer's methods. Chion was given the authority to explain Schaeffer's theory, by the composer himself (Schaeffer, Préface, par Pierre Schaeffer 9-11).

Entendre and Reduced Listening

Reduced listening involves perceiving sound without reference to its source or location, by suspending17 the external world from one's perceptions (Chion 18; 31). To achieve this, Schaeffer proposes "the acousmatic situation" where the sound source is both audible and invisible (Kane 24). For avant-gardist Schaeffer, the acousmatic situation was achieved through sound recording on tape or vinyl, because these media allow for recording and replaying sounds that have been produced elsewhere at another time. He discovered that sounds could be deprived of their causal relation with a spatiotemporal situation through conscious manipulation. According to Kane, reduced listening is aimed at perceiving sounds as such and not as a medium (25).

17 The suspension of the external world is fundamental for Husserl's phenomenological method. Schaeffer adopted this method and applied it to musical perception (Kane 23-24).

Schaeffer uses the word entendre to indicate the kind of perception that reduced listening is derived from. "Entendre" means both "understanding" and "hearing." Ironically, entendre is best explained by the quote: "the better I understand a language, the worse I hear it" (Schaeffer 93 qtd. in Kane 26). Entendre is not so much about what is said but rather how it is said. This recalls McLuhan's mantra that "the medium is the message," as it urges one to look not at the content but rather at the medium. According to McLuhan, it is the medium that determines human action; the medium's content can never be as effective as the medium itself (11). Obviously, Kane interprets the word "medium" differently from McLuhan. For Kane it is merely a referral, while for McLuhan mediality is an intrinsic quality of phenomena. In the end, both observe that focusing on the content of an utterance obscures other qualities of that utterance. McLuhan suggests investigating the mediality of cultural expressions, while Schaeffer (according to Kane and Chion) proposes that this can be achieved through reduced listening.

Because every sound comes into existence in a spatiotemporal situation, entendre is an ideal which reduced listening aspires towards. Sound is the product of a relation between a source and a reverberating surface, meaning that sound is inherently spatiotemporal. Sound never exists as such, and consequently can never be perceived as such. The actors that come closest to entendre are machines, because they do not perceive the causal relation of sound, but instead hear the specific qualities they are designed to detect. Human actors, by contrast, inevitably listen to both the content and the medium of sounds. Reduced listening is a practice for listening to a sound's medial quality while deprived of the spatiotemporal situation (or context) that the sound was created in.

Reduced Listening at SSE-Noord

Human actors practice reduced listening to an increasing degree over the course of a recording project. Each stage demands preparation for the next and this can be achieved by listening to the sound's mediality. Hagenaars first practices reduced listening while setting up the recording system. This is only done to a certain degree, because Hagenaars not only listens as a sound engineer; when he installs the microphones, he also performs the role of director. The sound engineer is the person who sets up a recording situation so that the sounds are most modifiable. Because sounds cannot be deprived of their spatiality, the process of choosing particular microphones for particular rooms is an aesthetic choice.

SSE-Noord features an exemplary acousmatic situation, because a floor separates the recording rooms from the control room. As the operator of the mixing console during recordings, Hagenaars is not able to see the musicians. This brings two advantages. First, as I have already mentioned, the acousmatic situation enhances reduced listening in the control room. Second, it promotes content-focused creation in the recording room. Mysterons' singer Van Schaik states that she is more comfortable singing in this situation because she cannot be seen (14 Jan.). She can instead concentrate on her vocal performance and the precise articulation of every tone, because she knows that the actors on the other side of this chain are attentive to the mediality of the sound. The floors of SSE-Noord separate musicians from mixer, and thus content from media. To a greater extent than in the first stage, Hagenaars can reduce his listening to the sound of the tracks, because the musicians are fully occupied with creating content.

At the final stage of mixing—and this is where the consideration of mastering returns—reduced listening is deployed more than ever. Hagenaars listens to the mix in different environments and on different media than those found in the control room at SSE-Noord. For this reason, he prepares a rough mix at the end of a recording session. With this, he and the musicians can listen at home or in the car, away from the spatiotemporal situation that the music was created in—to attune themselves to the sound in a different context. As I mentioned earlier, the final mix is ultimately tested on cellphone speakers. These reduced listenings have primarily technical implications, but in the recording studio they still function to make the artist's idea audible. This changes from the moment in which the final mix leaves the recording studio and enters the mastering studio.

The mastering engineer is not involved with the content, but only with the medium. The mastering engineer's task is to boost the volume of the mix and to make it ready for different media. Hagenaars explained that Van Helfteren works according to the "2011-norm," which came into existence with the rise of internet music streaming services such as Spotify. This norm dictates that the dynamic differences in the mix should be left intact, in reaction to the "loudness war": from the 1980s on, many mastering engineers believed that music stands out on the radio and consequently sells better when dynamic variety is diminished (i.e., the music sounds louder) (Vickers 1-2). Hagenaars claims that Spotify does not compete in the loudness war, because it plays everything at "loudness -14,"18 which makes hypercompressed music sound dull and flat. This has led to the development and adoption of the 2011-norm. In practice this means that Hagenaars can use dynamic differences to give his mixes depth. The 2011-norm shows that even the mastering engineer is involved in the aesthetic process, but only to a small extent.

The mastering engineer, among all of the actors, practices reduced listening the most. He is furthest removed from the recording while still involved with its audibility. Hagenaars makes three final mixes: referential, vocal, and instrumental, so that Van Helfteren can easily master the mix. Hagenaars mixes the tracks with the mastering process in mind. For example, Sonny Groeneveld, drummer for the Mysterons, was dissatisfied with the sound of his instrument when listening on speakers at home. Hagenaars assured him that this would be fixed during mastering, because Van Helfteren ensures that the music will sound right on each medium. In other words, Hagenaars reduced his listening, paying little attention to the audibility of the drum track, because he heard that the totality was ready for mastering. Therefore, reduced listening enabled Hagenaars to deliver a master-ready mix to the engineer, who, for his part, uses reduced listening to make final decisions in the mastering process. Reduced listening is thus involved with the technical audibility of a recording and performs a relation between mastering and recording.

18 For an extensive explanation of the term loudness, see Fastl and Zwicker 203-38.

Detached listening

In contrast with these precise technical preparations for the mastering engineer, Hagenaars' process of relating to the prospective consumer is a largely unconscious and ill-defined one. Those involved in the recording process frequently admit that it is hard to detach themselves from the process, but nevertheless, they regard detachment as very important. Detached listening resembles dissociative disorders, because both interrupt and disorder the actions of everyday life. Ruth Herbert, in her investigation of "everyday listening" and dissociative listening, notes that dissociation is theorized in psychopathology in relation to several disorders (91). I use dissociation as a metaphor while examining the practice of detached listening.

Hagenaars constantly and often unconsciously acts as a stand-in for the listener. Bakker considers this one of Hagenaars' most important abilities, claiming that without him, the musicians would endlessly continue forelistening, recording, and improving tracks (23 Feb.). Hagenaars, however, can hear when a recording is good enough. Both modes of listening are focused on results, but with an important difference. While forelistening is oriented towards the actual realization of the incomplete music, detached listening is concerned with the prospective reception of the completed music. Detached listening thus happens mainly at the end of a recording session. Hagenaars also practices and encourages detached listening through personal listening experiences conducted outside of the studio, where the shift in context allows the listener to consider the music's wider reception. When he practices detached listening, Hagenaars virtually displaces himself or, viewed as a kind of dissociation, acts according to a missing or skewed identity. Being estranged from his familiar place (and fixed role) within the work forces him to perceive it differently.

Hagenaars acknowledges that he is attempting to act as a bridge between the work in the studio and the listening of the prospective consumer (3 Mar.). He says he wants to produce music which conveys a similar impression to the music he enjoys in his leisure time. Listening to music without bias is difficult, because he is a specialist listener. He often hears the studio's influence in the recorded work and therefore cannot approach the music with an open mind. Hagenaars calls this an "occupational deformation" ("beroepsdeformatie"). In order to bypass this deformation, Hagenaars listens to the music of postwar avant-garde composers, because he cannot "hear" how this music was created. Listening to such recordings frees him to be attentive to the music, without hearing the distractions of the studio's role within it. Insights from such listening experiences inspire him in his own mission at SSE-Noord. Hagenaars depends on a split consciousness to listen to music as an audience might, but it can also be distracting, since he cannot "tune out" details when working in the studio.

Detached listening is a mode of human listening, because only humans can (aspire to) displace themselves from a current situation. Detached listening occurs when an actor in the recording process wonders how someone else will listen to their recording. Therefore, it enables the actors to detach from their immediate investment in the recording process and hear the music from a different and slightly estranged perspective. To do this, they may choose to stop working on this specific song or to stop working for this particular day. Eating lunch is an important stage at SSE-Noord, as it allows time to reset the ears. During a daytime recording, the actors can become absorbed by listening to the recording. The (silent) lunch is a break in this process, after which actors can listen to a recording in a completely different way. They are often more easily satisfied with the result because they have had time to detach from it. Detached listening allows actors to become aware that they can no longer hear the forest for the trees. Hagenaars is often the best equipped to do this. He prefers to spread the work for a song over several days, so that the musicians can more easily detach themselves from the creative process, while hearing gradual improvements in their own tracks.

To sum up, detached listening, like reduced listening, is aimed at finishing a recording for a prospective audience. Musicians get caught up in perfecting their own parts, while losing perspective on the end result. Detachment helps to stop this process. According to Hagenaars, detached listening is characteristic of the director, because she or he needs to keep the project's feasibility in mind (3 Mar.). In this sense, anyone who suggests that the actors should take a step back, take a break, or remind themselves of the totality of the music, acts as a director. Detached listening is most often used by Hagenaars, because he is already the furthest removed from the creation process. Besides, he personally prefers a little sloppiness and even welcomes small mistakes. The performative agency of detached and reduced listening emerges primarily towards the end of a recording process, as actors practice these modes to remind themselves of the music's trajectory beyond the recording studio. Actors perceive music differently through these modes of listening and potentially change their engagement with the music.


Conclusion

In this thesis, I examined the recording studio SSE-Noord as a listening space. I illuminated the performative agency of listening in the recording process and thereby emphasized the practice of specialized listening as opposed to a universal idea of human listening. This shift can be observed in the first chapter, as I researched which modes of listening have to be in place in order for recording to commence. Three modes of listening are practiced to achieve this: tuning, forward listening, and directed listening. The performativity of these three modes exists in the translation between the temporary manifestations of the musical idea preceding the studio work (i.e., live performances or demos) and the planning of the recording process. These modes pertain to a work in process rather than to a finalized piece of art. This perspective contests Gadamer’s concept of verstehen and Nancy and Szendy’s concept of ascoltando, because it rejects the common assumption that listening is a unidirectional activity for understanding the essential meaning of a particular sound. Complete understanding is an ideal, because communication always involves some degree of noise and disturbance. The ideal comes from the artistic idea, from the music which the artist has in mind and wants to materialize in the studio.

In the second chapter, I examined what modes of listening establish the studio as a recording space. Two sets of actors, rooms and microphones, perform reverberation and registration of sound and thereby establish the studio as a space to record sound. Although this view resembles the common conceptualization of the studio, I also showed that the actors practice performative listening. Microphones have a characteristic construction that affects sound. I showed that this change can either take the form of assistance or resistance, and thereby suggested that the seemingly contradictory theories of McLuhan and Serres can be combined. When assisting (McLuhan), a microphone makes sound recordable by translating it from air pressure into electrical current. When resisting (Serres), a microphone—due to its particular composition and properties—listens to specific aspects of a sound, thus creating disturbances which are an obstacle to the ideal. Whether offering assistance or resistance, rooms and microphones change the manifestation of the artist's musical idea, thereby altering the artist's stance towards the music. The studio is not a place of documentation, but performs a listening that transforms sound and its producers.

In the third chapter, I continued researching non-human listening. I showed that not all of the devices found in a studio can be viewed as listeners, because the act of listening is always modeled upon the capabilities of the ear. Imitative devices (i.e., reverb and delay) refuse to comply with the laws of listening, because they do not aim at understanding information, but only at affecting it. Reflexive devices (i.e., compressors and equalizers), by contrast, are listeners because they perform with slight inaccuracy in their registration and processing of sound. This shows that listening always aspires towards an ideal and undisturbed transfer of information that is nevertheless always somewhat disturbed. These devices perform an agency that could only take place between microphone and mixer, because they relate to the characteristics of the microphones and enhance audibility before entering the mixer.

The fourth chapter detailed the performative agency of mixing: a mode of listening made possible by the mixing console. Mixing is listening to the characteristics of the instruments, musical content, rooms, microphones, and analogue devices of each track together. The arrangement of tracks that is closest to the actors' idea is preserved as the ultimate listening. The mixing console gives meaning to the recording process, because the particularities of separate tracks are listened to in perspective and in relation to other tracks. Mixing makes the studio a specialist listening space, because it allows the actors to cooperatively convey their shared listening rather than their individual experiences. This addresses a gap that Szendy left open: while he focused only on specialist individuals, I have examined how the studio performs a specialist assemblage of listening. Having been mixed, this listening leaves the studio to be interpreted by others.

In the last chapter, I investigated how these others (the prospective audience and the mastering engineer) influence the practices at SSE-Noord. The performative agency of reduced and detached listening manifests primarily towards the end of a recording process. Actors practice these modes to remind themselves of the music's trajectory after the recording studio (from mastering to consumption) and to finally end the recording process at SSE-Noord. I revealed that Schaeffer's conceptualization of reduced listening may be more practical than he envisioned, because Hagenaars practices various degrees of reduced listening along with other modes of listening. In addition, because of its separate floors, the acousmatic situation at SSE-Noord enhances the performance of the sound source. This was unmentioned by Schaeffer. Reduced and detached listening change engagement with the music, because these techniques aspire to perform the listening of an uninvolved person, thereby changing the perception of sound.

This thesis contributes to a reconsideration of the word "song" (and its cognate "composition"). Song, because it is a singular noun, supposes an unchangeable essence. The title of a song refers to a set of musical parameters, which are its content. Because most people perceive no significant difference between the live performance of a set of musical parameters and its studio recording, both are assigned the same name. In other words, song imposes a singular identity on work that is, by its very nature, transient and unique in every performance and in every listening. I do not claim that there is no commonality between a live performance and a studio recording, but I want to stress that thinking in terms of songs and compositions obscures the specificities of the environments where music is produced and heard. This results in ignorance of these specialized environments; it also might explain falling record sales and the general devaluation of music in the marketplace. This too may need further research. If one appreciates the specialized environment where music is recorded, one initiates a shift of emphasis from song to listening. While the former is a noun, the latter is a gerund (a verb form that functions as a noun). As such, it both identifies a thing and describes an activity. The word recording has the same quality, but this pertains only to recorded music, while listening can refer to every subjective listening experience. The word listening grants performative agency to the listener. It thereby contests the single and essentialized identity of the song, exposing it as just another disturbed instance of an ideal.

Sounds only “sound” when they are listened to, so one can adopt a mode of listening in order to change one’s engagement with the music. This engagement changes the sounds because every mode of listening translates or interprets airwaves differently. Undisturbed reception is an unattainable ideal. This multitude of listening modes shows the infinite possibilities of listening to sound. My thesis encourages readers to listen to a piece of music many times in different ways and to construct a listening out of these separate listening experiences. By performing a listening of your own, you subjectively appropriate a piece of music, experiencing it in a particular way.

Finally, this thesis contributes to the growing body of literature on listening.19 I have expanded this concept to consider the listening of non-human objects, since they too are actors with listening agency. I showed that assigning this capability to a machine relies on anthropomorphism, because one describes listening by analogizing to the functions of the human ear. In addition, I shifted the conceptualization of listening from a universal activity to a specialized one. I have shown that listening is not universal, but changes from actor to actor according to their frames of reference.

By mobilizing these concepts to develop a new perspective, this thesis has provided a detailed investigation into the operations of Studio Sound Enterprise, Frans Hagenaars, and several musicians during the winter of 2015 and 2016 in their various capacities as performative listeners.

19 The most important of these are the works listed in the introduction.

Works cited

Academy AV. "Audio Theory - Delay." 23 April 2015. YouTube. Web. 15 May 2016.

Bakker, Ben and Reyer Zwart. Personal interview. 23 February 2016.

Barad, Karen. "Posthumanist Performativity: Toward an Understanding of How Matter Comes to Matter." Signs: Journal of Women in Culture and Society 28.3 (2003): 801-831. Print.

Békésy, Georg von. "Concerning the Pleasures of Observing, and the Mechanics of the Inner Ear." Nobel Lecture, December 11, 1961 (1961): 722-746. Web. 25 April 2016.

Ben. "The Difference Between Reverb and Echo." 19 May 2008. The Makeshift Musician. Web. 15 May 2016.

Blue Grass Boogiemen and Erik Kriek. In The Pines. 2016. CD.

Carlyle, Angus and Cathy Lane, eds. On Listening. London: Uniformbooks, 2013. Print.

Chion, Michel. Guide des Objets Sonores: Pierre Schaeffer et la Recherche Musicale. Paris: Éditions Buchet/Chastel, 1983. Print.

crazyfiltertweaker. "What is the difference between an Echo and a Delay?" 8 November 2012. KVR Audio. Web. 15 May 2016.

"Entendre." n.d. Van Dale. Web.

Fastl, Hugo and Eberhard Zwicker. Psychoacoustics: Facts and Models. Berlin: Springer-Verlag, 2007. Print.

Gadamer, Hans-Georg. Waarheid en Methode: Hoofdlijnen van een Filosofische Hermeneutiek. Trans. Mark Wildschut. Nijmegen: Uitgeverij Vantilt, 2014. Print.

Garner, Kelly. Vocal Recording Techniques for the Modern Digital Studio. Diss. University of Miami, 2014. Web. 20 April 2016.

Hagenaars, Frans. Personal interview. 5 October 2015.

—. Personal interview. 6 January 2016.

—. Personal interview. 3 March 2016.

Herbert, Ruth. Everyday Music Listening: Absorption, Dissociation and Trancing. Burlington: Ashgate Publishing Company, 2011. Print.

Howlett, Mike. "Fixing the Volatile: Studio Vocal Performance Techniques." The 3rd Art of Record Production Conference (2007): 1-3. Print.

Hunter, Dave. "Effects Explained: Echo, Delay, and Reverb." 7 October 2008. Gibson. Web. 15 May 2016.

Kane, Brian. Sound Unseen: Acousmatic Sound in Theory and Practice. New York: Oxford University Press, 2014. Print.

Lysloff, René and Leslie Gay, eds. Music and Technoculture. Middletown: Wesleyan University Press, 2003. Print.

McLuhan, Marshall. Understanding Media: The Extensions of Man. New York: Gingko Press, 2013. Print.

Serres, Michel. The Parasite. Trans. Lawrence R. Schehr. London: The Johns Hopkins University Press, 1982. Print.

Mysterons, The. "Mellow Guru." 2016. Spotify.

Nancy, Jean-Luc. "Foreword: Ascoltando." Szendy, Peter. Listen: A History of Our Ears. Trans. Charlotte Mandell. New York: Fordham University Press, 2008. ix-xiv. Print.

—. Listening. Trans. Charlotte Mandell. New York: Fordham University Press, 2007. Print.

Pusz, Max and Philip Littlefield. "Physiology of Cochlear Nerve." Encyclopedia of Otolaryngology, Head and Neck Surgery. Ed. Stilianos Kountakis. London: Springer, 2013. 2159-2163. Print.

Schaeffer, Pierre. In Search of a Concrete Music. Trans. Christine North and John Dack. Berkeley: University of California Press, 2012. Print.

Schaeffer, Pierre. "Préface, par Pierre Schaeffer." Chion, Michel. Guide des Objets Sonores: Pierre Schaeffer er la Recherche Musicale. Paris: Éditions Buchet/Chastel, 1983. 9-11. Print.

Schaik, Josephine van and Sonny Groeneveld. Personal interview. 14 January 2016.

Sterne, Jonathan. MP3: The Meaning of a Format. Durham and London: Duke University Press, 2012. Print.

Stuart, Caleb. "The Object of Performance: Aural Performativity in Contemporary Laptop Music." Contemporary Music Review 22.4 (2003): 59-65. Print.

Szendy, Peter. Listen: A History of Our Ears. Trans. Charlotte Mandell. New York: Fordham University Press, 2008. Print.

Tuuri, Kai and Tuomas Eerola. "Formulating a Revised Taxonomy for Modes of Listening." Journal of New Music Research 41.2 (2012): 137-152. Print.

"Verstehen." n.d. Van Dale. Web.

Vickers, Earl. "The Loudness War: Background, Speculation and Recommendation." 4-7 November 2010. sfxmachine. Web. 10 May 2016.

Voegelin, Salomé. Listening to Noise and Silence: Towards a Philosophy of Sound Art. New York: The Continuum International Publishing Group Inc, 2010. Print.

Widmaier, Eric, Hershel Raff, and Kevin Strang. Vander's Human Physiology: The Mechanisms of Body Function. New York: McGraw-Hill, 2006. Print.

Wolters, Erik and Henk Groenewegen. Neurologie: Structuur, Functie en Dysfunctie van het Zenuwstelsel. Houten: Bohn Stafleu Van Loghum, 2004. Print.

Wolvin, Andrew, ed. Listening and Human Communication in the 21st Century. Oxford: Blackwell Publishing Ltd., 2010. Print.