FLORIDA STATE UNIVERSITY

COLLEGE OF MUSIC

EXPERIENCING SOUND: A HYBRID APPROACH TO

ELECTRONIC MUSIC ANALYSIS

By

ANDREW SELLE

A Dissertation submitted to the College of Music in partial fulfillment of the requirements for the degree of Doctor of Philosophy

2018

Andrew Selle defended this dissertation on February 27, 2018. The members of the supervisory committee were:

Evan A. Jones, Professor Directing Dissertation

Denise Von Glahn, University Representative

Clifton Callender, Committee Member

Mark Richards, Committee Member

The Graduate School has verified and approved the above-named committee members, and certifies that the dissertation has been approved in accordance with university requirements.


To my mother, for always supporting and believing in me, and to Elainie, for teaching me how to listen for the joy of it.

TABLE OF CONTENTS

List of Figures

Abstract

1. INTRODUCTION

2. METHODOLOGICAL CONCERNS

3. SEGMENTATION AND FORMAL DESIGN

4. BEHAVIOR OF SOUND OBJECTS AND FORMAL UNITS

5. ELECTRONICS AND LIVE INSTRUMENTS

6. EPILOGUE

Bibliography

Biographical Sketch

LIST OF FIGURES

3.1 Example of Parametric Intensity Graph and 15-5 Value-Change Graph

3.2 Risset: Sud – Parametric Intensity Graph and 15-5 VC Graph for Overall Noise

3.3 Risset: Sud – 10-5, 15-5, 20-5, and 30-5 VC Graphs for Overall Noise

3.4 Risset: Sud – Parametric Intensity Graph and 15-5 VC Graph for Harmonic Density

3.5 Risset: Sud – Parametric Intensity Graph and 15-5 VC Graph for Tessitura

3.6 Risset: Sud – Parametric Intensity Graph and 15-5 VC Graph for Brightness

3.7 Risset: Sud – 15-5 VC Composite

3.8 Chowning: Stria – Parametric Intensity Graph and 15-5 VC Graph for Brightness

3.9 Chowning: Stria – Parametric Intensity Graph and 15-5 VC Graph for Dynamic Level

3.10 Chowning: Stria – Parametric Intensity Graph and 15-5 VC Graph for Harmonic Density

3.11 Chowning: Stria – Parametric Intensity Graph and 15-5 VC Graph for Onset Density

3.12 Chowning: Stria – 15-5 VC Composite

3.13 Ligeti: Artikulation – Parametric Intensity Graph and 15-5 VC Graph for Onset Density

3.14 Ligeti: Artikulation – Parametric Intensity Graph and 15-5 VC Graph for Stereo Field Utilization

3.15 Ligeti: Artikulation – Parametric Intensity Graph and 15-5 VC Graph for Glitchiness

3.16 Ligeti: Artikulation – Parametric Intensity Graph and 15-5 VC Graph for Perceived Aural Distance

3.17 Ligeti: Artikulation – 15-5 VC Composite

3.18 Lillios: Threads – Parametric Intensity Graph and 15-5 VC Graph for Tessitura

3.19 Lillios: Threads – Parametric Intensity Graph and 15-5 VC Graph for Harmonic Density

3.20 Lillios: Threads – Parametric Intensity Graph and 15-5 VC Graph for Pitched vs. Unpitched Sounds

3.21 Lillios: Threads – Parametric Intensity Graph and 15-5 VC Graph for Dynamic Level

3.22 Lillios: Threads – 15-5 VC Composite

4.1 Interaction between Noise and Objects in Sud

4.2 Surrogacy of Plucking Gesture in Lösgöra

4.3 Textural Motion from Smalley

4.4 Textural Motion in Stria (A)

4.5 Textural Motion in Stria (Whole Work)

4.6 Motion and Growth Processes from Smalley

4.7 Levels of Motion in Threads (Introduction)

4.8 Source Bonding Link in Mobilis in Mobili (Transition)

4.9 Listening Space Variances from Smalley

4.10 Re-conception of Smalley’s Spaces

4.11 Space in Vocalism Ai (Ending)

5.1 Synchronisms No. 6, Parametric Intensity Graph and 15-5 VC Graph for Spectral Unity

5.2 Synchronisms No. 6, Parametric Intensity Graph and 15-5 VC Graph for Texture Streaming

5.3 Synchronisms No. 6, Parametric Intensity Graph and 15-5 VC Graph for Electronic Salience

5.4 Synchronisms No. 6, Parametric Intensity Graph and 15-5 VC Graph for Resonance

5.5 Synchronisms No. 6, Composite 15-5 VC Graph

5.6 Saariaho – Près, mvt. 2, Visualization of Texture Paradigm

5.7 Saariaho – Près, mvt. 2, Transfer of Motion Types in Closing Section

ABSTRACT

This dissertation addresses one key question throughout: “How does the experience of hearing a piece of music inform the ways in which we understand its formal structure and syntax?” Because electronic music is typically solely an aural experience (often lacking any sort of appreciable musical score), it is especially suited to this investigation. In analyzing electronic music, traditional methodologies that rely on or depart from a musical score are often ineffective.

Furthermore, the seemingly drastic aesthetic difference between electronic and acoustic music might lead one to infer that it is impossible to perceive musical structure in the first place. I argue, however, that by focusing on the listening experience itself as a tool for analysis, meaningful musical structures emerge.

In Chapter 1, I outline related work that has been done in both theoretical and analytical domains. I discuss many threads that have proven to be important in this dissertation, including theories of phenomenology and sonic definition (especially Smalley’s theory of spectromorphology). Chapter 2 furthers this discussion, focusing solely on Pierre Schaeffer’s theories of listening modes and musical objects, as well as defining the analytical methodologies that I use throughout the project.

Chapter 3 revolves around the perceptual segmentation of works of electronic music into smaller syntactical units. I achieve this through a process I call “parametric analysis.” This involves focusing on one’s own listening in order to determine which sonic parameters (such as dynamic, density, texture, or timbre) are important and relevant to the musical experience and then tracking the development of these parameters over the course of the piece. I measure the intensity of each parameter at any given time (on a scale of 1-5) and also the total amount of change that a parameter has undergone over a given span of time. By examining these two elements, we are able to create an effective model of each individual’s listening experience and examine how formal structures might emerge from it. I discuss four complete works in this chapter: Artikulation (1958) by György Ligeti, Sud (1985) by Jean-Claude Risset, Stria (1977) by John Chowning, and Threads (1998) by Elainie Lillios.
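As a concrete illustration of how such a model might be computed, here is a minimal sketch, assuming an interpretation in which a label like “15-5” denotes a 15-second window advanced in 5-second steps over intensities sampled as (time, value) pairs on the 1-5 scale. The function name and the windowing interpretation are mine, offered only as an illustration rather than as the procedure actually used in the analyses (code sketches here are given in Python).

    # A hypothetical value-change (VC) computation; "15-5" is read as a
    # 15-second window advanced in 5-second steps (an assumption).
    def value_change(samples, window=15.0, step=5.0):
        """samples: (time_in_seconds, intensity) pairs on the 1-5 scale, sorted by time.
        Returns (window_start, total_change) pairs, one per window position."""
        if not samples:
            return []
        end_time = samples[-1][0]
        results, start = [], 0.0
        while start <= end_time:
            # Intensity values whose sample times fall inside this window.
            inside = [v for t, v in samples if start <= t < start + window]
            # Total change: sum of absolute differences between successive values.
            change = sum(abs(b - a) for a, b in zip(inside, inside[1:]))
            results.append((start, change))
            start += step
        return results

    # A parameter that rises, plateaus, then falls:
    print(value_change([(0, 1), (5, 2), (10, 4), (15, 4), (20, 4), (25, 3), (30, 1)]))

A composite graph would then presumably overlay or combine such a series for each parameter tracked.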

Chapter 4 focuses on small-scale formal function present within the segments identified through parametric analysis. In other words, it examines those musical elements that make a given section function as an introduction or a transition, for example. I utilize Denis Smalley’s theory of spectromorphology, which is a way to describe the sonic qualities of a sound and the ways in which they are transformed over time. Segments from six works are examined, including the Risset, Chowning, and Lillios pieces from Chapter 3 as well as Lösgöra (2016) by Jon Fielder, Mobilis in Mobili (2006) by Natasha Barrett, and Vocalism Ai (1956) by Toru Takemitsu. Through this examination, I reveal that formal function in electronic music can be actively experienced through the deployment of musical objects rather than passively inferred as a result of temporal location within a work.

Chapter 5 extends the techniques developed in Chapters 3 and 4 into music that combines acoustic instruments with electronics, examining Mario Davidovsky’s Synchronisms No. 6 (1970) for piano and electronics and Kaija Saariaho’s Près (1992) for cello and live electronics.

Though each of these works reintroduces a notated score into the analytical process, my examination reveals that the phenomenologically-based procedures developed in previous chapters work equally well for music that has notation. Because these techniques are fundamentally based on the experience of hearing sound, I argue that they can be effectively utilized as a part of analyzing any genre of music, whether it is acoustic or electronic, tonal or non-tonal. Furthermore, by extending these techniques into common-practice music, I believe that we might find additional phenomenological markers of traditional musical structures present in the experience of listening.

Finally, Chapter 6 provides a short epilogue detailing those genres which I was unable to fit into this project that warrant further examination through a phenomenological lens, including drone, ambient, dance, and acoustic music. Though these genres are typically more familiar to many listeners in that they contain more markers of traditionally “musical” sounds (beat, melody, etc.), I argue that a phenomenological investigation may reveal additional valuable information. Ultimately, each individual listening experience is unique and interesting unto itself, and it is worth comparing our listening experiences with one another and engaging in meaningful, critical discourse.

CHAPTER 1

INTRODUCTION

Defining the Field

The first step in the process of proposing any sort of new analytical methodology or thought paradigm in music is to define that repertoire which is being examined. Though the term “electronic music” might seem self-explanatory, reconciling its face-value meaning with our individual, intrinsic understandings of what it musically constitutes often proves challenging.

One might begin with a definition like Andrew Hugill’s: “the term electronic music refers specifically to music made using electronic devices and, by extension, to certain mechanical devices powered by electricity…”1 Joanna Demers’s definition is even more straightforward: “Electronic music is any type of music that makes primary, if not exclusive, use of electronic instruments or equipment.”2 A brief consideration of these definitions reveals two problems, however. The first of these problems is simply that the definitions are far too broad; Hugill’s and Demers’s definitions, taken at face value, could include everything from musique concrète to heavy metal to Futurist noise and everything in between. This is primarily due to the second problem, that these definitions are based entirely in the physical production of sound rather than the resulting musical aesthetic. Though all electronic music necessarily involves the use of electronics in producing sound (thus excluding entirely acoustic forms of music making), we might distinguish between “electronic music” and “music produced electronically.” For example, a Beethoven symphony played on amplified string instruments would likely fit both of the above definitions, but surely it does not sit within the aesthetic scope of electronic music.

1 Andrew Hugill, “The Origins of Electronic Music,” in The Cambridge Companion to Electronic Music, ed. Julio d’Escrivan Rincón and Nick Collins, Cambridge Companions to Music (Cambridge: Cambridge University Press, 2007), 7.
2 Joanna Demers, Listening through the Noise: The Aesthetics of Experimental Electronic Music (New York: Oxford University Press, 2010), 5.

Rather, it is produced via electronic means. Stockhausen attempts to integrate aesthetic criteria in his statement that “The term Electronic Music which we used since 1953 was only related to art music; not to pop music produced with electroacoustic equipment.”3 Though there are clearly issues that might be raised in defining the boundary between “art” and “pop” music in the first place, Stockhausen seemingly attempts to base his definition in both physics and aesthetics. While his definition appears to exclude certain genres that some might consider to be squarely under the purview of “electronic music” (most notably Electronic Dance Music, or “EDM”), Stockhausen at least attempts to narrow down the concept of “electronic music,” even if that narrowing is a result of subjective interpretation.

One might ask if defining the term is necessary in the first place. Tony Myatt argues the exact opposite.

All these classifications for new approaches to music are problematic in some respect; many are misrepresentations or, in fact, meaningless terms, often revealing more about authors’ intentions than their subject. Not surprisingly, many of the leading figures working outside our academic tradition reject the notion of single, unified genres, so why should academic debate seek to impose this?4

Though one might ask whether academics are not seeking to “impose” classification of the genre so much as they are seeking a common definition for the purposes of effective discourse, Myatt’s point is well taken. Indeed, the notion of defining a “single, unified genre” may well be a fool’s errand, especially considering that “the work of many artists in this field has little in common, except for similarities in their combination of artistic, economic and cultural practice.”5

3 Julio d’Escrivan Rincón and Nick Collins, eds., The Cambridge Companion to Electronic Music, Cambridge Companions to Music (Cambridge: Cambridge University Press, 2007), 198.
4 Tony Myatt, “New Aesthetics and Practice in Experimental Electronic Music,” Organised Sound 13, no. 1 (April 2008): 1.
5 Myatt, 1.

Before continuing, I will address two issues of terminology, the first of which was revealed in the above quote from Myatt. Aside from defining the genre in terms of what music it does and does not include, there is also the issue of what to call it in the first place. Though experts in the field might distinguish terms such as “electronic music,” “electroacoustic music,” “computer music,” etc. as unique and distinct, I will generally use the term “electronic music” throughout this dissertation as an all-encompassing term for the sake of consistency as well as for the breadth of its terminological umbrella, as it is not my goal (nor my desire) to define a hierarchy of applicable terminology.

The second terminological issue is the use of the word “music” itself. One might pose a question as to whether or not much of the genre of electronic music is truly “music” in the traditional sense. The debate generally results in one side arguing that music might be considered any sound listened to with intent (thus it is “music”), regardless of aesthetic. Though a wide variety of electronic music is easily identifiable as “music” in the traditional sense, a lot of experimental electronic music scarcely resembles it. This first position is primarily concerned with the intent of the listener rather than aesthetics. The other side, however, might argue that electronic music generally does not have enough aesthetic similarity to traditional forms of music (often a lack of melody, harmonic syntax, etc.), and thus the term “music” is inherently problematic. In other words, it is the corollary of the first position, primarily concerned with musical aesthetic. Of course, what we do and do not admit as music may have more to do with cultural framing than anything else. As Michel Chion notes, “Music based on sounds that do not have the proper form in the traditional sense but other textural qualities is obviously possible and is even widely practiced. It is easy enough to do as long as other means – in particular certain formalities, the care with which it is presented to the public, in a concert hall – create the frame that affirms it as such.”6 Through this lens, what is allowable under the terminological umbrella of “music” has little to do with sonic material. Though I will freely admit that there is an amount of terminological baggage associated with the word “music,” I will nonetheless use it throughout the remainder of this text when referring to the musical traditions and works being examined. My insistence in using the term “music” does not stem from an ideological position about the efficacy or viability of this genre, but rather from a desire to be as succinct and consistent as possible while avoiding unnecessarily difficult or unwieldy alternative terminology such as “temporal sound art” or the like.

To return to the issue of defining “electronic music,” then, I will take Myatt’s position that the field is simply too diverse to define the genres and subgenres along anything except the widest and blurriest of boundaries. However, for the purposes of this project, I will be exclusively examining electronic music which might be termed “acousmatic” in nature (save for a few case studies in the final chapter). The term “acousmatic” is derived from the Greek akousmatikoi, who were disciples of Pythagoras that listened to his lectures from behind a curtain, concealing his image while preserving his voice.7 The concept of the acousmatic and the acousmatic situation will play a hugely important role in both my analytical and methodological propositions throughout the text, and will thus have a much fuller and more nuanced discussion in subsequent sections. For now, it is enough to say that acousmatic music is music wherein the listener hears sound while the perceived cause or origin of that sound is obscured from view.8

6 Michel Chion, Sound: An Acoulogical Treatise, trans. James A. Steintrager (Durham: Duke University Press, 2016), 67–68.
7 Brian Kane, Sound Unseen: Acousmatic Sound in Theory and Practice (New York, NY: Oxford University Press, 2014), 24.

While many specialists in the field of electronic music typically equate acousmatic music to terms like “fixed media” or “tape music,” they are not necessarily the same. However, given the obvious and enormous overlap in repertoire among these terms, most of the works analyzed will fall into all of these categories, and for the sake of the reader’s convenience I will focus on those works that can be played back and listened to through multiple identical iterations.

Aesthetics of Electronic Music

On one hand, some composers and theorists have expressed the belief that the aesthetics and traditions of electronic music are, at a fundamental level, different from the Euro-centric Western art music canon. For example, at the outset of her book, Joanna Demers unambiguously states, “Let me begin with an obvious statement: electronic music sounds and behaves differently from nonelectronic music.”9 Her central argument separating these traditions revolves around the treatment of sound itself, arguing that the distinguishing factor “is a concern with the meaningfulness of sound. To an extent unrivaled in all previous forms of music, recent electronic music is obsessed with the question of whether sound, in itself, bears meaning.”10 Demers similarly agrees with Denis Smalley’s assessment regarding the materials of electronic music, that “Electroacoustic music, through its extensive sounding repertory drawn from the entire sound-field, reveals the richness and depth of indicative [sound as message/information] relationships more clearly and comprehensively than is possible with other musics.”11 Smalley does not, however, reach the point of suggesting that electronic music is fundamentally different than European art music.

8 Michel Chion, Guide to Sound Objects: Pierre Schaeffer and Musical Research, trans. John Dack and Christine North (Paris: Buchet/Chastel, 2009), 11.
9 Demers, Listening through the Noise, 21.
10 Demers, 13.

Not all who study, compose, and enjoy electronic music agree with Demers’s sentiment. Barry Truax, for instance, argues that electronic music, “however it may be defined today, has continued the Western, i.e. European, tradition of music as an abstract art form, i.e. an art form where the materials are designed and structured mainly through their internal relationships.”12 Furthermore, he argues that not only is the treatment of and relationship among musical material the same between the traditions, but they are also similar in that “electroacoustic music practice tends to integrate sound and structure through its emphasis on sound design as integral to musical structure.”13 Thus, Truax is drawing a parallel between European art music and electronic music through a perceived similar use of syntactical and functional units. Indeed, if electronic music does place such an emphasis on structure and syntax (we can grant that European art music does as well), this is a convincing similarity.

Of course, there are specific elements that one can point to that are retained from traditional art musics in electronic music. In identifying various “Something to Hold Onto Factors,” Leigh Landy and Rob Weale suggest a number of overlapping musical parameters present in both forms of music, such as rhythm, pitch, timbre, dynamic, etc.14 However, one should also be careful not to conflate the presence of similar structures with these structures having the same function. For example, it is clearly possible that pitch might play a critical role in one’s understanding of a piece of electronic music, but it is unlikely that it will be treated the same way analytically as one might when analyzing traditional Western art music. If there is an aesthetic separation between electronic and acoustic music, even if it is slight, we must be careful to examine the function of these shared musical elements in this different aesthetic field.

11 Denis Smalley, “The Listening Imagination: Listening in the Electroacoustic Era,” Contemporary Music Review 13, no. 2 (January 1996): 83.
12 Barry Truax, “The Aesthetics of Computer Music: A Questionable Concept Reconsidered,” Organised Sound 5, no. 3 (2000): 119.
13 Truax, 119.
14 Leigh Landy, “The ‘Something to Hold on to Factor’ in Timbral Composition,” Contemporary Music Review 10, no. 2 (1994): 49–60; Robert Weale, “Discovering How Accessible Electroacoustic Music Can Be: The Intention/Reception Project,” Organised Sound 11, no. 2 (August 2006): 189–200.

The sounding aesthetic of a piece of music can be greatly affected by the listening conditions, both the physical surroundings as well as the cultural conditions surrounding the musical tradition. As Truax points out, “All electroacoustic sound, unless performed live on some electronic instrument, is disembodied – it comes from a hidden virtual source. As such, it creates an immediate link with the imagination, memory, fantasy, the world of archetypes and symbols, essentially the internal world of human consciousness.”15 This link with the imaginary world of the mind inevitably leads to oft-mentioned phrases about the field of electronic music, such as “Electroacoustic music opens access to all sounds, a bewildering sonic array ranging from the real to the surreal and beyond. For listeners, the traditional links with physical sound-making are frequently ruptured.”16 While this is indeed true, it is important to note that Smalley attributes this in part to the disruption of the relationship between the physical source of the sound and the listener – the acousmatic condition. Regardless of whether or not electronic music is fundamentally different from traditional European art music, it is clear that the listening conditions are. Thus, an effective and critical analytical methodology should depart from and be derived from the acousmatic condition, paying special attention to the effects of the listening situation on the listener’s understanding of the work. This will be the premise and focus of the following chapter.

15 Truax, “The Aesthetics of Computer Music,” 120.
16 Denis Smalley, “Spectromorphology: Explaining Sound Shapes,” Organised Sound 2, no. 2 (1997): 107.

Listening to Sound

The core analytical premise of this dissertation is that the most effective and critical way to analyze electronic music is by approaching it through an understanding of the experience of listening to sound. Though I have previously justified using the term “music,” the argument that electronic music is something fundamentally different (at least from an analytical perspective) is well founded. Demers points this out succinctly and effectively above. While elements from traditional methods of music making are undoubtedly preserved in the creation and performance of electronic music (such as rhythm, timbre, pitch, etc.), the functions that these elements serve are not necessarily retained across the aesthetic divide. If, indeed, electronic music behaves and is experienced differently, it logically follows that the analytical methodology should change.

An effective methodology for the analysis of electronic music, therefore, should pay special attention to the sonic characteristics of the piece being listened to. Because of the nature of sound, the phenomenological experience of listening is another important pillar in my analytical conceptions; a sound needs a listener to exist. Until a listener is present to interpret the physical nature of a sound wave, it is purely a theoretical construct.17 Thus, the role of experiential listening is central to my analytical conception; the role of phenomenology in musical analysis is discussed further below.

17 Consider, for example, the hearing abilities of humans and dogs. Under optimal physical circumstances, humans have a hearing range of about 22-20,500 Hz. Dogs, however, can hear much higher than that, which is why they react to dog whistles and other sounds outside of our range of hearing. Though both humans and dogs are subjected to the physical condition (the air pressure created by an airwave above 20,500 Hz), only the nervous system of the dog is able to interpret it. Thus, the dog experiences a sound, while the human experiences nothing, emphasizing the importance of the listener as the interpreter of sound.

This is not to say that more “objective” methodological processes have no place in my analyses; in fact, I would argue that they are both equally central. I will seek to find a balance between analytical methods that exist on a type of subjective-objective spectrum, where on one side the listener and the process of listening has been completely removed from all analytical/methodological consideration (objective), and on the other the role of phenomenological sonic experience has been so centralized that the resulting analysis is subjective to the point that it is incapable of engaging in critical discourse.

If my analytical methods are to be based in sonic phenomenological experience, we should examine some of the basic ideological concepts and analytical conundrums involved in the process of listening to electronic music based on its sonic parameters and qualities. Perhaps the most glaringly obvious issue when attempting an analysis of something that is fundamentally sonic in nature is that it resists the ability to be notated or scored in any reliable fashion. As musicians, we are heavily trained in the art of score reading and analysis, and the vast majority of analytical techniques developed for traditional Western art music are heavily aided by, if not reliant upon an accurately notated score. This notation offers up numerous possibilities that stand in stark contrast to the opportunities provided only by audible sound.

For example, consider the issue of what we might refer to as “sonic impermanence.” If I am looking at a score, and I see a note with a pitch, duration, metric location, etc., I can return to that precise moment in the score outside of the temporal space of the musical work. However, listening to sound is not the same; I can return to a sound but only within its temporal structure.

There is no way to experience a sound outside of its temporal window. As Chion notes, “one can say that a portion of that which constitutes our sonic or musical perceptions pertain to an ‘extratemporal’ structure. What this means is that while inevitably unfolding in time, they occupy time in a free or elastic fashion.”18 In this view, a sonic perception “requires time to be enunciated, but occupies time as if it were space.”19 Thus, while musical objects in a score have permanence in the sense that they are notated and can be comprehended from this notation, sonic objects do not behave in the same way. Though many theorists and composers have attempted to derive notational schemes for sound (discussed below), they often do not recognize the temporal demands of the sonic experience. “Every passing sound is marked with hallucination, because it leaves no traces, and every sound can resound for all eternity in the present perfect of listening.”20

18 Chion, Sound, 34.
19 Chion, 34.
20 Chion, 30.
21 Chion, 34.
22 One might point to spectrograms as a possibility for capturing a single moment of sound as analogous to a single frame of video. However, a single frame and the video that it comes from both retain the visual mode of interpretation, while a spectrogram shifts auditory perception to visual perception, and these processes are inherently different. Certainly we do not experience a spectral visualization of a sound in the same way we experience the auditory phenomenon from which the spectrogram is created.

Another key issue when listening to sound, related to the concept of sonic impermanence, is that sounds cannot be stopped in time. Consider the following analogy by Chion: “The capacity for re-listening or what might be called multi-audition that fixed sounds allow is not like filming the countryside and then being able to watch it over in slow motion. Rather, what it allows is taking the same trip over and over at will, but with very precise constraints…I use this convoluted analogy in order to remind us that you cannot stop on a sound.”21 This analogy truly gets at the heart of what makes sound so analytically elusive. If we were to slow down or stop Chion’s film of the countryside, what we would see is that specific moment frozen in time – one discrete frame. Sound, however, does not work this way.22 We certainly could, if we wanted, slow down the playback speed. However, in doing so we distort the original sound so that what we end up with is a fundamentally different sonic experience, nothing analogous to the stopped frame of Chion’s film. Of course, one might imagine that a single sample of a sound is roughly equivalent to a single frame of a film, and thus could be useful for examining a sound outside of its temporal space. Indeed there are similarities between sound samples and visual frames, but consider the differences in perception between visual and sonic experiences. Visual experiences may take place outside of a temporal span; the individual frame of the film provides a complete visual image of that moment in time. On the other hand, one sound sample could not possibly provide a perception of the totality of the sound for two reasons. The first is that it is simply far too short; at the most common sampling rate (44.1 kHz), one sample would last a mere 0.00002 seconds. This is well below the limit of what any human ear could possibly perceive. Second, as already stated, sounds exist in a temporal span, and by attempting to remove this constraint, a sound is effectively destroyed. Chion refers to this temporal span as the temporal window of mental totalization, “the duration, which varies depending on the nature of the sound, that delimits what we can possibly apprehend as a sound from beginning to end as a global form.”23
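To spell out the arithmetic behind that figure: a sampling rate of 44.1 kHz means 44,100 samples per second, so a single sample occupies 1 ÷ 44,100 ≈ 0.0000227 seconds, roughly 23 millionths of a second (the 0.00002 figure above is this value rounded).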

Furthermore, this temporal property of sound is one that is placed upon the listener and is fundamentally inescapable. Chion notes that “Contrary to written communication, which we can read at whatever speed suits us…sonic communication, particularly verbal and musical, imposes its duration on us. For centuries this fact has had as a consequence that the ear is often offered a second chance, if one wants the message missed the first time to have an opportunity to get across.”24 Indeed, sonic communications exist in a temporal space that presents difficulties which are not experienced in written or notated methods of communication. Therefore, it is necessary to understand the ways in which we perceive and understand a communicative medium that exists only in time.

23 Chion, Sound, 33.
24 Chion, 35.

Phenomenology

Since a sound needs a listener, it is important to understand the ways and contexts in which the listener experiences sound. Phenomenological analytical techniques provide ways of thinking about and theorizing this experience. Though music theory often concerns itself with the empirical or the “factual,” Thomas Clifton argues that “There is no reason why music theory cannot feel free to deal with meanings which are significant to one’s consciousness of music, to the way one relates to, and in fact, recognizes music. Music theory need not feel that it is being unscientific by returning the experiencing person to the center stage.”25 Indeed, since music and sound both require an interpreter to be experienced, it may even make the most sense to return at least a portion of musical theoretical discourse to the subjective experiences of the listener.

Lawrence Ferrara similarly notes,

Underlying musical analysis is a fundamental yet obscured premise. This is the implicit belief that the knowledge that is acquired as a result of analytical methods is and ought to be objective. The ‘ought to be’ half of that belief is rooted in generations of scientific methodology in which the a priori separation between subject and object was a tacit axiom. The method utilized by scientists (and by musical analysts) is tacitly thought to cleanse the experiment (or analysis) of the confounding variables that a too involved subject might cause. That knowledge is objective is of course a myth, whether it refers to music, the other arts, or the sciences.26

25 Thomas Clifton, Music as Heard: A Study in Applied Phenomenology (New Haven: Yale University Press, 1983), 37.
26 Lawrence Ferrara, “Phenomenology as a Tool for Musical Analysis,” The Musical Quarterly 70, no. 3 (1984): 355.

Thus, we may be appropriately and justifiably wary of venturing too far toward the subjective when analyzing, but it is necessary to understand at the same time that subjective experience is necessarily inherent in the analytical process. The notion that an analysis can be objectively “true” is irreconcilable with this fact; “the use of traditional methods of analysis in applied theory does not objectify the conclusions drawn by the analyst. Value assumptions and personal decisions are embedded (and obscured) in the constitution and use of the methods employed.”27

In other words, we bring our own individual judgments and values into the analysis process; therefore, any analysis will always have a certain amount of subjectivity built into it.

One can even make the argument that the structure of a particular work is in itself subjective and arbitrary, manifested through experience. Clifton states, “the experience of order says nothing about whether order is there in fact. Order is constituted by the experiencing person, who is just as likely to experience it in a collection of natural sounds, as in improvised music or a finely wrought fugue by J.S. Bach.”28 Thus, the process of experiencing is actually what creates the perception of structure and coherence. The concept of “music,” then, “is not a fact or a thing in the world, but a meaning constituted by human beings.”29

What should a good phenomenological analysis look like if structure is to be created through subjective experience? Clifton argues that a good phenomenological description “concentrates not on facts, but upon essences, and attempts to uncover what there is about an object and its experience which is essential (or necessary) if the object or the experience is to be recognized at all.”30 These “objects” that Clifton references are the phenomena at the center of a phenomenological investigation. A phenomenon might generally be called a “thing,” something of which we are conscious; more formally, Clifton defines it as “a nonempirical, intuitively grasped essence whose meaning is (1) validated by a consciousness absorbed in the experience of living-through that meaning, and (2) independent of any particular actualization of it.”31

27 Ferrara, 356.
28 Clifton, Music as Heard, 5.
29 Clifton, 5.
30 Clifton, 9.

David Lewin’s 1986 article “Music Theory, Phenomenology, and Modes of Perception” attempts to formulate a phenomenological descriptive methodology. Particularly, Lewin criticized the works of non-phenomenologists for not adequately modeling the recursive aspects of experience and identity perception (especially as it relates to Husserl’s theories of phenomenology, discussed below).32 To address this issue, Lewin proposes the formula p = (EV, CXT, P-R-LIST, ST-LIST), which is evaluated as follows:

Here the musical perception p is defined as a formal list containing four arguments. The argument EV specifies a sonic event or family of events being "perceived." The argument CXT specifies a musical context in which the perception occurs. The argument P-R-LIST is a list of pairs (pi, ri); each pair specifies a perception pi and a relation ri which p bears to pi. The argument ST-LIST is a list of statements s1, . . ., sK made in some stipulated language L.33
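Read purely as a data structure, Lewin’s formal list is straightforward to model. The following sketch renders it in Python only to clarify its shape; the class and field names are my own, not Lewin’s, and the example perceptions are invented.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class Perception:
        """A musical perception p = (EV, CXT, P-R-LIST, ST-LIST), after Lewin."""
        event: str          # EV: the sonic event or family of events being perceived
        context: str        # CXT: the musical context in which the perception occurs
        # P-R-LIST: pairs (p_i, r_i), each relating this perception to a prior one
        relations: List[Tuple["Perception", str]] = field(default_factory=list)
        # ST-LIST: statements s_1, ..., s_K made in some stipulated language L
        statements: List[str] = field(default_factory=list)

    # The recursive aspect: a new perception refers back to earlier ones.
    p1 = Perception(event="a sustained drone enters", context="the opening of the piece")
    p2 = Perception(event="the drone gains upper partials", context="the passage that follows",
                    relations=[(p1, "continues and intensifies")],
                    statements=["the opening sonority is transformed rather than replaced"])

The relations list is what carries the recursion Lewin was after: each new perception is constituted partly out of perceptions already formed.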

While an investigation into the implementation and implications of Lewin’s phenomenological model could (and does) take up an entire volume,34 there are a few key notions that we can extract. The first is that context is extremely important when evaluating the experience of any certain phenomenon (or “event” in Lewin’s model). For example, the pitch C5 is typically impressively high when sung by a competent tenor, but the same note is hardly worth a mention when sung by a soprano. Of course, if the notes surrounding the soprano’s C5 are all far lower or higher, the musical context may now mark it for consciousness. Depending on how close or broad a musical context one considers, the perception of a sonic event or a series of sonic events is almost guaranteed to change.

31 Clifton, 38.
32 David Lewin, “Music Theory, Phenomenology, and Modes of Perception,” Music Perception 3, no. 4 (July 1986): 330.
33 Lewin, 335.
34 David Bard-Schwarz and Richard Cohn, eds., David Lewin’s Morgengruß: Text, Context, Commentary (New York: Oxford University Press, 2015).

Another key element of Lewin’s formula is the (pi,ri) pairs. These pairs define the way in which the perception being considered (p) is related to previous perceptions, the recursive aspect of phenomenology that Lewin had sought to incorporate. In other words, we experience and identify the nature of any given perception by relating it to previous perceptions. By analyzing, varying, and repeating the process of perception, we understand the essence of what a sonic object “is.” As Brian Kane argues, “As each new percept is connected to the one just past, and grasped as a whole, an object emerges which can be identified as the same through a variety of acts of consciousness. Carried along by the flow of experience we have only a series of indubitable qualities, but through the synthesis of these qualities, we are able to posit the identity of the object, as transcendent to perception.”35 Thus, identity is an emergent property of a given sonic object mediated through perception.

35 Brian Kane, “L’Objet Sonore Maintenant: Pierre Schaeffer, Sound Objects and the Phenomenological Reduction,” Organised Sound 12, no. 1 (April 2007): 16.
36 Kurt Koffka, Principles of Gestalt Psychology (London: Routledge & Kegan Paul Ltd., 1962).
37 Wolfgang Köhler, Introduction to Gestalt Psychology (New York: New American Library, Mentor Books, 1959).

James Tenney presents a number of factors which he argues lead to auditory cohesion and structural coherence of discrete musical or sonic units (what he refers to as clangs) throughout the experience of listening to sound. He argues that two primary factors are those of proximity and similarity, which are based in the theories of gestalt psychology of Koffka36 and Köhler.37 The factor of proximity states that sound elements that are simultaneous or contiguous will tend to form clangs while those that are more temporally separated will be segregated. Similarly, the factor of similarity proposes that sound elements that are similar in respect to some parameter will form clangs, and those that are dissimilar will produce segregations.38 Though this may seem obvious in the abstract, these factors have important implications for the ways in which electronic music might be segregated. Whereas traditionally notated Western music has identifiable musical units (notes, chords, measures, etc.), electronic music does not share these elements – one must rely on the experience of listening to identify these units.
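The temporal side of the proximity factor is simple enough to state as a rule of thumb in code. The sketch below is only an illustrative reduction: it groups event onsets into clangs wherever successive events are close enough in time. The threshold and the function name are invented for the example; for Tenney, grouping is relative to musical context rather than to any fixed value.

    # An illustrative reduction of Tenney's proximity factor: contiguous events
    # cohere into a clang; a sufficiently large temporal gap begins a new one.
    def group_by_proximity(onsets, max_gap=0.5):
        """onsets: sorted event onset times in seconds.
        Returns a list of clangs, each a list of onset times."""
        clangs, current, previous = [], [], None
        for t in onsets:
            if previous is not None and t - previous > max_gap:
                clangs.append(current)  # the gap is too wide: close the current clang
                current = []
            current.append(t)
            previous = t
        if current:
            clangs.append(current)
        return clangs

    print(group_by_proximity([0.0, 0.2, 0.4, 1.5, 1.6, 3.0]))
    # -> [[0.0, 0.2, 0.4], [1.5, 1.6], [3.0]]

The similarity factor could be treated analogously, grouping by distance in some parameter’s values rather than by distance in time.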

Tenney also identifies four secondary factors, those being intensity, repetition, the objective set, and the subjective set. Intensity relates to the “value” of a given parameter, forming clangs where more intense elements are perceived as being the focal and starting point.

Repetition produces a subdivision of a whole series into units which are separated at the point “just before the repeated element.” The objective and subjective sets are expectations that arise in the experience of listening to musical events within a piece and external to the piece, respectively.39 While not all of these factors will necessarily be involved in the process of auditory cohesion, “One or more of these factors will be decisive in the delineation of the boundaries of any clang or sequence, and the composer – whether he does so consciously or not – must inevitably bring these factors into play in the organization of his sound-materials.”40

While the analyses presented throughout this document are not strictly phenomenological in the same way that Lewin’s or others’ are, many of these same factors and concepts that inform their analyses still play a fundamental role. Again, we must consistently be reminded that sound requires a listener and that the only way to describe or understand sound is necessarily through the subjective experience of that listener. By embracing the fact that analyses must be mediated through a subjective analyst, and by understanding the ways in which this process might occur, we may arrive at an analytical conclusion that is not a statement of “facts” but rather a dynamic, critical record of musical experience.

38 James Tenney, Meta + Hodos: A Phenomenology of 20th-Century Musical Materials and an Approach to the Study of Form; and META Meta + Hodos, 2nd ed. [1992 printing] (Hanover, NH: Frog Peak Music, 1992), 29–32.
39 Tenney, 41–44.
40 Tenney, 53.

New Forms and Temporalities

The vast majority of electronic music, especially that which might typically fall under the category of “art” music or “sound art,” often eschews traditional idioms of syntax and form in music. Even if the composer may have intended a normative syntax, the resulting composition may not convey this syntax to the listener because of the nature and unfamiliarity of electronic music. Thus, an effective analytical methodology for electronic music should take into account those formal and temporal procedures that break with the traditions of the Western art music canon, and it should allow for greater flexibility when undertaking an analysis.

Moment Form

Writing in 1978, Jonathan Kramer observed that “continuity is no longer part of musical syntax, but rather it is an optional procedure. It must be created or denied anew in each piece, and thus it is the material and not the language of the music.”41 Christopher Hasty notes a similar phenomenon: “With the disappearance of the rhythmic continuity of pulse, measure, and periodic phrase structure, and the abandonment of the organizing force of a single tonal center, twentieth-century music has raised fundamental issues of temporality.”42 Indeed, this seems to ring true to most listeners with even a slight level of familiarity with music of the 20th and 21st centuries.

41 Jonathan D. Kramer, “Moment Form in Twentieth Century Music,” The Musical Quarterly 64, no. 2 (1978): 179.
42 Christopher F. Hasty, “On the Problem of Succession and Continuity in Twentieth-Century Music,” Music Theory Spectrum 8 (April 1986): 58.

If, then, continuity and teleology are no longer a part of musical form and syntax, an important issue to address is how we might understand form in the absence of these traditional elements of musical structure.

One potential response to this problem is the concept of “moment form.” This idea, first articulated by Karlheinz Stockhausen in reference to the electronic piece Kontakte (1959-60),43 allows for each individual moment, each “now,” to be regarded as something individual and not as a consequence of something previous or a preview of what is to come. In other words, Stockhausen’s moment form allows for the possibility of a “moment” to be removed from the structures of teleology and temporality.44 Peter Manning calls this a “self-regulated freedom of material within the overall ordered structure. Sounds thus could evolve as if part of a separate existence, evaluated for the instantaneous value…”45 Thus, moments, “self-contained entities, capable of standing on their own yet in some sense belonging to the context of the composition,”46 can theoretically appear in any order without affecting the overall form of the work; indeed, this arbitrary order is the form of the work. That is not to say that the moments of a work in this form must be completely free to be moved throughout the structure, but rather that “the order of moments must appear arbitrary for the work to conform to the spirit of moment form.”47 This is a key assertion, that moment form is itself the spirit of a form. In other words, moment form may be manifested as a perception by the listener, not necessarily an intention of the composer. If the composer intended a form that is teleological or syntactical in nature, but the listener perceives an arbitrary and discontinuous series of musical moments, which are capable of standing on their own yet belonging to the musical context, there is a strong argument that the work can be understood through the lens of moment form regardless.

43 Karlheinz Stockhausen, “Momentform: Neue Beziehungen Zwischen Aufführungsdauer, Werkdauer Und Moment,” in Texte Zur Musik, vol. 1 (Cologne: DuMont Schauberg, 1963), 189–210.
44 Kramer, “Moment Form in Twentieth Century Music,” 179.
45 Peter Manning, Electronic and Computer Music (New York: Oxford University Press, 2013), 66.
46 Kramer, “Moment Form in Twentieth Century Music,” 181.
47 Kramer, 181.

It should also be noted that repetition does not necessarily imply a piece is not in moment form. For instance, Stockhausen’s own Klavierstück XI, a work in moment form, actually requires repetition in order for the work to end. Although this may initially seem like a contradiction, as a listener may understand one moment of a work referencing another, thus creating a temporal-referential structure, Kramer argues that “there should be no reason why a previous moment cannot return, provided such a return is not prepared by a structural upbeat…For, if no moment ever returned, the requirement of constant newness would in itself imply a kind of progression, because the listener could predict that the next moment would always differ from all previous moments.”48 In this scenario, the perception of progression, even if that progression is to constant newness, makes the perception of moment form impossible.

(The issue of repetition in modern and electronic music is discussed below.)

The concept of moment form also raises questions about the larger temporal span in which a moment form piece exists. Consider, for example, a Classical dance movement. Most listeners would agree that the work has a sort of beginning, middle, and end, and that the piece “exists” for the duration of the temporal span that it takes to get from beginning to end.

However, Kramer argues that “Since moment forms verticalize time, render every moment a Now, avoid functional implications between moments, and avoid climaxes, they are not beginning-middle-end forms. Although the piece must start for simple practical reasons, it may not begin; it must stop, but it may not end.”49

48 Kramer, 181.

It is probably no coincidence that Stockhausen references moment form in regard to Kontakte, a work for tape (also adapted for tape, piano, and percussion), as the technological revolution in the world of recorded audio that was taking place at the time afforded much greater possibilities to the composer of electronic music. As Kramer notes:

Extreme discontinuities became readily available with the advent of the tape recorder. A simple splice can transport the listener instantaneously from one sound world to another. Discontinuity is heightened by the unpredictability of precisely when a splice might occur or into what new world it might send the listener. Not all tape music, of course, avails itself of the potency of extreme discontinuity, but the possibility is there to be used or not used.50

Thus, one might argue that the very nature of the instruments being used to record, process, and organize sounds affords opportunities that composers working with traditionally notated music would not have had access to. This is not to say that moment form cannot exist in traditionally notated or acoustic music, as it certainly does, but simply that the act of recording and splicing tape lends itself to the development and implementation of freer forms of musical syntax and structure.

49 Kramer, 180.
50 Kramer, 192.
51 Wallace Berry, Form in Music: An Examination of Traditional Techniques of Musical Form and Their Applications in Historical and Contemporary Styles (Prentice-Hall, 1986), 436.

Not all theorists agree with the possibility of free or moment forms, however. Wallace Berry, writing eight years after Kramer, argues that “no form is really free, since a plan which gives order and coherence to a musical work must incorporate many of those concepts and principles which are at the roots of all of the traditional forms of music.”51 Of course, Berry does not define “order” and “coherence,” so it is difficult to ascertain exactly what would be required of a form to attain either of these in the first place. More than this, though, Berry’s arguments seem to be coming from a place of aesthetic judgment and valuation rather than a music theoretical position. For example, he later says that “little if anything is more vital in musical form than the controlled maintenance, and effective change, subsidence, and direction of motion. Failure to move with conviction and direction is one of the most common and crippling defects of ineffective music.”52 Again, Berry never explains what he means by “ineffective” music, but the reader can safely assume that non-teleological forms are an impossibility in his view, or at the very least are aesthetically weak forms of musical structure. He goes on to say that

Without order, the musical material, however sound and vigorous, may be reduced through its aimless diffusion to an impotent stammer whose impression dissolves as it is issued, lacking the exercise of whatever potential may exist in it for assimilable unity, and renouncing all possibility of intellectual appeal.53

Berry seems to miss the key point that was made above, that form in music might be best understood as a construct perceived by the listener and not as an intention of the composer. In other words, we may argue that form is not so much something that music “has” or “is,” but rather something that the listener perceives. Berry clearly disagrees with this proposition, stating that “If order is valued, we can dismiss the aesthetic philosophy that would leave the responsibility of formulation to the listener (or spectator); it is clear that order in art is not so glibly achieved.”54 Hasty, however, disagrees with Berry’s proposition that the listener is incapable of creating structure or relationships.

The relation of events may be more or less comprehensible depending on our experience, the level of our attention, the type and degree of organization we are presented with, or, more accurately, the interaction of all these factors. But, in principle, the possibility for making connections is always there, whether we are listening to traffic or for the hundredth time to a Mozart minuet as a new experience.55

52 Berry, 447.
53 Berry, 449.
54 Berry, 449.

Nevertheless, moment form clearly provides one avenue through which works of electronic music can be understood. We should be careful, however, not to reduce all music with non-standard form or temporality to moment form. If the listener is able to hear teleology or structural relations between moments, perhaps another musical structure is actually at play within the work. As Hasty states, “The assertion that in new music events are necessarily disconnected and that this discontinuity is so absolute as to negate temporal succession is…unfounded.”56 So while it is possible to understand the discrete events in a piece of electronic music as inherently disconnected from one another, we should be careful not to assume that this will be the case.

Repetition

It stands to reason that if formal and syntactical structures in the world of electronic music function and behave differently than in traditional Western art music, other common musical elements may do so as well. Such is the case with the use of repetition. Typically, when one thinks of the function of repetition in more traditional art music, one might imagine a repeated phrase, a cadence being reiterated, or the exposition of a sonata form being played twice, to name a few instances. These types of repetition primarily serve a rhetorical function, whether reiterating a structural close, presenting musical material again, or the like.

In contrast, the technological advantages and conditions provided by the techniques of electronic music allow repetition to be used as a means of rhetoric but also as a way to express identity and function of musical material. Consider the range of sonic and temporal possibilities inherent in the act of recording and splicing tape, let alone the opportunities afforded to a modern electronic composer with access to the production techniques of the modern studio. In this musical world, syntactical or grammatical units are rarely, if ever, defined for the listener; there is no clear expectation about what a discrete unit of musical meaning will be in an art form that generally has no overarching or governing musical aesthetic. It is in this space that repetition can, but does not have to, play a key role in segmenting and parsing a sonic stream. Brian Kane argues that “As each new percept is connected to the one just past, and grasped as a whole, an object emerges which can be identified as the same through a variety of acts of consciousness. Carried along by the flow of experience we have only a series of indubitable qualities, but through the synthesis of these qualities, we are able to posit the identity of the object, as transcendent to perception.”57 In other words, Kane is arguing that through repetition and variation, the elements that form the musical material of a work (or the sound materials of everyday experience) manifest themselves to the listener.

55 Hasty, “On the Problem of Succession and Continuity in Twentieth-Century Music,” 61.
56 Hasty, 72.

To illustrate this, consider the example of Husserl’s table, which describes the way that the identity of an object is understood through a series of perceptions.

Constantly seeing this table and meanwhile walking around it, changing my position in space in whatever way, I have continually the consciousness of this one identical table as factually existing “in person” and remaining quite unchanged. The table-perception, however, is a continually changing one; it is a continuity of changing perceptions. I close my eyes. My other senses have no relation to the table. Now I have no perception of it. I open my eyes; and I have the perception again. The perception? Let us be more precise. Returning, it is not, under any circumstances, individually the same. Only the table is the same, intended to as the same in the synthetical consciousness which connects the new perception with the memory…The perception itself, however, is what it is in the continuous flux of consciousness and is itself a continuous flux: continually the perceptual Now changes into the enduring consciousness of the Just-Past and simultaneously a Now lights up, etc. The perceived thing in general, and all its parts, aspects, and phases…are necessarily transcendent to the perception.”58

57 Kane, “L’Objet Sonore Maintenant,” 16.

In this example, the viewer is constantly viewing a table from new angles and positions, examining its relevant characteristics and forming a perception of the table in the mind.

But what is being created in the mind is not the physical table, but the conception of the physical table manifested through the perception of repetition and variation. By perceiving the table over and over, from different angles, the viewer is able to comprehend what the table “is.” In the same way, by perceiving sonic events multiple times in different positions and contexts, the listener is constantly reforming his or her mental conceptions of these events. As Judy Lochhead states, “by replicating some features of a prior unit or event, the repetition makes more salient those features of both occurrences. Thus, repetition not only acts as a temporal marker through its reference to a prior unit or event but also retrospectively shapes the earlier occurrence as well as itself.”59 Considering the ubiquity and idiomatic nature of repetition in many forms of electronic music, the role of repetition in defining musical units cannot be overstated.

Repetition and variation should therefore have a place in any successful methodology for the analysis of electronic music, especially at analytical levels close to the musical surface. We will return to the image of Husserl’s table and Husserlian phenomenology in the following chapter.

58 Edmund Husserl, Ideas Pertaining to a Pure Phenomenology and to a Phenomenological Philosophy: First Book: General Introduction to a Pure Phenomenology (The Hague, Netherlands: M. Nijhoff, 1982), 41.
59 Judy Lochhead, “Joan Tower’s Wings and Breakfast Rhythms I and II: Some Thoughts on Form and Repetition,” Perspectives of New Music 30, no. 1 (1992): 136.

New Temporalities

If sound can only exist within the framework of a temporal span, we must understand the nature and behavior of the temporal spaces that sound in electronic music inhabits. Traditional Western art music, especially tonal music, is unquestionably linear in its temporality. Indeed, “in music, the quintessential expression of linearity is the tonal system.”60 It follows, then, that the analytical methodologies that musicians and music theorists have developed throughout musical history likewise rest on an assumption of a linear or goal-oriented temporality. Listeners of Western art music often experience music under the premise that one musical event somehow leads to the next, or that a given musical event is a consequence of the one that preceded it. In many ways, this type of thinking may be a consequence of learned musical syntaxes and idioms such as “the dominant goes to the tonic,” which implies that these two events (dominant and tonic) are temporally adjacent and can be understood as having a causal relationship. In the tonal era, this goal-directed motion between musical elements was a foundational paradigm; the amount of time needed to traverse the temporal space between these elements might differ, but the motion from one to the other was nevertheless understood.61

Electronic music (along with other musics of the 20th and 21st centuries) does not necessarily follow this paradigm, due in part to the technological advancements that make it possible. As Kramer notes, “The advent of the tape recorder in particular has had a profound impact on musical time. Tape can be spliced; thus, events recorded at different times can be made adjacent. A splice may produce a continuity that never existed prior to recording, but the opposite effect has interested composers more: the musical result of splicing can be overpowering discontinuity.”62 Thus, the techniques of electronic music production allow the composer to explore and alter the temporal space of a work not simply in terms of duration (stretching and shrinking the temporal space of individual sections or units), but by fundamentally dissolving the concept of teleological musical form. This is not to imply that all electronic music necessarily eschews traditional goal-oriented temporalities, but simply that the opportunity is there should composers or listeners choose to engage with it.

60 Jonathan D. Kramer, “New Temporalities in Music,” Critical Inquiry 7, no. 3 (1981): 539.
61 Kramer, 555.

Various authors have suggested new and alternative temporalities that emerge through an examination of contemporary music. Jonathan Kramer offers several possible temporalities particularly pertinent to the experience of listening to electronic music, especially acousmatic electronic music. Nondirected linearity is one such temporality, which describes music that “is in constant motion created by a sense of continuity and progression, but the goals of the motion are not unequivocally predictable.”63 This temporal mode retains the linearity of traditional Western art music; we might have the experience that there is a progression from one musical event to the next. However, unlike tonal music, nondirected linear music does not create the sensation that one event leads to another or that any specific event is the result of a previous event. Events follow one another, but we may have no sense of where we are going.

Above we have already examined the idea of moment form. Related to moment form is the concept of moment time. A work in moment time does not feel as though it begins (in terms of its formal design); rather, it feels as if it simply starts. We may feel as if the work has been going on for a long time without us, and we just happen to be listening to it at this moment in time.64 A work that exists in moment time may be a series of unrelated, nondirected moments (as described above), or perhaps the entire work itself may be one standalone moment. By definition, these works must start and stop, but that does not mean they have a clearly defined beginning or end in the formal sense. This is especially true of “process” music, both electronic and acoustic. For example, many of Steve Reich’s phasing pieces, like Piano Phase and Clapping Music, must necessarily start and stop for practical reasons in the concert setting or on recording. However, their cyclic nature defies goal-oriented motion; one could start the work at any point in the piece and the process would unfold the same, just from a different point. Thus, they are self-contained entities, moments that are complete unto themselves but lead nowhere.

62 Kramer, 544.
63 Kramer, 542.
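The closed, goalless character of such a process can be made concrete in a short sketch. The Python code below is offered purely as an illustration: a generic rotation process with an invented four-element pattern stands in for the phasing procedure; it is not a transcription of any actual score.

# A schematic phasing process: voice 1 repeats a pattern while voice 2
# shifts by one unit per cycle. The state space is closed - after
# len(pattern) shifts the voices return to unison - so the process has
# no goal, only a return, and it may be entered at any offset.
pattern = ["a", "b", "c", "d"]  # an invented four-attack pattern

def phase_states(pattern):
    """Return every alignment of the two voices over one full cycle."""
    n = len(pattern)
    return [list(zip(pattern, pattern[k:] + pattern[:k])) for k in range(n)]

for k, state in enumerate(phase_states(pattern)):
    print(k, state)
# Shift 0 and shift len(pattern) coincide: beginning anywhere yields the
# same cycle of states, merely rotated.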

Kramer’s conception of “vertical” time is also pertinent to the aesthetics of electronic music. He defines it, in a somewhat esoteric fashion, as “a single present stretched out into an enormous duration, a potentially infinite ‘now’ that nonetheless feels like an instant.”65 The form of these works revolves around the relationship between layers of the sound world rather than a succession of events. In other words, a vertical composition is a piece that simply is: “we can listen to it or ignore it; if we hear only part of the performance we have still heard the whole piece; and we can concentrate on details or on the whole.”66 Again, many electronic works clearly exhibit temporalities in the spirit of vertical time, especially certain types of works like ambient music.

Michel Chion similarly notes five types of “temporal tonalities,” which he uses to describe the way in which time unfolds within a sound sequence. Traversing time, as its name implies, applies to those sequences in which the listener (or viewer) feels as though they are moving through the space of a temporal span, getting from point A to point B. Chion notes that this temporal tonality is particularly idiomatic to musique concrète as well as to film. Frieze time refers to a relatively homogeneous distribution of sound over a span of time, especially of active and dynamic sounds. In this temporal tonality, the listener is a type of “objective observer.”

64 Kramer, 547.
65 Kramer, 549.
66 Kramer, 551.

Shimmering time is much like frieze time in its distribution of sonic material but contrasts with it in that shimmering time features “rather pleasant” sounds. Furthermore, whereas frieze time places the listener in an objective position, a listener in shimmering time is “captured and personally implicated – the relationship is one of seduction.” In many ways, both frieze and shimmering time seem related to Kramer’s vertical time, but Chion further divides the ways in which the listener is implicated. And-so-forth time refers to sound sequences featuring concise, perhaps underdeveloped temporalities or ideas, often punctuated by pauses and silences. These silences give the listener the opportunity to imagine the unrealized possibilities of each brief segment. Finally, hourglass time refers to “hearing as a drop-by-drop flow, as a process of disaggregation and evolution toward entropy.” This time often features surprising twists and turns as well as temporal punctuations meant to evoke the counting of time.67

Regardless of what temporal behavior the listener imagines, it is clear that sound has the ability to affect the function and perception of time, and likewise time has the ability to affect how we understand sound. These two elements have an unusually codependent relationship in that sound cannot exist without time, and thus an accurate understanding of the temporal processes underlying our perception of sound must be a part of the methodological framework of an effective analysis.

67 Chion, Sound, 40–41.

Timbre in Electronic Music

Although the goal of the present text is not to provide a methodology for the analysis of timbre, nor deeply critical discussions of sound color, it is scarcely possible to examine the field of electronic music without addressing the issue. As Denis Smalley argues, “Electroacoustic music has made possible the expansion and development of certain generic timbres which have become its idiomatic property.”68 The word “idiomatic” is key here; the electronic musical aesthetic rests in many ways on a foundation of timbre as a musical and sonic material in ways that other genres do not. Although it is by no means unheard of for acoustic works to be timbre-oriented (Schoenberg’s conception of Klangfarbenmelodie, for instance), the use of timbre as a driving musical force is quite the norm in electronic music. Stephen McAdams even suggests that timbre can have a functional role in a musical context, often aiding the listener in cognitive and perceptual processes such as identifying and formulating auditory streams and perceiving changes in musical tension.69

It is thus necessary to briefly examine existing work on the conception and analysis of the timbral characteristics of music and sounds. This is, however, no small feat. As Cogan and Escot explain, “Tone color is perhaps the most paradoxical of music’s parameters. The paradox lies in the contrast between its direct communicative power and the historical inability to grasp it critically or analytically.”70 This difficulty in understanding timbre may even stem from disagreements about what timbre even is. A “textbook” definition, as explained by Carol Krumhansl, might be something like “timbre is the way in which musical sounds differ once they have been equated for pitch, loudness, and duration.”71 While this definition does have surface appeal, Krumhansl notes three critical problems with it. The first is that this type of definition defines only what timbre is not, as opposed to explaining what it is. In essence, it says only that timbre is not loudness, pitch, or duration, which allows every other sonic property to qualify. (For example, this would permit the spatialization of a sound within a stereo field to fall under the purview of “timbre.”) Second, Krumhansl argues that this definition includes an implicit assumption that timbre actually functions independently of the other parameters of a sound and that these parameters can be separated without affecting one another. Finally, she argues that the above definition of timbre is usually made with reference to instruments, implying that these references function as conceptual crutches to fill in the “gaps” left behind by the definition.72

68 Denis Smalley, “Defining Timbre — Refining Timbre,” Contemporary Music Review 10, no. 2 (January 1994): 44.
69 Stephen McAdams, “Contribution of Timbre to Musical Structure,” Computer Music Journal 23, no. 3 (Autumn 1999): 95–99.
70 Robert D. Cogan and Pozzi Escot, Sonic Design: The Nature of Sound and Music (Prentice-Hall, 1976), 327.

Another issue, besides understanding what timbre is in the first place, is that it cannot be quantified as easily as other parameters of sound. Pitch, for example, can be quantified using a single number that measures the cycles per second (Hz) of a sound wave. Thus, “there is no essential way in which one pitch differs from another except along the continuous dimensions of pitch frequency. In contrast, timbres may have multiple characteristics in terms of which they differ…timbres may be distinguished by properties specific to each, such as distinctive offset characteristics or unique patterns in the harmonic spectra.”73 Since timbre cannot be so easily quantified and compared, any system designed for the purpose of timbral analysis will necessarily be more complex than systems used for pitch.

71 Carol L. Krumhansl, “Why Is Musical Timbre So Hard to Understand?,” in Structure and Perception of Electroacoustic Sound and Music, ed. Sören Nielzén and Olle Olsson (Amsterdam: Elsevier Science Publishers, 1989), 44.
72 Krumhansl, 44.
73 Krumhansl, 49.
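Krumhansl’s contrast between the two parameters can be made concrete with a small signal-processing sketch. The Python code below is illustrative only: the test tone is synthetic, and the descriptors it computes (spectral centroid and bandwidth) are standard measures borrowed from audio analysis, not Krumhansl’s own formalization.

import numpy as np

sr = 44100                    # sample rate (Hz)
t = np.arange(sr) / sr        # one second of samples
# A synthetic 220 Hz tone with harmonics of geometrically decaying strength.
tone = sum((0.5 ** k) * np.sin(2 * np.pi * 220 * k * t) for k in range(1, 6))

spectrum = np.abs(np.fft.rfft(tone))
freqs = np.fft.rfftfreq(len(tone), 1 / sr)

# Pitch collapses to a single number on one continuum:
pitch_hz = freqs[np.argmax(spectrum)]

# Timbre does not: even a crude description needs several numbers at once,
# e.g. the spectral centroid ("brightness") and the bandwidth ("spread").
centroid = np.sum(freqs * spectrum) / np.sum(spectrum)
bandwidth = np.sqrt(np.sum((freqs - centroid) ** 2 * spectrum) / np.sum(spectrum))

print(f"pitch ~ {pitch_hz:.0f} Hz, centroid ~ {centroid:.0f} Hz, bandwidth ~ {bandwidth:.0f} Hz")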

In spite of these issues, numerous writers have proposed systems and techniques for analyzing musical timbre. One of the earlier attempts to describe tone color within what we might call the “modern” era of musical analysis was put forth by Cogan and Escot. These authors sought to describe tone color through an analysis and explanation of the physical and acoustic properties that underlie sound. Beyond that, however, they also argued that an effective “analysis of tone color requires an explanation of the choice and succession of the tone colors of a musical context. Such analysis must explain the principles that interrelate the diverse sounds of a given work. Put another way, the analysis cannot limit itself merely to the description of single sounds, no matter how technically sophisticated that description may be.”74 Although in the end one might argue that their resulting analytical comments are essentially descriptions of the sonic content of a sound or sequence of sounds, the contention that timbral analysis should not be limited to individual sounds is important. Rarely do we hear sound or music that consists of only one perceived “layer,” and thus it is important to consider the entire sounding context of a piece when attempting to describe its timbral qualities.

74 Cogan and Escot, Sonic Design, 328.

In 1984, Robert Cogan retained many of these principles in his landmark work New Images of Musical Sound. These “new images” were, of course, spectrograms, and Cogan was able to replace the handmade spectral graphs of the 1976 text with these far more accurate and discrete representations. Cogan argues that spectrograms fill two important functions. “First, they provide a notation for the content of synthesized music – a notation that clearly specifies, as no other does, the orientation, motion, duration, and spectral makeup of each element of the music.”75 One might argue, however, that this is not entirely true. Spectrograms are unable to account for “each element” of the music unless those elements are separated out of the musical context. Consider, for example, a bassoon and a trombone playing the same pitch. A spectrogram of this sound would superimpose the spectra of the two instruments; all of the relevant spectral information would be present, but the reader would have no way of segmenting it without knowing what was producing the sound.
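This segmentation problem can be demonstrated directly. The Python sketch below is a toy model – two synthetic “instruments,” distinguished only by their harmonic weightings, stand in for the bassoon and trombone – and it shows that the spectrogram of the mixture contains only the summed time-frequency energy of the two sources, with nothing to mark which source contributed which partials.

import numpy as np

sr = 44100
t = np.arange(sr) / sr
f0 = 233.0  # both "instruments" sound the same fundamental

# Two toy sources distinguished only by their harmonic weightings.
source_a = sum((0.6 ** k) * np.sin(2 * np.pi * f0 * k * t) for k in range(1, 8))
source_b = sum((0.9 ** k) * np.sin(2 * np.pi * f0 * k * t) for k in range(1, 8))
mixture = source_a + source_b

def spectrogram(x, frame=2048, hop=512):
    """Windowed |FFT| magnitudes per frame: the time-frequency energy a
    spectrogram displays, and nothing more."""
    win = np.hanning(frame)
    n_frames = 1 + (len(x) - frame) // hop
    frames = np.stack([x[i * hop : i * hop + frame] * win for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))

S = spectrogram(mixture)
# Every frame of S holds the superimposed spectra of both sources; recovering
# the individual instruments from S alone is an ill-posed source-separation
# problem - exactly the segmentation issue noted above.
print(S.shape)  # (number of frames, number of frequency bins)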

The second function of spectrograms that Cogan suggests is that “spectrum photos provide an analytic base, with data and evidence, for conclusions about the sonic character and structural function of the sonorities and features that they picture.”76 I would disagree that spectrograms provide “evidence” for function. On the contrary, the only thing that spectrograms are able to provide is information about frequency energy over a given length of time, and thus any potential analysis grounded in the spectrogram is only able to account for these two factors.

Wayne Slawson argues that “Even if it can be granted for the sake of argument that the graphs of these spectra do not distort perceptual reality significantly, it is hard to develop sound-color relation out of these representations of entirely acoustic phenomena.”77 When this issue is combined with the above issue of vertical segmentation, one may be skeptical about the ability of spectrograms to do what Cogan suggests. It seems that what the author achieves is not so much a description of the sonic characteristics or functions of a musical work as a description of the spectrograms themselves. (This is to say nothing of the technical limitations of the spectrogram, especially in 1984.) I will return to Cogan’s analysis of Milton Babbitt’s Ensembles for Synthesizer below.

75 Robert Cogan, New Images of Musical Sound (Cambridge: Contact International, 1998), 103.
76 Cogan, 103.
77 Wayne Slawson, Sound Color (Berkeley: University of California Press, 1985), 13.

Slawson, himself writing in the year following Cogan, proposed his own system of timbral analysis to address three questions: 1) “How can the color of a sound be held constant when its pitch, loudness, and other aspects of its timbre are changed?” 2) “How can one aspect or dimension of sound color be held constant as other dimensions of sound color are varied?” and 3) “What operations on sound color can be identified that will hold invariant certain relations among sound colors while varying other relations?”78 We can see from these three questions that Slawson is especially preoccupied with understanding how dimensions of sound color can be changed and manipulated. He identifies four dimensions of sound color that can be manipulated by filtering a sound through vowel formants: openness, acuteness, laxness, and smallness.79 Not only does Slawson define each of these and plot the vowel shapes on a contour mapping, but he actually allows for transpositional and inversional processes to be performed within these spaces.80
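A rough sense of what “transposition” and “inversion” might mean in such a space can be sketched computationally. The Python model below is a loose illustration, not Slawson’s actual system: following his general association of openness with the first vowel formant (F1) and acuteness with the second (F2), it treats a sound color as a point in log-formant space, where the two operations become ordinary geometry. The specific frequencies and the reference center are arbitrary assumptions made for the example.

import math

def color(f1_hz, f2_hz):
    """Model a sound color as a point (log2 F1, log2 F2) in formant space."""
    return (math.log2(f1_hz), math.log2(f2_hz))

def transpose(c, semitones):
    """Shift both formants by the same interval, by analogy with pitch transposition."""
    shift = semitones / 12.0
    return (c[0] + shift, c[1] + shift)

def invert(c, center):
    """Reflect a color about a reference color, by analogy with pitch inversion."""
    return (2 * center[0] - c[0], 2 * center[1] - c[1])

center = color(500, 1500)   # arbitrary reference color
ah = color(700, 1200)       # roughly an open back vowel
print(transpose(ah, 2))     # the "same" color shifted up a whole step
print(invert(ah, center))   # its reflection about the reference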

Slawson is clearly aware that the transformational sound-color operations that he has outlined are artificial in nature. He states:

Even if we had no biological predilection, we should not be overly concerned about introducing “unnatural” operations into music. In the history of European concert music, essentially all the developments in musical structure have been artificial. They have grown out of previous developments by a certain logic, but those logical developments were not inevitable. The varieties of musical phenomena in the cultures of the world argue strongly in favor of artifice as the driving force. Just as the notion of inverting a melody and, later, a pitch-class set grew artificially out of previous musical practice, the inversion of sound color can be said to grow artificially out of previous practice – the inversion of pitch. Even if we hold sound-color operations to be less than natural (in the sense, that is, of following from biological necessity) we must come to the same conclusion about most of the well-established operations upon which musical structure depends.81

78 Slawson, 20–21.
79 Slawson, 55.
80 Slawson, 69–79.
81 Slawson, 81.

Though one might grant his argument that many of the structural developments in European music grew from something artificial, I am skeptical of the implicit assumption throughout this passage that pitch and sound-color transformations are equally artificial. Though it is impossible to literally transpose and invert pitches (which are simply frequencies), human beings typically embody pitch and changes in pitch through spatialized metaphor.82 We understand the concept of “up” and “down” in regard to pitch, but we do not typically think of timbres as being spatially oriented. Thus, the suggestion that sound-color operations are just as artificial as pitch and pitch-class operations may not be entirely accurate.

Of course, even Slawson himself concedes that “No clear-cut illustrations of conscious manipulations of sound color reflecting the dimensions [of openness, acuteness, laxness, and smallness] have been found. Even if cases could be cited, we would have evidence simply that the composer of the passage in question conceived of the dimensions…”83 This is hardly surprising considering the admitted artificiality present within his system. Even though there are no observable, verifiable instances of these manipulations taking place, however, an important precedent proposed by Slawson is the creation and development of “analytic methods that preserve a measure of objectivity and can be replicated.”84 The goals of objectivity and repeatability will remain central to the formulation of my own methodology.

Denis Smalley’s conception of timbre is intertwined with the perception of the sound source: “a general, sonic physiognomy whose spectromorphological ensemble permits the attribution of an identity.”85 In other words, a sound’s timbre is that quality which identifies the sound as being “itself.” This could be a real-world sound (i.e., what makes a flute sound like a flute), but it might also be an identifiable sonic object created through purely artificial means. Smalley then identifies two types of musical discourse relevant to his conception of timbre as identity. The first type, transformational discourse, occurs when an identity is transformed while retaining vestiges of its root/source (what Smalley would call a strong extrinsic identity).86 One could easily imagine a situation in which some sound is being timbrally manipulated and yet the listener continues to identify the sound as still being itself. The second type of discourse is typological, wherein “identities are recognized as sharing timbral qualities but are not regarded as being descendants of the same imminent identity – they do not possess a common identity-base.”87 In other words, typological discourse involves sounds that the listener perceives as related through their timbral qualities but different in terms of their identity; these sounds are fundamentally different sounds to the listener.

82 Zohar Eitan and Roni Y. Granot, “How Music Moves: Musical Parameters and Listeners’ Images of Motion,” Music Perception: An Interdisciplinary Journal 23, no. 3 (2006): 221.
83 Slawson, Sound Color, 175.
84 Slawson, 189.
85 Smalley, “Defining Timbre — Refining Timbre,” 38.

More than those of other authors, Smalley’s definition of timbre is grounded in perception as opposed to acoustics or harmonic spectra. He believes that one consequence of this is that listeners often have difficulty understanding electronic music. He states that “In electroacoustic music where source-cause links are severed, access to any deeper, primal, tensile level is not mediated by source-cause texture. That is what makes such types of acousmatic music difficult for many to grasp. In a certain physical sense, there is nothing to grasp – source-cause texture has evaporated.”88 Because electronic music does not engage modes of perception that are linked with physical sound-making and source-causes, listeners may find it difficult to understand or appreciate any structure or syntax.

86 Smalley, 43.
87 Smalley, 44.
88 Smalley, 39.

Perhaps the most important passage from Smalley’s article concerns what a good system of timbral description should do:

The everyday language of qualitative description is accessible to everyone. It is closely allied to the “matter” of sound. Terms like bright/dull, compact/spread, hollow, dense, may be vague, and vague they are destined to remain since they are qualities, but they have the advantage of an immediate, comprehensible identity and are therefore not to be scoffed at. They are verbal signs that essential qualities have been recognized.89

In my own descriptions of timbre and timbral qualities in the chapters that follow, I will strive to maintain a descriptive language that is easily accessible. Because cultures generally share metaphors for describing the experience of listening to sound and music, I will avoid any attempt to derive technical descriptions of timbre from harmonic spectra and spectrograms and will instead rely on this kind of shared, qualitative description.

Previous Approaches to Electronic Music

As previously mentioned, we might think of the entire array of possible analytical methodologies as existing along a subjective-objective spectrum. Though there is certainly a great deal of area in the middle of this field, analyses that fall too far toward either end of the spectrum may prove unsatisfying in a number of ways. For instance, an analysis that is too subjective or too rooted in the phenomenological experience of music may be so personalized to the frame of the analyst that the reader is unable to replicate or internalize it. These types of analyses may resonate for the analyst, but it is often hard to engage in critical discourse about them or to compare multiple analyses to one another precisely because they are so individualized. Of course, if one ventures too far to the objective side, the resulting analysis may prove to be sterile in nature, or at worst it may simply be a description or transcription of musical/sonic events disguised as an analysis. Though it is certainly easy to compare data sets, spectrograms, etc., these types of analyses often stop short of making critical statements or engaging in meaningful musical discourse.

89 Smalley, 36.

Before engaging in analysis of our own, we should first consider some of the work previously done in the field of electronic music analysis. We will examine these methodologies in terms of how they lie on the subjective-objective spectrum, presenting examples from both extremes as well as examples that fall closer to the middle of the spectrum. This is, of course, not a comprehensive overview of all of the existing work in this field (a task which could itself fill volumes), but rather a targeted selection of representative methodologies.

Subject-Oriented Analyses

Subject-oriented analyses are certainly more prevalent in the classroom than they are anywhere else. Anyone who has taken a class or seminar on electronic music has, at some point, probably done an analysis that could fall into this category. These analyses can take a number of shapes, but two of the most common types are “draw what you hear” and “describe what you hear.” The immediate positive and negative attributes of these types of analysis are obvious: on one hand, they require little or no expertise with the genre, and they are approachable ways to engage with electronic music. On the other, it is extremely difficult to compare these analyses or engage in critical dialogue about them because they are so individual and subjective. Furthermore, these types of analyses often toe the line between the analytical and the descriptive. Though most of the analytical work published in the field of electronic music is object-oriented, we will briefly examine some examples from the subjective side.

Lawrence Ferrara’s 1984 analysis of Edgard Varèse’s Poème électronique represents an extreme case of a subject-oriented analytical methodology. Broadly, his methodology requires five distinct steps that occur over a number of listenings. The first step is to engage in multiple “open” listenings of a piece. The purpose of these listenings is to orient the analyst to the work; he refers to these listenings as “open” because the analyst may respond “to any level of meaning in the work.”90 Though it may seem like an obvious first step, I would argue that any good analysis should begin from the standpoint of a listener’s experience and reaction to the piece of music being studied, whether or not that analysis is intended to be subject-oriented. Each of these open listenings is followed by a reflective description, which reports “in narrative form what was heard and the analyst’s mode of orientation toward the work.”91 The next three steps in the methodology are to listen for syntactical, semantic, and ontological meanings in the work. In other words, Ferrara encourages the listener to attempt to listen for structural and extra-musical meanings in the sound materials themselves. Finally, the process ends with another series of “open” listenings and descriptions once the syntactical, semantic, and ontological listenings have been completed.92

It seems, however, that what Ferrara is interested in creating is not an analysis of musical structure or syntax; rather, he is interested in bringing to life the “voice” or the “story” of the composer. He argues that “the sounds of some musical works enable the ontological world of a composer’s lived ‘time’ to be grounded in those sounds. The musical work thus makes a ‘new space’ in sound for the composer’s knowledge and experience of his or her world…The work stands as a living dynamic within the context of a clear and perhaps at times compelling gestalt.”93 While this description of the work may be compelling in terms of its extra-musical meaning, it does little to illuminate structural elements that may be present in the work, such as formal design or syntactical units.

90 Ferrara, “Phenomenology as a Tool for Musical Analysis,” 359.
91 Ferrara, 359.
92 Ferrara, 359–61.
93 Ferrara, 361.

The potential shortcomings of a methodology that is so subject-oriented are made apparent in the analysis of Poème électronique. Ferrara’s methodology leads him to argue that “Poème électronique crystallizes what it means to be in the modern era. In our actual lives, technology (televisions, automobiles, or electric can openers) surrounds our existence. In this piece, the sounds of technology penetrate, permeate, and surround all other sounds. Human existence, presented by the men’s voices and the woman soloist, is marked in this work by disorientation, alienation, and fear…”94 Ferrara’s comments continue from there to describe the sonic content of the work and how it relates to his extra-musical narrative. Thus, his analysis is really more a description of his personal understanding and hearing of the work, and while I personally find it to be a compelling and interesting hearing, his comments fall short of critical analysis.

Furthermore, one might argue that this analysis could never be repeated by anyone else listening to the work (and perhaps not even by Ferrara himself). This is not to say that music analysis should be held to the same standards of repeatability that scientific experiments are, but I would argue that by grounding his methodology in a process so subjective, and by providing no tools for analysis, the result is a sort of analytical “wild west” – anything goes, all results are equally valid, and analytical claims can simply be backed up by arguing “that’s how I hear it.” Certainly, the listening and perceiving process should be valued in analysis, but the aim of this project is toward a methodology that integrates the subjective part of musical analysis with an equally clear and methodical objective part. “That’s how I hear it” may be a reliable criterion, but it would also be advantageous to be able to compare hearings through a shared methodology.

94 Ferrara, 369–70.

In many ways, Judy Lochhead’s analysis of Kaija Saariaho’s Lonh reflects the methodology put forth by Ferrara. Although she does not follow his exact process, nor does she provide written reflections of her listenings, Lochhead is concerned overall with the syntactic and semantic usage of musical materials in much the same way that Ferrara is. Just as he was concerned with articulating the composer’s voice and intent, Lochhead states that

The analytical articulation of Lonh’s design offered here reflects a wide variety of factors: the trace of compositional intent as inscribed in the score, critical commentary…the composer’s comments about Lonh and other works…knowledge about relevant music technologies used in composition…knowledge of medieval musical practices, and my own analytical background, preferences, and goals. And while several of these factors may not be explicit in the analysis, all of them figure in the analytical process itself.95

However, Lochhead’s analysis is unlike Ferrara’s in that she does attempt to engage with concepts of musical structure and design in a deeper and more critical way. This structure is manifested through the subjective interpretation of the quality of “radiance,” which she argues emerges from the interaction of three simultaneous musical planes: “1) moments of sonic luminance, a quality arising from pitch range, spectral attributes, and culturally derived timbral associations, 2) moments of formal “flickering,” an emergent quality arising from musical processes of association and uniqueness, and 3) moments of intensity arising from the culmination of transformational process.”96 Ultimately, Lochhead argues that, though salient formal instances occur in the work, “these moments of formal salience are not the goal of the processes of interaction but rather they materialize as enhanced moments of radiance…The technê of Lonh then is a revealing of this lived experience of radiance.”97

95 Judy Lochhead, Reconceiving Structure in Contemporary Music: New Tools in Music Theory and Analysis (New York: Routledge, 2016), 106.
96 Lochhead, 111.
97 Lochhead, 120–21.

While I find Lochhead’s analysis of Saariaho’s work interesting and compelling, it shares the potential pitfall of any subject-oriented analysis in that it is difficult to compare its results with another analysis of the same work. One can certainly argue about methodological choices within Lochhead’s analytical process, but it is extremely unlikely that another analyst examining this work would come to any conclusion resembling Lochhead’s.

Thus, these two analysts, completed analyses in hand, might find it difficult to engage in meaningful discourse with one another because of the high level of subjectivity that is built into their work.

Object-Oriented Analyses and Transcription

Many analysts have approached the task of analyzing electronic music by removing the subjective and experiential variables inherent in the process of perceiving sound and replacing them with raw data. This data can take the form of transcriptions, detailed notes, spectrograms, etc., but the overarching theme seems to be the same: if you have enough data, and that data is thoroughly detailed and accurate, you can create an effective analysis. This line of thought closely resembles the concept of “big data” in statistics and prediction – the idea that simply having enough quality raw data makes the need for a theory obsolete. However, as statistician Nate Silver points out in his fascinating 2015 book, “The numbers have no way of speaking for themselves. We speak for them. We imbue them with meaning…It is when we deny our role in the process that the odds of failure rise. Before we demand more of our data, we need to demand more of ourselves.”98 Likewise, having as much “data” as possible in musical analysis can be incredibly helpful, but we should not be lulled into a false sense of security that somehow this trove of data is the analysis, when in fact the data is a path toward an analysis. The data is unable to help if the analyst is removed too far from the analysis.

98 Nate Silver, The Signal and the Noise: Why So Many Predictions Fail – but Some Don’t (New York, NY: Penguin Books, 2015), 9.

Brian Fennelly was one of the early proponents of a notation system to allow for the quantification of every sonic element of electronic music. This notation system took the form of a “formula whose terms separately represent the components of sound in rank of their perceptual importance.”99 This formula takes two forms. The first is a “basic” version, XsYcE, and the second is an expanded version of this basic formula, XS^t_rYC^i_dE, in which the superscripts and subscripts attach further qualifications to each term. Each of these formulas is structured around timbre (X), envelope (Y), and “enhancements” (E), which Fennelly describes as “further defining characteristics, as beating, amplitude oscillations of certain spectrum components, or use of reverberation.”100 Of course, one can clearly see a number of issues arising from this system. Perhaps the most serious issue is the learning curve; many analytical methods certainly require time to familiarize oneself with their processes and idioms, but the sheer volume of possible combinations here is daunting to say the least. Similarly, if the goal of this system is the “systematic, straightforward means for the concise identification and characterization of sounds,”101 it is hard to imagine that any two people might notate a given sound the same way, perhaps defeating the purpose of the system in the first place. Fennelly also makes clear that there is no room for error or interpretation, stating that “Any laxity in notation that might lead to misrepresentation of events and erroneous evaluations must be avoided.”102

While one might appreciate the goal of a comprehensive sonic language that analysts can use to engage in discourse with one another, the prospect that everyone can agree on a sonic label such as “ILD^c_m RFB” is perhaps wishful thinking. Furthermore, Fennelly does not seem to appreciate that this type of sonic transcription is only possible through the cognitive framework of the listener. No matter how much one might want to remove the subjectivity of the listener from a systematic language such as this, the act of listening is a mandatory step. The idea of creating an objectively “correct” representation of a sound through a subjective interpretation seems to undercut the entire premise.

99 Brian Fennelly, “A Descriptive Language for the Analysis of Electronic Music,” Perspectives of New Music 6, no. 1 (1967): 82.
100 Fennelly, 82.
101 Fennelly, 80.
102 Fennelly, 94.
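The scale of this agreement problem is easy to appreciate with a toy model. The Python sketch below is hypothetical throughout – the slot names and mini-vocabularies are invented for illustration and do not reproduce Fennelly’s actual taxonomy – but it shows how quickly even a small slotted label system multiplies into a space in which two listeners are unlikely to land on the same string.

from dataclasses import dataclass, field
from itertools import product
from typing import List

@dataclass
class SoundLabel:
    timbre: str                                             # the X term
    envelope: str                                           # the Y term
    enhancements: List[str] = field(default_factory=list)   # E terms

    def render(self) -> str:
        """Concatenate the slots into a Fennelly-style label string."""
        return self.timbre + self.envelope + "".join(self.enhancements)

# Invented mini-vocabularies, a few qualified values per slot:
timbres = ["Xa", "Xb", "Xc", "Xd"]
envelopes = ["Yp", "Yq", "Yr"]
enhancement_sets = [[], ["B"], ["R"], ["B", "R"]]

labels = {SoundLabel(x, y, list(e)).render()
          for x, y, e in product(timbres, envelopes, enhancement_sets)}
print(len(labels))  # 48 distinct labels from even this tiny vocabulary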

Évelyne Gayou clearly articulates this point in her article about the transcription of electronic music, which is essentially the same task that Fennelly undertook (in all but name) 40 years prior. She argues that the code she is developing for transcription “must allow for plurality, insofar as the perception of relevant sonic events is likewise interpreted in different ways from one listener to another and even by the same listener during a single hearing. And, as the listening experience is fluid, this mobility permits, and even demands, invention.”103 This seems to stand in direct contrast to Fennelly in that she is seeking a transcriptive methodology that is oriented to the experience of sound rather than to the sound itself.

Gayou further states that “transcriptions can have several functions, such as being used as a working draft, a basis for analysis, or even an object of analysis, a guide to interpretation, a pedagogic tool to help reveal the work to music lovers, and even provide a medium for working out creative ideas. It can also be used to memorize, and to preserve – like a score.”104 While transcriptions can indeed serve all of these functions, one key point is that the act of transcription is in itself already a part of the analysis. In other words, transcribing a sound or a piece of music requires interpretive decisions on the part of the listener; the idea that any transcription is objectively “correct” is flawed in that it fails to account for the intermediary between sound and notation. This is not to say that transcription should be avoided. On the contrary, the transcription of sonic events will be a key step in my analytical methodology. We should simply be mindful that any transcription is by default at least partly subjective due to the necessary presence of an interpreter.

103 Évelyne Gayou, “Analysing and Transcribing Electroacoustic Music: The Experience of the Portraits Polychromes of GRM,” Organised Sound 11, no. 2 (August 2006): 128.
104 Gayou, 125.

Pierre Couprie was one among many who sought a similarly “accurate” and descriptive methodology for electronic music analysis, but he sought it through graphic representation rather than alphanumeric notation like Fennelly’s. His reasoning for graphical representation seems to be grounded in technological development, arguing that “Nowadays, analysis can be found on media such as the CD-ROM or the Internet, and the simultaneous combination of sound, graphics and texts is very common. In this context, graphical representation seems to constitute a real tool for analysis and for the publication of electroacoustic music: henceforth, analysis and representation will be inseparable.”105 Though Couprie is likely correct about the proliferation of graphic representations of electroacoustic music, I would take issue with the idea that analysis and representation are necessarily becoming intertwined. Certainly a representation of the sonic events in a piece of electronic music can aid the analyst in grasping the total sounding body of a work as well as its constituent syntactic units, but it is not difficult to imagine analyses of electronic works that do not rely on these representations. I am not advocating against the use of representational transcriptions; rather, I am simply saying that they are not necessary to produce a satisfying analysis.

105 Pierre Couprie, “Graphical Representation: An Analytical and Publication Tool for Electroacoustic Music,” Organised Sound 9, no. 1 (April 2004): 109.

Couprie also points out a key inverse relationship present in the transcription of electronic music: “If symbolic representations enable significant analytical accuracy, they also entail a highly reduced potential public, as the complex decoding of each symbol is principally of interest to specialists.”106 To put it more simply, the more complex a notational or representational system becomes (even if the complexity is necessary for the sake of “accuracy”), the less accessible it is. Thus, it stands to reason that any analytical methodology designed for broad use among musicians and theorists, not just specialists, should allow for a certain amount of flexibility. I have already argued against the concept of a “correct” representation of an event mediated through human perception; the lack of approachability is simply one more reason that complex representational schemes are ineffective.

Cogan’s 1984 analysis of the opening to Milton Babbitt’s Ensembles for Synthesizer shows some of the analytical dangers of using a methodology reliant on “big data.” As noted before, Cogan uses spectrograms as the core of his analytical process, leading to observations about the passage that are inherently limited by what a spectrogram can actually show: time and pitch. Thus, he asserts that “the introduction is dominated by four separate sonic pillars” and that “taken together, these four introductory sonic pillars open up virtually the entire human audible range.” Furthermore, he states that “the entire passage reveals a sonic shape of accumulating contrasts in which each uniquely crafted and conceived sonority and each sonic transformation plays a contributing role” in filling in this sonic space.107

This analysis seems to make sense if one looks only at the spectrograms and reports on what is seen. Indeed, we can see the four marked moments of stacked frequencies that Cogan describes, and likewise we can see that if the spectral energy of each is combined, the entire range of human hearing is essentially covered. The problem, however, is that there is so much more to this passage that Cogan necessarily misses, because the tools he is using to get “all of the data” do not capture the experience of listening.

106 Couprie, 110.
107 Cogan, New Images of Musical Sound, 104–8.

As Michael Clarke points out, spectrum photos like those that Cogan uses

cannot be read in detail by the human eye – it is difficult to see harmonic relationships from the page and impossible to get more than a general sense of the color/grey-scale gradations relating to amplitude. When it comes to the very spectromorphological details that are so important for electroacoustic music, such as subtle changes in the timbral envelope of a sound, a printed sonogram is often of little use…Even distinctions that may be very clear to the human ear may not be obvious in a sonogram.108

This is not to say that spectrum photos are useless; certainly, they can provide the analyst with a tool for visualizing sonic events. However, we should remember that all a spectrum photo is capable of showing is pitch and time. For instance, though spectrograms might show “everything,” they do not help the listener group data perceptually or segment sonic streams from a vertical sonority.109 In the case of Cogan’s analysis, he makes some interesting observations about the introduction of the piece from the standpoint of pitch, but he does not account for the other musical elements at play that may be obvious from the standpoint of listening but not from looking.

Many publications that bear the word “analysis” somewhere in the subject or title seem to stop short of providing a methodology or an actual analysis. There is a clear focus on tool building but not on the implementation and execution of critical analysis using those tools.

These object-oriented analyses, as I have called them, can play a vital role in meaningful and insightful musical discourse, but only by integrating the subjective with the objective. Just as a sound needs a listener, a piece of music needs an interpreter. While technical analyses and representations of sonic content might prove useful, we should remember that they are fundamentally tools to be used throughout the process of analysis; the tools themselves are not suitable substitutes for critical interpretation.

108 Michael Clarke, “Analysing Electroacoustic Music: An Interactive Aural Approach,” Music Analysis 31, no. 3 (October 2012): 352–53.
109 Clarke, 352–53.

Combined Analyses

Though many attempts at the analysis of electronic music tend to fall toward either extreme on the subjective-objective spectrum, some prominent theorists and analysts have proposed systems that employ elements of both subjective experience and objective systematic methodology.

Perhaps one of the most well-known theories within this category is Denis Smalley’s system of spectromorphology. Put simply, spectromorphology is a set of tools that allow the user to describe and analyze a listening experience. The term is built from two constituent parts: spectro-, referring to the spectra of sounds, and -morphology, the ways in which these sounds are changed and shaped across a temporal span.110 Smalley intends for the spectromorphological approach to be apprehended by all listeners, even those who are not experts within the field of electronic music. He further implores the reader to “try to ignore the electroacoustic and computer technology used in the music’s making,” because he feels that this mode of listening is fundamentally different from the type of listening we do when we relate an experienced sound to its physical cause.111 In other words, Smalley is advocating for the treatment of electronic music as sound in its own right, not as sound for the purposes of transmitting semiotic information. (We will return to the ways in which we might listen to sound in the next chapter.)

110 Smalley, “Spectromorphology: Explaining Sound Shapes,” 107.
111 Smalley, 108–9.

A critical summary of Smalley’s entire spectromorphological theory would take far more space than the current discussion allows. However, we can generalize by stating that because Smalley’s theory presupposes a listening situation in which sound is listened to for its own sonic merits (not for its extra-musical meaning), the descriptive tools that Smalley provides similarly focus on sound quality and process. Some examples of these tools include the concept of “gestural surrogacy” (a way to describe a sound’s relationship to a perceived physical initiating gesture), structural functions of sound based on spectral envelope and expectation, and descriptions of sonic motion, growth, texture, and behavior, as well as a complete descriptive/analytical system for sound spectra.112 What all of these have in common, of course, is that they bind together a listener’s experience of a sound or group of sounds with an objective methodology for description. Spectromorphology requires that listeners make critical assessments about what is being heard (as opposed to relying on technology to transcribe the sounding content of a piece), and then gives them the tools to describe it.

Spectromorphological concepts will be one of the key means by which sound objects and small-scale formal units are discussed and analyzed in this study. A fuller description of spectromorphology, as well as examples of its implementation and impact on musical analysis, can be found in chapter four.

As John Young points out, “Smalley’s writing is widely regarded as significant, but, paradoxically, there has been relatively little analytical work in electroacoustic music built directly on his terminology and also little critical development of some of the questions raised by the work.”113 Though Young may disagree, the reason for this seems obvious: Smalley’s system simply has too steep a learning curve. While certain elements of his theory are relatively simple to comprehend (gestural surrogacy, for instance), Smalley’s article is full of tables, charts, and terms that at best are confusing and at worst seem to contradict one another. The sheer amount of new terminology in Smalley’s article alone would likely take even an experienced listener many months, if not years, to implement and comprehend. Not only this, but the community as a whole would have to adopt the system for critical discourse to take place. This situation seems completely at odds with Smalley’s assertion that “Spectromorphological thinking is based on criteria which can potentially be apprehended by all listeners.”114 While all listeners have the capacity to hear what the spectromorphological system is describing, it may be incorrect to paint the system as one that is equally approachable by experts and novices alike. Thus, a preferable system may be one that retains Smalley’s descriptive power without the need for so much new terminology.

112 Smalley, 111–24.

Michael Clarke has proposed an analytical process he calls “interactive aural analysis.”

This process uses software to aid in the analysis, wherein the listener aurally segments the work into meaningful units and then arranges the segments into an “interactive paradigmatic chart.”115 This chart, while not clearly defined, appears to be a sort of linear sound board featuring the segments that the user has identified. While this “interactive approach” certainly marries experience and methodology, Clarke does not seem to have a clear idea about what the analytical results should be. He says that “the principal goal was to find an appropriate means of undertaking and presenting the analysis of a work that exists primarily as sound and in ways that could relate the technical and the analytical to aural experience,”116 but he does not really show how this happens. The results of his methodology, like many proposed analyses, seem transcriptive rather than critical or analytical.

113 John Young, “Sound in Structure: Applying Spectromorphological Concepts” (Electroacoustic Music Studies Network, 2005), https://www.dora.dmu.ac.uk/handle/2086/4756.
114 Smalley, “Spectromorphology: Explaining Sound Shapes,” 109.
115 Clarke, “Analysing Electroacoustic Music,” 357–59.

Results aside, Clarke’s proposed system is prohibitive for a large number of potential analysts. For example, he clearly states that some programming skill in Max/MSP (the visual programming language in which the software was written) is necessary.117 Considering that this language is known mostly to composers within the field, we again find the problem that the system is unwelcoming to a larger musical community that might wish to undertake the analysis of an electronic piece using this methodology. Ultimately, Clarke’s efforts to reconcile experiential listening with objective analysis are well-founded, but the methodology lacks a clearly defined analytic end goal.

Through the above summary of previous analytical methodologies, a clear theme emerges: the necessity of balancing accessibility with analytical power. It is inherently true that the more technical and demanding the analytical process, the smaller the population that will feel comfortable or adept using it. Likewise, by making the system too non-technical or ad hoc, one runs the risk of producing analyses that are similarly arbitrary and incapable of engaging in meaningful discourse with one another. This is not to say that one situation is objectively preferable to the other; however, the system I will propose strives to maintain a clear balance between general approachability and meaningful, critical analytical discourse.

116 Clarke, 375.
117 Clarke, 375.

CHAPTER 2

METHODOLOGICAL CONCERNS

The Challenge of Listening and Analysis

As I have discussed, my primary analytical goal is to incorporate elements of both the subjective experience of the listener and the objective observations of the analyst. One of the key issues that must be overcome, however, is creating an analytical result that is reflective of one’s personal listening experience and at the same time capable of being expressed and conveyed to others. As Brian Kane points out, “It is an understatement to say that listening is a challenging field to theorize, for there is no direct material artifact produced by listening. It is often extraordinarily challenging to convey to others what is being heard in some stretch of sound such that they can reproduce the intended experience.”118 While we may want to maintain as much of the subjective experience as possible in the resulting analysis, we must also attempt to create something with a certain amount of objectivity if we want it to be repeatable and comparable to the analyses of others.

The main task of this chapter will be to outline some of the theoretical and conceptual processes that underlie our hearing of sound and to examine the ways in which we might organize these sounds. First, we will discuss the listening situation in which the experiential process of electronic music typically takes place: the acousmatic situation. We will also explore the different modes of listening that one might engage while perceiving everyday sounds and music alike, and we will examine the process of utilizing these listening modes to form sound objects. Finally, I will propose a methodological framework that blends these subjective experiences with objective analysis.

118 Kane, Sound Unseen, 26.

The Acousmatic Situation

Nearly all of the music to be examined in this dissertation (and a large portion of the entire field of electronic music) is heard in what Pierre Schaeffer, a French composer and theorist of electronic music, called the acousmatic situation.119 Put most simply, the term “acousmatic” refers generally to the experience of hearing a sound whose source or cause is unseen.120 Thus, it is “the opposite of direct listening, which is the ‘natural’ situation where sound sources are present and visible.”121 In many forms of traditional music making, acousmatic listening is certainly not the norm. If we consider the usual modes of presentation for an orchestral concert, for instance, we quickly realize that traditional Western art music is typically presented both aurally and visually. In fact, visual cues can give the trained listener an astounding amount of predictive and evaluative information about a particular performance. A performer’s gestures, posture, expression, etc. all provide information about the nature of the sounds being heard and perhaps even their syntactical functions in the music.

While it may initially seem that the acousmatic listening situation is not something most people encounter in their everyday lives, a brief consideration reveals that it is actually quite prevalent in modern society. We can certainly point to earlier instances of concealed or hidden sounds and music prior to the 20th century, but the development and general availability of recording technologies in the 1940s and 50s made acousmatic sound a fact of modern life. Technologies like the radio, tape recorder, telephone, records, compact discs, etc. are all inherently linked to and heavily depend upon the acousmatic listening situation in order to function. However, even without considering technology, acousmatic listening still plays an important role in our everyday lives. For instance, imagine you are about to cross an intersection in your car when suddenly you hear the sound of an ambulance’s siren. Naturally, this causes you to come to an immediate stop and examine your surroundings to find the emergency vehicle so that you might give way. Of course, you come to realize that the siren was being played from the radio in your vehicle the whole time, and there never was any physical ambulance trying to get through. Kane states, “Although the acousmatic experience of sound still allows for the possibility of speculating or inferring causal sources, it bars direct access to visible, tactile and physically quantifiable assessments as a means to this end. The acousmatic experience reduces sounds to the field of hearing alone.”122 This is one important result of the acousmatic situation; we are forced to rely on our subjective experience of sounds and to interpret them entirely separately from visual stimuli.

119 The term “acousmatic” originates from the Greek akousmatikoi, disciples of Pythagoras who were said to listen to him speak from behind a concealed curtain.
120 Pierre Schaeffer, Treatise on Musical Objects: An Essay across Disciplines, trans. Christine North and John Dack (Oakland, California: University of California Press, 2017), 64.
121 Chion, Guide to Sound Objects, 11.

Because of this separation of auditory and visual perception, certain difficulties (and opportunities) arise for the musical analyst.

For the traditional musician and the acoustician, one important aspect of sound recognition is the identification of sound sources. When this takes place without the help of sight, musical conditioning is thrown into disarray. Often taken by surprise, sometimes uncertain, we discover that much of what we thought we could hear was in reality merely seen, and explained, by the context. This is why it is just about possible to confuse some sounds produced by instruments as different as strings and woodwind.123

122 Kane, “L’Objet Sonore Maintenant,” 17.
123 Schaeffer, Treatise on Musical Objects, 66.

Indeed, our eyes can easily play tricks on our ears, a phenomenon that people like Foley artists (who produce sound effects for film and television) exploit every day.124 While Schaeffer may be correct in that the acousmatic listening situation can throw our musical conditioning into a state of “disarray,” it also affords the analyst opportunities to theorize about the sonic experience.

Because we are unable to link the auditory and the visual in the acousmatic situation, we must resign ourselves to the fact that we rely entirely upon our subjective experience of the sound, and that this experience is, in fact, meaningful. As Kane states, “Hearing, whether imagined or real, presents us with indubitable evidence or data.”125 While different listeners will certainly hear varying things in each sound due to nuances in individual listening practices as well as the subtle variations in each person’s auditory faculties, it is important to realize that the listener’s perception of a sound is completely true and valid each time it is heard. In other words, the act of listening is the only way to experience a sound in a completely lossless sense.

It is critical for this entire project to understand the questions that the acousmatic listening situation seeks to ask and answer. Schaeffer himself cautions against conflating the acoustic phenomena with the acousmatic:

It would be a wrong use of this experiment if we submitted it to a Cartesian analysis by differentiating the “objective” – what is behind the curtain – from the “subjective” – the listener’s reaction to these stimuli. From this viewpoint it is the so-called objective factors that contain the references for the sought-after elucidation: frequencies, durations, amplitudes…; the curiosity aroused is an acoustic curiosity. Compared to this approach, the acousmatic is a reversal of the pathway. It asks parallel questions: the question now is not how subjective listening interprets or distorts “reality” or of studying reactions to stimuli; listening itself becomes the phenomenon under study. The concealment of causes is not the result of technical imperfection, nor is it an occasional variation: it becomes a prerequisite, a deliberate conditioning of the individual. Now the question: “What can I hear?...What exactly can you hear?” is turned back on him in the sense that he is being asked to describe not the external references of the sound he perceives but his perception itself.126

124 For instance, the sliding door sound in the original Star Trek series was nothing more than a recording of a piece of paper being slid in and out of an envelope. Because we see the visual cue at the same time we hear the auditory cue, we have no problem believing that the door is the source of the sound. In fact, it may even take effort to convince someone that the sound is not the sliding door.
125 Kane, Sound Unseen, 32.

Perhaps the most salient point to take away from this incredibly important excerpt is that the primary question we might seek to address when listening musically in an acousmatic environment is not “What sound is that?” Rather, we should be asking “What do I hear?” The investigation of acousmatic phenomena is concerned primarily with the sonic aesthetic of sound objects and emphatically not with their origins. It may be useful for us in analysis and discourse to use descriptive terms that might refer to a perceived sound origin (“the plucked guitar sound,” “the bird sound,” etc.), but this is not the goal. As Brian Kane points out,

For Schaeffer, the natural standpoint [direct listening] must be overcome if we are ever to uncover the grounding of our musical practices. By bracketing out the physically subsisting fact-world, by allowing us to make no judgements in relation to it, and by leaving us only with perceptual experience in itself, hearing can no longer be characterized as a subjective deformation in relation to external things…Listening becomes a sphere of investigation containing its own immanent logic, structure and objectivity.127

Thus, our aim when analyzing music in an acousmatic listening situation should be to describe and analyze the experience of hearing itself. The implications of the acousmatic experience result in a number of important sonic-perceptual processes that we will examine throughout the rest of this chapter.

Modes of Listening

Though we may imagine that listening is a singular process that is executed similarly in response to each auditory stimulus, listening actually involves a number of interlocking perceptual and cognitive processes. Schaeffer proposed a series of four “modes” (fonctions) of listening that correspond to the various ways we perceive and interpret incoming sounds, and each of these has a significant impact on the ways in which we might analyze electronic music, especially when considering it from a sonic perspective. Schaeffer labeled the modes from one to four, but this numbering does not denote a chronological sequence in which the modes are engaged. Rather, Schaeffer envisioned the four modes as a type of auditory-perceptual circuit, wherein the four modes are often engaged simultaneously and perception may switch from one mode to another freely.128 We will discuss each mode in turn, but it will be important to remember that there is no defined order to the listening process.

126 Schaeffer, Treatise on Musical Objects, 65.
127 Kane, “L’Objet Sonore Maintenant,” 17.

Mode 1: Listening (écouter)

Écouter is perhaps the most commonly engaged mode when listening in an acousmatic situation, and thus by default when listening to electronic music. The first listening mode, usually translated as “listening,” is the conceptual process of listening to a sound and, using the sonic characteristics as well as cultural identifiers of the sound, attempting to link it with a source cause or sonic generator. In other words, it is treating a sound as a signifier of its source in the semiotic sense.129 This happens all the time in our everyday lives without our even thinking about it. When we receive a phone call, our first task is usually to identify who is on the other end of the line. (At least this was true before the days of caller ID.) We listen to the voice, and we attempt to link it with its source so that we may identify whom we are speaking to. If this process takes too long, we may actually miss what is being said, emphasizing that we are truly not listening for the meaning of a sound, but rather the source. As Brian Kane puts it, écouter is “an information-gathering mode in which sounds are used as indices for objects and events in the world.”130

128 Chion, Guide to Sound Objects, 20.
129 Schaeffer, Treatise on Musical Objects, 82.

While mode one is certainly useful in our everyday lives, I will generally seek to avoid mode one listening when I am experiencing a work of electronic music. It may be an interesting mental exercise to ask the question “where does that sound come from?” when listening to electronic music, but it will rarely be of musical value. Even if we could know the source cause of a sound (which in many cases, we cannot), this knowledge tells us nothing about the sound itself. Just because we might know that a sound in a work of electronic music is a sampled bird call, for instance, this does not help us grasp its sonic characteristics. Rather, we should focus on the sound for its own sake. We will revisit this concept later in this chapter.

Mode 2: Perceiving (ouïr)

Whereas mode one might be thought of as an active investigation into an auditory stimulus, mode two is the involuntary perception of a sound. Michel Chion says that mode two is “perceiving by ear, being struck by sounds, the crudest, most elementary level of perception…we ‘hear,’ passively, lots of things which we are not trying to listen to or understand.”131 This mode is the pure bodily function of hearing. We do not necessarily choose what sounds we hear, nor do we take an active role in the listening process. Rather, we simply experience undifferentiated auditory stimuli in the world around us. For example, we may be walking down the sidewalk when suddenly a car backfires as it passes by. We cannot help but be immediately struck by this sudden auditory intrusion, and at that moment we are involuntarily drawn to it. We will most certainly be drawn into mode one hearing and try to locate the source and location of the sound, but for that to occur we must perceive the sound through mode two in the first place. (This is a good illustration of movement through the circuit created by the four modes.)

130 Kane, Sound Unseen, 27.
131 Chion, Guide to Sound Objects, 20.

It may seem a somewhat pedantic task to establish that mode two is necessary for the analysis of electronic music, so long as we accept the proposition that a listener is needed to act as an intermediary between sound and analysis. Simply put, you have to be able to hear a piece of music to analyze it. This is especially true for electronic music, a genre that so very often resists effective transcription. Even if we could grant that non-auditory transcriptive tools like spectrograms could act as an effective stand-in for the auditory experience of a sound or of a piece of music, the removal of mode two creates a break in the free motion throughout the four-mode circuit. If we cannot perceive the primordial essence of a sound through mode two (what Schaeffer calls the fond sonore), we miss a necessary step for engaging further modes of listening.

Modes one and two are differentiated by Schaeffer as objective and subjective listening situations, respectively. At the base of these terms is the idea that ouïr focuses on the individual who is perceiving a sound (the subject), whereas écouter focuses on the sound itself that is being perceived (the object). In many ways this is analogous to the subject-oriented and object-oriented analyses that I outlined in the previous chapter, which focus respectively on the experience of the listener and on the objective “reality” of the phenomenon being examined. Just as we have sought to balance these two types of analysis in a hybrid approach, we should seek to engage both subjective and objective modes of listening. However, mode four (the other mode of objective listening) will prove far more useful in the analysis of electronic music than mode one. Furthermore, mode three (the other subjective listening mode) is perhaps the most important of all.

Mode 3: Hearing (entendre)

The third mode of listening is “the mode of listening to a sound’s morphological attributes without reference to its spatial location, source, or cause; we attend to sounds as such, not to their associated significations or indices.”132 Though “hearing” (entendre) and “listening” (écouter) certainly have similar definitions in everyday language, it is extremely important to differentiate the two. The clearest way to distinguish them is to examine their perceptual processes and goals. Écouter, as discussed above, is primarily concerned with information gathering so that a sound’s source cause can be determined. On the other hand, entendre is not concerned at all with a sound’s source; rather, a listener operating under mode three is occupied with making subjective descriptions and observations about the sound itself. An anecdote from Schaeffer will help to make this clear.

Different listeners gathered round a tape recorder are listening to the same sound object. They do not, however, all hear the same thing; they do not choose and evaluate in the same way, and insofar as their mode of listening inclines them toward different aspects of the sound, it gives rise to different descriptions of the object. These descriptions vary, as does the hearing, according to the previous experience and the interests of each person. Nevertheless, the single sound object, which makes possible these many descriptions of it, persists in the form of a halo of perceptions, as it were, and the explicit descriptions implicitly refer back to it.133

Thus, we can see the subjective nature of mode three; we are concerned with describing our own personal experience of the sound as opposed to determining the objective “reality” of the sound’s source cause, the charge of mode one.

Clearly, mode three is so far the most relevant mode for analyzing electronic music (save for the less-than-noteworthy necessity of hearing the music in the first place through the second mode). As I have stated, one of my chief concerns when examining electronic music is to engage with the subjective experience of the sonic characteristics of a piece, a process that falls squarely under the purview of entendre. Furthermore, the third mode is in itself inherently analytical, as it requires the listener to choose sounds and aspects of those sounds to focus on in order to make subjective, descriptive observations. As Schaeffer mentioned above, it is highly unlikely that any two listeners will form the exact same observations about a given sound, but their unique individual descriptions may give rise to competing and equally valid analyses when manifested through mode three.

132 Kane, Sound Unseen, 28.
133 Schaeffer, Treatise on Musical Objects, 82–83.

Mode 4: Understanding (comprendre)

Michel Chion succinctly defines the fourth mode, comprendre, as “grasping a meaning, values, by treating the sound as a sign, referring to this meaning through a language, a code…”134 While in many ways this seems similar to écouter, the key difference is that comprendre is primarily concerned with the communicative/informative function of a sound, whereas écouter seeks to determine nothing more than an auditory source cause; écouter does not care what information the sound carries. To return to the ambulance analogy, it is the difference between “I hear an ambulance” (écouter) and “I need to pull over to the side of the road” (comprendre). Kane views comprendre more broadly, stating that,

Comprendre extends beyond linguistic utterances to systems like music that employ quasi-linguistic auditory signs. Much of what gets taught in elementary harmony classes institutes this kind of listening, showing students how to compose, evaluate, and understand a well-formed tonal phrase, one that demonstrates the requisite musical grammar, proper use of musical topoi, or correctly reproduces a given musical style.135

134 Chion, Guide to Sound Objects, 20.
135 Kane, Sound Unseen, 27.

In other words, comprendre extends beyond the question of linguistic definition and into the realm of syntax and function within a system. In this case, since the signification system in question is music, mode four engages with the structural and syntactical implications of musical sounds. For example, when we listen for a cadence, we are engaging comprendre.

Thus, it is obvious that the fourth mode of listening is inherently valuable to our musical analytical goals. We will, of course, seek to describe our subjective experiences of sounds and pieces of electronic music (entendre) throughout this dissertation, but we will also need to listen through the fourth mode in order to determine the syntax and function of the sound objects we are hearing. Though we can speak of an “objective reality” of a sound object (the fact that it is a measurable, physical entity), we should not assume that there is an objectively “correct” syntactical or functional property of any given sound. Any meaning generated through comprendre in regard to a specific sound object is entirely dependent on both the context in which it is heard and the cultural experience of the listener. Each listener brings his or her own specific listening background to the experience; therefore, each listener’s subjective hearing of a sound is, for that listener, correct. Certainly, we can compare hearings and analyses with one another, but positivistic analytic goals are at odds both with the methodology and with the perceptual and cognitive process of listening in the first place.

Just as the four modes can be divided between the subjective and the objective, Schaeffer also separates them into the concrete (modes one and two) and the abstract (modes three and four).

For both qualified listening at the subjective level and the values and knowledge that emerge collectively, the whole effort in 3 and 4 [abstract listening] is toward stripping down and consists in retaining only qualities of the object that enable it to be linked with others or to be referred to signifying systems. In 1 and 2 [concrete listening], on the contrary, whether we are dealing with all the perceptual possibilities contained in the sound object or all the causal references contained in the event, listening focuses on a concrete given, which as such is inexhaustible, although specific.136

We can see that generally the abstract modes of listening will prove to be the most fruitful for the analysis of electronic music (and music in general). They allow the listener both to describe a sound object subjectively (entendre) and to analyze that sound object’s syntactical function within a signification system, to define whatever extramusical “meaning” it may have (comprendre). We will thus concern ourselves primarily with these abstract modes of listening throughout this dissertation, asking and answering questions about the sonic nature of sound objects themselves and their function within the sounding field of a piece of music, and we will avoid questions about the origin or source cause of a particular sound object.137

It is apparent that the objective/subjective and concrete/abstract dualities are thus present in all listening activities. Schaeffer states that, “in every mode of listening there is, on the one hand, a confrontation between an individual, who is receptive within certain limits, and an objective reality; on the other hand, abstract value judgments and logical descriptions detach themselves from the concrete given, which tends to organize itself around these without ever being reduced to them.”138 In other words, all modes of listening are in constant give-and-take with both their subjective/objective and abstract/concrete pairing. Therefore, although our primary concern will be an engagement with the phenomenological experience of listening to sound, we cannot help but simultaneously engage the objective “reality” of that sound.

136 Schaeffer, Treatise on Musical Objects, 85–86.
137 It may be convenient at times for the sake of discussion to refer to sounds by their perceived source (e.g. the “plucked string” sound), but this will in no way play a role in the resulting analysis.
138 Schaeffer, Treatise on Musical Objects, 86.

Before continuing, let us briefly summarize the four modes of listening, keeping in mind that they function as a type of listening circuit (not an ordered process), and thus they are often all engaged simultaneously. Mode 1, écouter (objective/concrete), is only concerned with identifying the physical source cause of an auditory phenomenon. Ouïr (subjective/concrete), the second mode, is the passive act of hearing and perceiving sound. The third mode, entendre (subjective/abstract), is the listening space wherein we make descriptive judgments about the qualities of a sound on its own merits. We do not concern ourselves with its source or its function, but rather we listen to it as it is. Finally, comprendre (objective/abstract) is the mode of listening that prioritizes extra-sonic meaning and syntax, communicating both function and information.

We should again be reminded of the simultaneous existence of these modes in the acousmatic listening experience. As Brian Kane points out, “all modes are available within the acousmatic situation. The acousmatic situation is not a constraint on modes of listening; it is a way of bringing those modes into focus.”139 To understand the simultaneous nature of these modes, consider a possible experience you might have when listening to the end of a piece of music. You hear (ouïr) music coming from a piano (écouter). Suddenly, you understand that you have reached a cadence (comprendre), and you hear this moment cascade from the lowest, most resonant notes of the instrument up to the highest, most strident strings, utilizing the full sounding breadth of the instrument (entendre). Of course, all of this happens at exactly the same time, and in no specific order. We can also see that some of these statements have much more value from an analytical standpoint than others. The fact that we might have heard music is not interesting, nor does the choice of instrument tell us anything about the music itself. However, the functional information gained through fourth-mode listening and the qualitative information gained through the descriptive nature of mode three provide us with potentially valuable musical and sonic insights that can be used in the process of analysis.

139 Kane, Sound Unseen, 30.

The Acousmatic Reduction and the Formation of Sound Objects

To this point, we have somewhat loosely thrown around the term “sound object” when discussing auditory phenomena. However, before we continue into a discussion of analytical methodology, we should briefly discuss the Schaefferian conception of the sound object as well as the phenomenological and conceptual processes that underlie its formation. While one may or may not be conscious of these processes, they may have implications for analysis. The ways in which we conceive of sound objects and their transformations might easily play central roles in our analyses of particular pieces, and thus we should pay special attention to them. An exhaustive discussion of the nature of the sound object could easily fill an entire volume on its own (and has done so). What I will provide here is a short overview.

We might begin, much as Schaeffer does, by suggesting an opposition between the object and the subject. If the “sound subject,” to use that term, is the receiver of the action, the sound object is clearly that sonic perception which is received. The issue, however, is determining what constitutes the body of the object in the first place. In other words, how do we determine the identity of an individual sound object? How do we break down the experience of hearing, a completely continuous, non-discrete process, into individual objects?

Schaeffer argues that a sound object emerges through a process called “reduced listening” (écoute réduite), which is the practice of listening to a sound for its own sake by focusing on its specific sonic qualities and bracketing out any extra-musical or extra-sonic information such as source cause or semiotic signification. Furthermore, as Chion points out, reduced listening “reverses the twofold curiosity about causes and meaning (which treats sound as an intermediary allowing us to pursue other objects) and turns it back on to the sound itself. In reduced listening, our listening intention targets the event which the sound object is in itself (and not to which it refers) and the values which it carries in itself (and not the ones it suggests).”140 In other words, the sound is not treated as referential, but instead it itself is treated as the object under investigation. This is clearly a reversal of what we might call “normal” listening practices (at least from a biological perspective), where the function of listening is that of information gathering. Because of this reversal from the norm, it is important to note that reduced listening is a willful action on the part of the listener; it is not a state that the listener naturally inhabits, but rather reduced listening is a conscious choice. It is through this paradigm reversal that the identity of the sound object emerges in the mind of the listener.

It is important to clarify that although the process of reduced listening shares many similarities with the third mode of listening, entendre, they are distinctly separate concepts. We can clearly understand them as related, of course; both require the listener to bracket out considerations of a sound’s extra-sonic information (like gesture and source cause), and both focus on the sound for its own sake. Reduced listening, however, is a complete conceptual process that starts from the process of auditory perception and results in the formation of a sound object in the mind of the listener. Entendre is certainly a part of this process (mode three will certainly need to be engaged), but it does not end with the formation of sound objects. Rather, the third mode typically results in making subjective descriptions about the sonic qualities of the sound being perceived. It is undeniably a subtle distinction, but it is one that we should be clear about.

140 Chion, Guide to Sound Objects, 30–31.

Still, we have yet to define the sound object. Schaeffer himself stated in the Treatise on Musical Objects (Traité des Objets Musicaux) that “We must confess that after fifteen years of research, we are scarcely able to [define a sound object].”141 We have said that it may emerge through the process of reduced listening, and thus we can say that reduced listening and the sound object are correlates of one another. In other words, “they define each other mutually and respectively as perceptual activity and object of perception.”142 Schaeffer similarly states that “The sound object is the coming together of an acoustic action and a listening intention.”143 This, however, only tells us how we form the sound object; it does not tell us what it is.

Let us begin from Chion’s definition: “a sound unit perceived in its material, its particular texture, its own qualities and perceptual dimensions. On the other hand, it is a perception of a totality which remains identical through different hearings; an organized unit which can be compared to a ‘gestalt’ in the psychology of form.”144 While the first part of Chion’s definition clearly references reduced listening, the process from which the sound object emerges, the latter half adds an additional qualifier: sound objects are inherently fixed conceptions. This is not to say that sound objects cannot be transformed or manipulated, but rather that the process of transformation turns the original sound object into a new one. Furthermore, Chion’s conception of the sound object as a gestalt-like unit carries significant implications for analysis. We may wish to consider the sound object to be the fundamental perceptual unit when discussing auditory experience. I would also add that a sound object should clearly conform to Chion’s temporal window of mental totalization (discussed in chapter one). A sound object must be contained within a temporal span small enough to allow for the perception of a singular sonic entity (as opposed to a sequence of entities).

141 Schaeffer, Treatise on Musical Objects, 205.
142 Chion, Guide to Sound Objects, 31.
143 Schaeffer, Treatise on Musical Objects, 213.
144 Chion, Guide to Sound Objects, 32.

Before I propose a definition of my own, we should briefly examine a short list of things that sound objects emphatically are not. A great amount of confusion surrounds the concept of the sound object, and rightly so; it is, in many ways, nebulous and difficult to grasp. First, the sound object is clearly not the physical sound source of an auditory perception. If this were the case, we would no longer be experiencing a sound through reduced listening because we would have failed to bracket out extra-sonic information. A sound object is also not the measurable, physical properties that cause the auditory perception in the listener. Though we can observe the conditions that create the perception, these conditions are not emblematic of the actual experience. Similarly, the sound object is also not the physical media onto which a sound might be recorded (like a record, tape, etc.). Finally, we might add that the sound object is not “the mood of the listening individual and is not to be handed over to subjectivity. It remains identical across various acts of listening…”145 In other words, the Schaefferian concept of the sound object posits that there is an objective reality to its existence; though it requires individual experience in order to be comprehended, the identity of the sound object itself is not subject to personal listening.

I would also add another (possibly controversial) item to the list of what the sound object is not: it is not the audible sound. A sound object may be signified or represented by audible sound, but it is a fundamentally conceptual construction. As evidence for this, I would ask the reader to imagine any sound, whether you have heard it before or not, real or imagined. The fact that we are able to conceive of sounds without requiring the actual auditory phenomena that produce them is direct evidence that the sound object is inherently conceptual and not a sound. While listening to sound is necessary for the formation of the conception of the sound object, it is not the object. Much in the same way, we might imagine that a physical tree is different from the mental construct of “tree;” the former is a referential sign of the latter. A sound can certainly reference the sound object in a semiotic sense, but the auditory perception is not the same as the sound object.

145 Chion, Sound, 171–72; Chion, Guide to Sound Objects, 32–33.

Thus, I propose a succinct yet comprehensive definition of the sound object: a fixed sonic gestalt arrived at through the reduced listening of an auditory perception. Inevitably, as we discuss sound objects and their transformations in our analyses, we will necessarily have to refer to sound objects in relation to their auditory perceptions. For obvious reasons, it is difficult to discuss purely mental constructs without referencing them. Similarly, it will be necessary to listen to the sound object numerous times to derive a meaningful sonic identity (very much in line with Husserl’s conception of phenomenological reduction as discussed in chapter one). By re-auditioning the sound object through multiple perceptions, we are able to understand the essence of what a specific sound object “is” in terms of cognition and perception.

Sound objects will play an important role in analyzing electronic music, especially when examining small-scale structures and the ways in which sound objects are transformed and used to create structural coherence. There will certainly be opportunities to engage analytically with some of the works of other theorists, especially with the morphological half of Denis Smalley’s theory of spectromorphology. Though we will be defining the spectro- portion of the theory (the sound object to be examined), we will need to understand the ways that sound objects might be transformed through time. Sound objects will also provide us with a chance to theorize about syntax and function (where applicable) in electronic music, as they constitute the smallest conceptual units of sound that we will work with.

Analytical Methodology

We have now discussed at some length the Schaefferian conceptions of the sound object and of listening in the acousmatic situation. However, we have yet to investigate how these concepts inform and influence the process of analyzing electronic music. The remainder of this chapter will be dedicated to an explanation of some of the various analytical methodologies I will utilize in my analyses of acousmatic electronic works throughout the rest of this dissertation. It should be noted that many of these processes are independent of one another; they could easily be undertaken separately depending upon the particular piece of music being examined, and thus each analytical process might not be utilized for every piece. Similarly, there is room in the system for the analyst to adapt it to his or her needs as called for by the work in question.

The first thing we will do when analyzing electronic music, as with any analysis, is to listen to the piece in its entirety. The function of this initial listening (as well as any subsequent initial listenings) is to get a broad, general sense of the piece. At this stage, we will not be actively analyzing; we may not be able to help forming conceptions of what the analysis might look like, but above all we will seek to engage the work in a fashion as true to the concept of reduced listening as possible. In other words, we will listen to the piece for its own qualities without taking into account any sort of formal or syntactical systems at play. At this point in the analysis we may want to make some descriptive notes based on our hearing and experience with the work, but we will make analytical judgments later. If we were to permit analytical concerns to enter into our listenings, they could easily (and likely would) color the judgments that we are to make later. Though what I am proposing is a hybrid form of analysis, in the sense that it fuses subjective experience and analytical observation, it is important not to let either side encroach on the territory of the other. The phenomenological experience is meant to provide the raw data that we will be working with, and thus we should allow the subjective experience to happen without being tainted by the expectation of objective confirmation at a later stage in the process.

Once the initial listenings have been completed, and a general sense of the piece has been gained, we will undertake a series of self-reflective listenings. By self-reflective, I mean that we will listen to the piece while paying attention to our own experience of the work and the ways in which it directs our listening. For example, we might notice that we hear certain moments of a work as particularly marked for consciousness, or similarly that particular stretches of the piece require more effort to actively focus our listening on than others. It is not important at this stage of the process to ask or answer the question as to why our experience of the piece is such, but rather just to note that it is. Again, it is crucial not to inject the objective side of our hybrid process into the subjective. So, for example, we might say, “At 3:30 in the piece, there is a sudden change in the quality of the piece,” or “that moment really draws our attention.” These statements reflect our experience in listening to the piece. However, a statement like, “At 3:30 in the piece, there is a clear formal articulation” presupposes the function of that moment. At this point in the analytical process we are only reflecting on the phenomenological experience of listening, therefore we should still avoid imposing analytical evaluations that are under the purview of later steps.

Self-reflective listenings would also be an appropriate time to begin making note of and considering important sound objects that are present within the work. While at this point the listener will likely not have a clear conception of the fundamental sonic identity of any individual sound object, it is possible that he or she will nevertheless be able to identify a number of potential candidates. In other words, there may be strong indications of the existence of prominent sound objects, but the identity of those sound objects is only revealed through a narrowly-focused reduced listening. Again, while it is expected that we acknowledge the existence of important sound objects at this point in the process, we should still refrain from speculating about the function of said sound objects. Though at later points in the analysis we will certainly attribute functional or syntactic status to some of them, the focus of self-reflective listening is to emphasize and investigate the experiential process of listening to the work.

Once the initial and self-reflective listenings have been completed, the next step is to identify the work’s salient sonic parameters that may have guided the listening experience in the previous steps. For instance, if we heard a striking moment that caught our attention, we might now ask what it was about that moment that drew us to it. Perhaps there was a sudden change in a specific timbral quality, maybe the texture became much more dense or sparse, there may have been a sudden change in register, etc. Of course, there is no comprehensive list of what sonic parameters are possible. We might identify some convenient starting points, like Landy and Weale’s “Something to Hold on to Factors,”146 or Jan LaRue’s SHMRG categories: Sound, Harmony, Melody, Rhythm, and Growth, which are umbrella groups for many different identifiable sonic parameters.147 Ultimately, it is up to each analyst to determine what sonic parameters he or she is able to hear and track. Some listeners may simply not be attuned to listening for harmonic density, for instance, and it will thus never be something that draws their attention when listening to a work. This is an acceptable and expected situation that arises from an analytical methodology rooted in subjective experience. I will personally use (and suggest that others use) colloquial/informal terms when referring to audible sonic parameters or characteristics. Because we necessarily speak of all musical and auditory experience through embodied metaphor in the first place, it makes sense to speak of sounds in terms of their relative brightness and dimness, for example, rather than to speak about their harmonic spectra in technical terms. We intrinsically understand descriptions of the subjective experience of hearing much more easily than we do the technical.

146 Landy, “The ‘Something to Hold on to Factor’”; Robert Weale, “The Intention/Reception Project: Investigating the Relationship Between Composer Intention and Listener Response in Electroacoustic Compositions” (De Montfort University, 2005); Weale, “Discovering How Accessible Electroacoustic Music Can Be.”
147 Jan LaRue, Guidelines for Style Analysis, 2nd ed. (Sterling Heights, MI: Harmonie Park Press, 2011).

Once we have identified any sonic parameters that appear important, salient, or functional in the work being examined, we will then undertake a series of parametric listenings. A parametric listening involves listening to the piece while only paying attention to one specific sonic parameter that has been identified in the previous step. It is a sort of even further reduced listening procedure, mentally bracketing out not only physical or semantic concerns, but also any sonic qualities that we might call extra-parametric. If we are doing a parametric listening focused on harmonic density, for instance, that is the only sonic quality that we will be concerned with; we would not pay attention to any of the other salient parameters that we have identified, as they will each get their own series of parametric listenings. As the parametric listening is occurring, we will continually note the relative parametric intensity on a scale of one to five. So, in the previous example of harmonic density, a parametric intensity of one might be our perception of the least possible harmonic density within the confines of this piece, while five would be reserved for the most harmonically dense possibilities. The remaining middle three intensity values thus represent the exact perceptual middle and the values that lean toward one pole or the other. (It should be noted that the reversal of these poles will have no effect on the actual outcome of the analysis, so they could easily be swapped for one another.) As we hear changes in the parametric intensity of the particular parameter that we are listening to, we will plot the changes on a timeline that has a horizontal line for each value from one to five. Although it resembles a musical staff, the relative “highness” and “lowness” does not necessarily correspond to pitch; rather, it indexes relative proximity to the polar extremes of the sonic parameter. While it is certainly possible to adapt the system such that it has more discrete levels of parametric intensity (seven levels, nine levels, etc.), I find that five levels allows for accurate measurement (insofar as it agrees with my subjective hearing) without having to labor over which intensity value is really “correct.” In other words, it is quite simple to choose between a polar extreme, a lean toward one extreme, or the exact middle (the five intensity options). However, in a system with more discrete intensities, we may have to debate with ourselves about the perceived value. For example, whether a specific moment was really a value of five or six on a seven-point scale is much more difficult to decide. Thus, subjective interpretation becomes more difficult as the range of possible values increases.

We will also have to make a decision as to how many time intervals are on the graph and how compactly they are spaced, as there is only so much horizontal space that we can reasonably comprehend. The goal, after all, is to get a relatively comprehensive image of each sonic parameter that can be digested with little study. For example, if the piece being examined is only four minutes long, it might make sense to have a time marker every ten seconds. However, if the piece is not four minutes but perhaps 40 minutes long, tracking parametric intensities at intervals of ten seconds is probably not practical. Furthermore, it makes logical sense that longer and larger pieces might have larger structural segments; it would not at all be unusual for a large work to contain formal units that are many minutes in duration, while a very short work necessarily must contain smaller units. Ultimately, each choice strikes a different balance between resolution and practicality, and therefore it may be preferable to work with different time intervals depending upon the piece being examined.

Of course, there is no way to guarantee that any two analysts will arrive at exactly the same parametric graph, even if they have chosen to examine the same parameter with the same timeline intervals. Just as two listeners may grasp onto vastly different sonic parameters when listening to a piece, so too would we expect differences in their listenings to the same parameter. This is inherent in the subjective nature of the procedure. One would certainly expect any two listeners to hear movement in the same “direction” toward or away from one of the parametric poles, however. So, if Listener A perceives a relative intensity change from one to four, and Listener B hears the same change as moving from one to three, two to four, two to five, etc., they essentially agree on the fundamental behavior of the parameter. If two listeners were to disagree on the direction of the parameter, then at that point they might each wish to re-listen and evaluate their hearing again.

The completion of the graphs initiates the objective phase of our hybrid analytical procedure in earnest. While the process of listening, reflecting on our listening, choosing sonic parameters, and graphing those parameters engages with the subjective experiential side, once that is complete we will begin to make observations about the raw “data” we have collected. This data represents the listener’s subjective experience in listening to the music; therefore, any analysis of it necessarily integrates the subjective with the objective, the entire goal of the process. One of the most obvious uses for these graphs is to look at each of them individually to see if we can identify recurring trajectories of parametric intensity, what I will call a parameter-motive. We may find, for instance, that specific formal units of a piece are defined in relation to the initialization and completion of a specific parameter-motive, allowing us to make observations not only about the cohesion of formal segments but also about the function of specific sonic parameters. Furthermore, we could also speak about the form of a work in relation to each specific parameter based upon what we observe in the intensity data. Formal segments might be strongly defined on individual parameters based on their intensity values or parameter-motives, for example.
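One simple way to surface candidate parameter-motives is to search a parameter’s intensity readings for subsequences that recur. The following sketch is only an assumption about how that search might be mechanized; the window length and the data are illustrative.

```python
# Find recurring trajectories (candidate parameter-motives) within one
# parameter's intensity readings. Window length and data are hypothetical.
from collections import Counter

def recurring_motives(values, length=3):
    """Return subsequences of the given length that occur more than once."""
    windows = [tuple(values[i:i + length])
               for i in range(len(values) - length + 1)]
    return {motive: n for motive, n in Counter(windows).items() if n > 1}

intensities = [1, 2, 4, 1, 2, 4, 3, 3, 1, 2, 4]
print(recurring_motives(intensities))  # {(1, 2, 4): 3}
```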

We are also able to look beyond individual parametric intensity graphs and compare them with one another. One of the most important things that we will be able to see by comparing multiple graphs is the extent to which they agree in terms of a piece’s form and segmentation. If we see many different graphs that imply formal breaks or marked moments within the same temporal span, we might be able to speak with greater confidence about the form of the work, or at least support our formal readings. However, if we see intensity graphs that are considerably different from one another, this may lead us to suspect either that the form of the work is not so clear or else that the work being considered manifests its form in a different way. This is by no means to say that, even if one were to get the forms of all the parametric intensity graphs to agree with one another, this is the form of the work. Rather, the implication would be that the particular listener doing the analysis experiences the form of the work in this way. Remember that the raw data we begin the observation process with already has subjectivity priced into it, so any analysis that we do on that data is inherently interpretive.

Not only can we look at intensity graphs individually and comparatively, but it will also be useful to create composite graphs that take into account some or all of the individual graphs completed for a particular piece. Depending upon the piece, this could provide us with a variety of information. One thing we will track, for instance, is the composite change in parametric intensities throughout any given span of time, which could indicate moments that are likely points of formal segmentation or that at the very least are strongly marked for consciousness. When we see multiple parameters that undergo sudden shifts in intensity levels all at the same time, this might lead us to conclude that such a point has occurred. Similarly, when we observe musical segments that are relatively static in the composite, this will usually indicate a cohesive unit or segment. Ultimately, what any composite analyses reveal about a piece is largely dependent on the piece itself and upon our experience in listening to it.
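As a sketch of how such a composite change measure might be computed, suppose (an assumption for illustration) that each parameter has been sampled at the same time points. Summing the absolute intensity changes across parameters then highlights moments where many parameters shift at once, the candidate points of segmentation described above.

```python
# Composite change: total absolute intensity change across all parameters at
# each time step. A spike suggests a candidate point of formal segmentation;
# a flat stretch suggests a cohesive unit. All readings are hypothetical.
parameters = {
    "brightness":       [1, 1, 2, 5, 5, 4],
    "harmonic_density": [2, 2, 2, 4, 4, 4],
    "onset_density":    [3, 3, 3, 1, 2, 2],
}

def composite_change(params):
    """Sum of absolute per-step changes across parameters."""
    lengths = {len(v) for v in params.values()}
    assert len(lengths) == 1, "all parameters must share the same timeline"
    n = lengths.pop()
    return [sum(abs(v[i + 1] - v[i]) for v in params.values())
            for i in range(n - 1)]

print(composite_change(parameters))  # [0, 1, 7, 1, 1] -- spike at step 3
```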

Once we have determined where possible formal segments might lie within a piece, we can examine not only how these sections function in terms of their parametric intensities but also how they behave in regard to smaller structural units within them. It will be especially useful to define and examine the use of sound objects as syntactical units and to observe the ways in which they are transformed. We will continue to observe this parametrically (at a lower level of structure than the parametric intensity graphs of the entire work), but at this stage we can also utilize sonic transformational descriptions as espoused by others, especially Denis Smalley’s spectromorphological system. Even more so than examining the entire work, analyzing smaller functional units will necessarily be a case-by-case process that depends upon the nature of the piece itself as well as how its formal plan is depicted through the parametric intensity graphs.

Some pieces may be based upon a single sound object, others may be based upon the transformation of a sound object, the interaction of multiple sound objects, etc. There is no clear “one size fits all” methodology that can be prescribed, specifically because the nature of the larger individual piece will help define the smaller syntactical units.

The rest of this dissertation will be devoted to analyzing and discussing a number of pieces of acousmatic electronic music using the methodology described above. I have specifically selected pieces that are easily available on recorded media and through musical streaming services, and I strongly encourage the reader to listen to each piece multiple times prior to reading my analyses. Because I cannot provide score examples or transcriptions of these pieces that would offer anything of musical value to the reader, this step is especially necessary. Furthermore, we should recall one more time that my analyses represent objective observations about a subjective listening experience. In no way do I profess that my analyses are “correct” for all listeners, but simply that they are accurate representations and analytical statements about the way I personally hear these pieces.

CHAPTER 3

SEGMENTATION AND FORMAL DESIGN

Electronic Music and Form

In this chapter, we will focus on the experience of large-scale formal design and the apprehension of formal segments within electronic music. While it is undoubtedly true that electronic music is difficult to describe in terms of traditional notions of form and structure (as is a lot of music written since the early 1900s), a parametric approach as I have proposed in the previous chapter provides one possible solution to the issue. In order to show how a parametric approach might be applied to a discussion of large-scale form, we will examine four works of acousmatic electronic music through this methodology: Artikulation (1958) by György Ligeti, Sud (1985) by Jean-Claude Risset, Stria (1977) by John Chowning, and Threads (1998) by Elainie Lillios. Each of these works presents its own unique challenge to traditional notions of form and structure which might be overcome by the use of parametric analysis. Before we discuss the formal designs and the experience of those forms as it relates to these four pieces in particular, however, we should first examine the breadth of formal archetypes that might be possible within the wider world of experimental electronic music itself.

Formal Archetypes

If the question is “What types of formal interpretations are possible in electronic music?”, a rather tongue-in-cheek answer might simply be “all of them.” I certainly do not mean to imply that there are no formal commonalities that might be discerned throughout the output of electronic composers, but rather that the breadth of possibilities is far greater than in earlier practice periods. Part of this phenomenon might simply be due to the fact that our conceptions of formal archetypes in regard to these earlier styles are much stronger and more canonized. In other words, when we hear a piece of music that we identify as “Classical” in style (whether or not it was actually composed during that period), we immediately conjure up a series of expected formal archetypes into which it might fall based upon our previous experience with classical music as well as the theoretical literature about that music. We may expect the work to be a binary form, a rondo, a sonata, etc., but it would be less likely that our immediate expectation would be moment form, for instance. Thus, part of the issue may be a lack of familiarity with electronic music, but another part may be that the eschewing of traditional notions of form and structure has left a cognitive hole in terms of expectation.

However, if we can accept that form is rooted not only in extramusical knowledge but also in our own experiences listening to music, we may be able to point out some recurring formal elements throughout electronic music. We will certainly not expect to see formal archetypes that are so clearly defined as in the music of the common-practice era. Likewise, it is doubtful that every piece to be examined will have rigid, clearly-defined formal boundaries, and often decisions about where salient formal boundaries lie may have to be made on a case-by-case basis. I believe we are not yet at the point where overly-broad generalizations about form and formal function in regard to electronic music are warranted or wise. Regardless, it is still possible to comment on some recurring formal aspects that one might encounter when analyzing acousmatic electronic music.

Moment Form in Electronic Music

Moment form, which was discussed in the first chapter, is one possible formal archetype that seems to recur in electronic music. In fact, it should probably come as no surprise that the piece Stockhausen used to describe moment form, Kontakte, was a piece utilizing electronics.

Recall that one of the most important elements necessary for a work to be perceived as an example of moment form is that the succession of moments is experienced as arbitrary. In other words, the perception of a work as a moment form has nothing to do with whether or not a composer intended individual moments to be arbitrary, but rather with whether the listener perceives them as such. The moment the listener perceives that one section causes the next (or that a section is a result of the previous one), we can no longer understand the work as being in moment form. Since electronic music and its paradigms are inherently unfamiliar to many listeners, it is not unreasonable to assume that cause-effect relationships are less easily heard in this listening situation, allowing a higher possibility that a moment form might be invoked. Conversely, a strong understanding of formal function and syntactical units within the musical system of a given work (whether tonal or not) might inherently lead to a decrease in the possibility of a work being heard in moment form.

Furthermore, we should be very careful not to invoke the concept of moment form as a sort of “catch-all” for difficult analytical situations that may arise. It may seem alluring, for the sake of convenience, to simply label any work whose formal design is not easily perceived or analyzed as a work in moment form. However, in doing so, we run the risk of whitewashing over whatever it is about the formal design of a work that may make it interesting or unique in the first place. Thus, we should invoke moment form only when truly appropriate and not because a work does not seem to fit into any other archetype. The goal of analyzing form should not simply be cataloging a great number of pieces and putting them in a drawer, but rather engaging with the music and describing the ways in which form and design interact with the listener’s experience.

The ABA’ Paradigm

Another type of formal design that can recur in electronic music is the “initial state, contrast, return” paradigm. This could be a simple ABA’ type design, or the intervening section between the initial state and the return to the initial state could be much longer and more intricate. (In this dissertation, the use of bolded capital letters such as A and B denotes formal sections.) Though this formal design may seem overly reductive, I believe that a great number of contemporary and electronic pieces could be effectively represented with this formal schema. While this type of analysis could again run the risk of becoming a catch-all solution (as I described for moment form), we might be able to devise a number of subcategories that model the general formal behavior of different types of pieces.

For example, I believe that perhaps the most intriguing part of this form is the middle section, and the way that this middle section is treated could give rise to a number of recurring sub-archetypes within the ABA’ paradigm as a whole. These sub-archetypes revolve around the way in which the B section stands in relation to the A sections that surround it in terms of similarity, contrast, duration, motivic development, etc., and of course also in regard to the parametric intensity relationships between the A and B sections. For example, an ABA’ form could feature a B section that strongly contrasts with the surrounding A sections, but this would be a different formal subtype than a B section that contrasts with the initial A section but features musical processes that create a smooth transition to the return of the opening material in A’.

Again, I do not mean here to propose an exhaustive catalog of possible B-section subtypes; rather, I am simply suggesting that the ABA’ formal archetype is deceptively broad (and formally important), depending especially upon how the B section is treated.

I would argue that the ABA’ paradigm may be an even more prevalent formal design in electronic music than in common-practice period music simply because nontonal composers are unable to utilize tonal constructs to signal formal function and rhetoric. This is especially important when signaling the close of a formal design. When we listen to a Classical movement, for instance, we understand the tonal rhetoric at the end of the piece and the way it functions to close off the form. However, in contemporary and electronic music, that closing rhetoric must be accomplished with something else since the tonal system is, in most cases, no longer in play. A return to the piece’s initial musical material is a clear formal convention that might signal a formal close. I am certainly not suggesting that all electronic music relies on this scheme in order to signal the end of a form, but simply that it is a convenient replacement for tonal closure that is also present in past musical convention. Again, we should be careful not to be too quick to conclude that a work exhibits the ABA’ paradigm, but if we perceive salient markers that cause us to perceive the piece as such, it might prove an analytically rewarding exercise to examine the piece in relation to this formal design.

Conventional Forms

I will also grant the possibility that many conventional forms that were prevalent during the period of common-practice tonality may still appear in electronic music. Many strains of modern music can be traced back to conventional musical practice, so it should not be overly surprising to find pieces that genuinely seem to be in binary, ternary, or rondo form, for example. Such forms might also suggest themselves simply because this is the music with which many of us are most familiar.

Furthermore, if we remember that we necessarily bring our own experiences into the analytical process, it is not unreasonable to imagine that we might want to hear the unfamiliar in relation to the familiar. Of course, in the absence of tonal structures, the markers that signify these types of conventional forms will necessarily be different. That should not stop us from making seemingly valid observations about the nature of a work’s form, however. For instance, if an analyst is able to show that a work consists of two large sections, with the second section initially presenting contrasting material followed by a return of the material from the first section, we should not avoid calling it something that we already know, such as rounded binary. “Rounded binary” may have tonal implications, but if we take the stance that nontonal pieces cannot have any conventional form simply because they lack the expected tonal structures, we are perhaps setting up an unfair standard. We should be careful that we are not mentally imposing a conventional form where it does not fit, but if we are able to find convincing reasons to understand the form of a contemporary piece in a traditional way, I do not believe it is necessary to avoid invoking a form that we hear simply because the musical aesthetic is different.

Ad-hoc Designs

Finally, we must accept the inevitability of formal designs that are ad-hoc in nature, especially if we desire to represent form as a manifestation of the listener’s experience with the music. There will surely be instances where forms are not clearly defined in terms of a generalized formal archetype or are ambiguous not only in regard to the overall formal design of a work but also as to where specific formal boundaries might be. In these instances, we may have to appeal to our own individual hearings of a specific work and attempt to describe the formal units that present themselves as well as the syntactic relationships between them. In these situations, it may be desirable to develop a unique formal vocabulary for an individual piece or to define discrete formal units based upon the experience of listening to a work. I have warned against imposing a conventional form on a piece where there may be none, but we should also be careful of being too quick to jump to an ad-hoc approach to musical form. One of the values in describing musical form is that we can compare the form of a piece to its perceived formal norms as well as to the formal designs of similar pieces. By continually approaching the formal analysis of a piece in an ad-hoc manner, it may become difficult to appreciably compare the formal structures of different pieces. Thus, we should be careful to strike a balance between avoiding the imposition of formal archetypes where they do not fit and producing formal analyses so consistently ad-hoc that the inherent value of analysis is diminished.

Segmentation

My approach to segmenting electronic music into discrete formal units will largely utilize two different types of graphs, the Parametric Intensity Graph and Value-Change Graph. Before we begin analyzing works of electronic music using these graphs, an explanation of their derivation is first necessary. Remember that the data in these graphs is not necessarily raw or empirical in the scientific sense, but rather the graphs attempt to model the listening experience of a given piece. Thus, the observations made about them as well as the data itself should be understood as based in an inherently subjective process.

Let us examine how these graphs might be derived and utilized in analytical practice.

Figure 3.1 shows what I call a Parametric Intensity Graph and a Value-Change (VC) Graph for the first minute of a fictional piece of music. These graphs could have been made in relation to any given parameter that the listener hears as salient, such as harmonic density, utilization of the stereo field, timbral effects, etc. The Parametric Intensity Graph (the top graph) shows the listener’s perception (in this case, my perception) of the intensity of a specific parameter over a given span. In this example, I perceived an initial intensity value of two for this parameter, an increase to an intensity of three around 0:15, a sudden drop to intensity one around 0:20, etc. (I find that an intensity range from 1-5 strikes a nice balance between usability and experiential fidelity. I will always use five intensity levels throughout this dissertation.) Thus, we can see intensity trajectories that could be grouped as units and understood as gestures, and we can also see potential breaks in the intensity of a given parameter that may provide insight as to how smaller formal units are constructed and perceived.

The Value-Change (VC) Graph, seen below the Parametric Intensity Graph, shows the absolute change in value over a given span of time. (In other words, it does not show the direction of the change in value, but rather it only shows the amount of the change.) The labeling system of the VC graphs involves two numbers. The first number tells us how large of a timespan we are considering, and the second tells us how often we are checking the aggregate change in value. In this example, we are utilizing a 15-5 VC graph, which means that we are looking at 15-second windows of change, and we are checking that change in value every five seconds. (Some sampling windows in this example are shown above the parametric intensity graph.)

Figure 3.1: Example of Parametric Intensity Graph and 15-5 Value-Change Graph

Thus, as we can see by the position of the sampling windows, they overlap each other when the second number of the VC graph label is smaller than the first, that is, when the sampling rate is smaller than the window size. (A 15-15 VC graph would be a 15-second sample every 15 seconds, eliminating any overlap.) This has a general smoothing effect on the value-change graph itself and helps to account for sudden changes in parametric intensity without always sending up analytical red flags. Remember that any point on the graph represents an absolute change in value for the 15 seconds leading up to that point. So, in our example, the value “1” at 0:15 means that from 0:00-0:15 (the 15-second window leading up to that point), there has been an absolute change of one in parametric intensity. Likewise, the value of “3” on the 15-5 VC graph at 0:40 means that from 0:25-0:40, there has been an absolute change in parametric intensity of three. I do not prescribe any rules for what window sizes and sampling rates are most effective in any given situation; depending upon the nature of the piece, different values for each might yield different results. I have found that 15-5 seems to give an accurate modeling of my own listening experience while balancing accuracy with readability and accessibility for the listener.
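To state this derivation concretely, the following minimal sketch in Python (my illustrative gloss, not a component of the methodology itself) computes a VC series under the assumption that the aggregate change within a window is the sum of the absolute intensity changes that occur inside it. The breakpoint data encodes only the first two changes of the fictional example above, so the sample at 0:15 comes out to one, as in Figure 3.1:

    # A hypothetical Value-Change computation: "changes" is a list of
    # (time_in_seconds, absolute_intensity_change) events.
    def vc_graph(changes, window, step, duration):
        """Return (sample_time, aggregate_change) pairs for a window-step VC graph."""
        return [(t, sum(d for when, d in changes if t - window < when <= t))
                for t in range(step, duration + 1, step)]

    # Intensity rises from 2 to 3 at 0:15 (|change| = 1), then drops to 1
    # at 0:20 (|change| = 2), following the fictional example's opening.
    events = [(15, 1), (20, 2)]
    print(vc_graph(events, window=15, step=5, duration=60))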

We will also utilize composites of these graphs, especially the value-change graphs. The native listening experience of any given piece of music is not simply focusing on one parameter, but rather it is experiencing all sonic parameters simultaneously. It is analytically useful for us to separate out one parameter at a time to study its behavior, but by making a composite of all salient parameters we create a model that is closer to the experience of hearing a work. There are certainly inherent problems with “adding” intensities across multiple parameters. While logistically it makes sense to say that a perceived change in intensity of one in two different parameters might equal a composite intensity change of two, the process of perception is not so black-and-white. For instance, would a composite intensity change of two across multiple parameters be perceived the same as an intensity change of two in a single parameter? The answer to this question might be “probably not,” but we will not be comparing single parametric intensities to composite parametric intensities. Rather, I believe that the composite graphs will show where different parameters are working together to effect a strong sense of change (and thereby formal coherence) in the mind of the listener.
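Under the additive assumption just described, the composite graph is simple to state: at each sample time, the composite value is the sum of the individual parameters’ VC values at that time. A brief sketch, with invented numbers purely for illustration:

    # Hypothetical per-parameter VC series sampled at the same time points.
    per_parameter_vc = {
        "overall_noise":    [1, 0, 2, 1, 0],
        "harmonic_density": [0, 1, 1, 0, 1],
        "tessitura":        [0, 0, 2, 1, 0],
        "brightness":       [1, 0, 1, 0, 0],
    }
    # The composite is the column-wise sum across all parameters.
    composite = [sum(column) for column in zip(*per_parameter_vc.values())]
    print(composite)  # -> [2, 1, 6, 2, 1]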

To this end, I propose that local maximum values in composite parametric intensity change may signal formal boundaries, while moments of little/no change or consistent change might signal formal units or segments. This may seem like a foreign concept, but in reality, we have utilized these sorts of descriptions of formal units and boundaries even in common-practice tonal music.148 The difference is that, in discussing traditional music, we have had the tonal system to rely upon and to serve as a marker of form and structure. Without the presence of this system, we are thus forced to use other methods of discerning formal boundaries and units. This concept of perceptual formal delineation strongly aligns with principles of phenomenological and cognitive grouping. For instance, consider Fred Lerdahl and Ray Jackendoff’s fourth Grouping Preference Rule as related to larger-level grouping: “Where the effects picked out by GPRs 2 [proximity] and 3 [change] are relatively more pronounced, a larger-level group boundary may be placed.”149 James Tenney also observes similar principles in the formation of units in gestalt psychology: “In a collection of sound-elements (or clangs), those which are similar (with respect to values of the same parameter) will tend to form clangs (or sequences), while relative dissimilarity will produce segregation – other factors being equal.”150 Thus, greater changes in individual parametric intensity as well as in composite parametric intensity may suggest the perception of a strong formal boundary. If we tend to group similar elements and segregate differing elements, it stands to reason that without some change in any sonic parameter, it would be difficult to hear a formal boundary. Even a change in sound object or sound material would necessitate a change in some sonic parameter, or else there would be no perceived change of sound object in the first place. Let us now consider some actual works of electronic music and examine them through the methodology I have proposed.

148 Consider the second theme group of Beethoven’s Waldstein Sonata, for example. There is, of course, the tonal marker that it is in a different key than the primary theme. However, we also typically associate other characteristics with this secondary theme, such as a change in dynamic, articulation, style, register, etc. We may not refer to these as “sonic parameters,” but the function is essentially the same. The changes in these salient characteristics help us to identify formal boundaries and functions.

149 Fred Lerdahl and Ray S. Jackendoff, A Generative Theory of Tonal Music (Cambridge, MA: MIT Press, 1996), 49.

150 Tenney, Meta+Hodos, 32.
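Stated computationally, the proposal above amounts to peak-picking on the composite value-change series. The following sketch (again my own gloss, with hypothetical data) flags strict local maxima as candidate formal boundaries; a held maximum is flagged at its first sample:

    def boundary_candidates(times, vc):
        # A time point is a candidate if its VC value rises above the previous
        # sample and is not exceeded by the following one.
        return [t for i, t in enumerate(times)
                if 0 < i < len(vc) - 1 and vc[i - 1] < vc[i] >= vc[i + 1]]

    times = [5, 10, 15, 20, 25, 30]
    vc = [0, 1, 3, 3, 1, 0]
    print(boundary_candidates(times, vc))  # -> [15], the start of the plateau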

Jean-Claude Risset: Sud, Second Movement (1985)

Jean-Claude Risset’s 1985 work Sud is an interesting case study for the analysis of electronic music in that it seems to project a relatively clear surface-level extramusical narrative, while at the same time it has a deceptively complex underlying musical structure. Thus, it brings up the issue of what the analyst might do when the essence of a piece, what the listener might perceive that the piece is “about,” collides with the desire to talk about musical form, structure, and syntax. Giselle Ferreira offers a semiotic analysis of the work:

…the listener is not presented with an idyllic nature, beyond the reach of human intention, untouched. The contact between a natural element and imagined human artefacts suggests nature modified by human will and action. Indeed, Sud may be perceived as an encounter between human imagination and nature in two of its most powerful symbols: the sea, which is alluded to through the sounds of the waves, and the forest, which is alluded to through the sounds of its inhabitants…These symbols are brought together not according to an aesthetics of […], but through an exploration of the essence of the environmental sounds, which are either modified or recreated with different substances…In appealing to symbols so widespread in different cultures, Sud is pregnant with ontological meanings and symbolic connotations.151

151 Giselle Martins dos Santos Ferreira, “A Perceptual Approach to the Analysis of J.C. Risset’s Sud: Sound, Structure and Symbol,” Organised Sound 2, no. 2 (1997): 104.

It is true that Sud is inherently full of symbolic meaning, and I believe that just about any listener would hear this or could be easily directed toward it without much convincing. However, what does this extramusical interpretation say about the form or structure of the work? Ferreira uses this extramusical information to inform her analysis of the structure of the first movement, basing each section on the types of sounds present. While looking at the work in this manner may produce satisfying analytical results, we might wonder what would happen if the analysis were to be based in a Schaefferian reduced-listening approach rather than an extramusical approach. In other words, how would our understanding of the piece change if we decided to bracket out all extramusical information and focus only on the sounds present in the piece for their own sake?

Let us consider the results of a parametrical approach regarding the second movement of Sud. I have selected this movement not only because it is much less written about than the first and third movements, but also because I believe that it is more difficult than those movements to analyze, especially in terms of extramusical association. As Ferreira notes, “The second movement explores several elements introduced previously and incorporates new aspects inherent in a predominantly aural discourse…Unlike the introductory movement, environmental recordings are seldom perceived, and the whole movement is endowed with a distinctly abstract quality.”152 Thus, in the absence of any sort of extramusical program, we must rely on our own experiences and perceptions when listening to the music in order to discern possible formal segments and syntactical functions.

152 Ibid., 100.

After multiple listenings to the second movement of this work, I took note of a number of sonic parameters that seemed to guide my listening experience and direct me toward marked moments in the work. For the purposes of discussion, I will select four of these parameters for the analysis of the work. I will also select four parameters for each of the remaining works analyzed in this chapter, for the sake of comparing the composite graphs of various pieces to one another. There is theoretically no limit to how many parameters can be examined, and the analyst may choose more or fewer depending upon the work being considered. In the case of Sud, the most salient sonic parameters in terms of my listening experience were the amount of overall noise present, harmonic density (the number of sounds and sound objects layered on top of one another), the tessitura into which most of the salient sound objects fall, and the overall quality of brightness present throughout the work. (As I have previously stated, I will generally favor informal or colloquial terms over technical terms for the sake of convenience and accessibility.) First, let us consider each sonic parameter individually and how it might contribute to the formal segmentation of the piece. Remember that the values I am tracking are my own personal experiences. I do not propose that there is an objectively “correct” value at any given time for any particular sonic parameter; rather, these values represent my own subjective listening.

The first parameter that we will examine is the amount of noise perceived in the work. Figure 3.2 shows the parametric intensity graph as well as the 15-5 value-change graph for this parameter. Note that a higher intensity denotes more overall noise, while a lower intensity denotes less overall noise. As we can see, the parametric intensity graph suggests that there are three possible discernible segments present within the second movement of Sud. These segments span the time frames from approximately 0:00 – 1:30, 1:30 – 4:30, and 4:30 – 5:45. The 15-5 VC graph similarly suggests boundaries around 0:30, 1:30, and 4:30 due to the local maximum values at those points. In other words, these moments show much higher overall change in parametric intensity than the moments surrounding them. Before we continue, remember that these VC graphs reflect the aggregate change for an entire block of time. In other words, the VC value at 0:30 does not necessarily mean that this change takes place precisely at 0:30, but rather it means that the level of parametric intensity has changed by an absolute value of two sometime within the span of 0:15-0:30.

Figure 3.2: Risset: Sud – Parametric Intensity Graph and 15-5 VC Graph for Overall Noise

Regardless, the overall noise parameter also suggests three discrete sections because of the distinct trajectories that can be seen in the parametric intensity graph. We can clearly see that the second section contrasts the first and third in that it features a long, drawn-out descent through the parametric space. In contrast, the other two sections each feature a clear and relatively quick uptick in parametric intensity at the beginning.153 Of course, this segmentation is based only in this single parameter, and we will need to examine other parameters to see how they interact and what other segmentations might be supported through our experience in listening to the piece.

153 Remember that the actual direction of parametric change is arbitrary. One could easily reverse the poles so that high parametric intensity values represented pure sounds instead of noisy sounds as I have represented it. Ultimately, we would still see a contrast between the middle section and the surrounding sections, and the value-change graph would remain unaffected since it tracks absolute value.

Before moving on to other parameters, let us take a moment and examine the effects of different window sizes on the value-change graph. Figure 3.3 shows value-change graphs for the same parameter in Sud that we just examined, but with a variety of different window sizes. (Remember that the window size is the first number in the labeling system, representing the span over which the aggregate change in parametric intensity is measured.)

Figure 3.3: Risset: Sud – 10-5, 15-5, 20-5, and 30-5 VC Graphs for Overall Noise

The main effect of window size on the resulting graph is obvious: longer window sizes will naturally extend any local maxima because they are aggregating change over a longer time span.

The peak at 1:30 is a clear example of this; for each increase in window size, this maximum point is held for longer. (Similar phenomena can be seen at other local maxima in the graph.)

While this may seem benign, it results in two potentially unwanted situations. The first of these is that the analysis loses any sense of specificity. If we are searching for potential formal boundaries, the more precise the local maxima on the graph are, the more accurate we can be in suggesting segmentation points. For example, both the 10-5 and 15-5 lines in Figure 3.3 show a relatively tight maximum around 1:30, but the 20-5 and 30-5 extend this maximum to such an extent that it loses its usefulness. In other words, it is necessary to use a window size large enough that all perceivable change is actually captured, but small enough that some level of specificity still remains. The second (more egregious) issue is that “false” peaks can occur due to the increased aggregation time. The maximum just before 5:00 in the 30-5 graph is a good example of this; because change is being aggregated over 30-second intervals, both the sudden drop in intensity at 4:30 and the turnaround at 5:00 are captured at this point. Logically, the sudden change is more likely to be perceived as important, and the shorter window sizes reflect this. Thus, a window size that is too large can seem to “blow past” logical segmentation points and display peaks at inappropriate points. For this reason, I generally favor a shorter window size like 10-5 or 15-5. I will use the 15-5 value-change graph for the remainder of this project.
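The effect of window size can be reproduced with the vc_graph sketch given earlier in this chapter; run in the same session as that sketch, the following lines print its hypothetical breakpoint data at the four window sizes compared in Figure 3.3. Larger windows hold each local maximum across more samples, producing the extended peaks discussed above.

    # Assumes the vc_graph function and events list from the earlier sketch.
    for window in (10, 15, 20, 30):
        values = [v for _, v in vc_graph(events, window=window, step=5, duration=60)]
        print(f"{window}-5:", values)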

Next, consider the harmonic density parameter, shown in Figure 3.4. Again, the parametric intensity graph suggests a division into three distinct formal units with division points around the 1:00 mark and the 4:30 mark. Thus, while the harmonic density and overall noise parameters agree that there is a boundary around 4:30, there is some ambiguity about the formal boundary between the first and second sections (if only considering these two parameters). As with the previous parameter, the parametric intensity graph also suggests a division into three units based on three distinct value trajectories throughout the work. The first is a slight rise and fall from a value of one, the second is a more pronounced rise and fall from a value of three, and the final trajectory is a rise from one to three.

The 15-5 value-change graph again has a local maximum at 4:30 (agreeing with the overall noise parameter), but it has two local maxima close to each other, at 1:00 and 1:45, that do not coincide with the maximum at 1:30 from the previous parameter. With only these two parameters in mind, this might tell us that the section of the work from 1:00 – 1:45 is transitional, or at least that it is more volatile in terms of parametric intensity than other sections.

Further comparisons with other parameters as well as with composite graphs will help to resolve this issue.

Figure 3.4: Risset: Sud – Parametric Intensity Graph and 15-5 VC Graph for Harmonic Density

The graphs for the third parameter that was examined, tessitura, can be seen in Figure 3.5. When listening for the tessitura of the work, I have taken into account the salience and novelty of individual sound objects. Thus, while it is possible for both very high and very low sounds to be occurring simultaneously, I focus on those sounds that strike me as most important and that catch my attention when listening to the piece. For instance, if a piece has been occupying a very low range, but suddenly a series of mid- or high-range sounds is markedly introduced and directs my listening, I will note the parametric intensity as a higher value. Likewise, if the lower sounds seem to be more salient, I will lower the perceived parametric intensity.

The parametric intensity graph clearly articulates the process going on in this movement in regard to tessitura. Throughout the work, we can hear a series of descents from higher sounds to lower sounds, each time followed by a sudden increase in parametric intensity back to the initial intensity value of four. (I noted the last time this occurs, around 4:30, as a perceived intensity value of five.) However, these sections are not all equal in their deployment of this scheme. The initial registral descent at the beginning of the work is tiered; it holds specific intensity values and then changes them suddenly. In contrast, the rest of the gestures (especially the second and last) feature a salient transition between states of parametric intensity. I do not yet mean to suggest what this might mean for the structure; rather, I am simply stating that even though the overall motion is a registral descent, this descent is not always achieved in the same way. This may allow us to think of the tiered descents and the gradual descents as entirely different formal units.

Figure 3.5: Risset: Sud – Parametric Intensity Graph and 15-5 VC Graph for Tessitura

As far as how many distinct formal segments there are, this parameter is more ambiguous than those that have already been examined. I would argue there are at least three present, with the possibility of a fourth. The issue is how to treat the segment from around 3:30 – 4:30. On one hand, the graphs seem to imply that it might be its own formal unit; it has clear formal boundaries in terms of changes in parametric intensity on either side of it. The value-change graph shows that 3:30 is actually the moment of highest change regarding this parameter, further marking that moment for consciousness. However, when listening to this section, I do not get the sense that it is its own formal unit; rather, it seems to serve as a transition from the second descending gesture to the final descending gesture at 4:30. This reading, however, requires that the analyst consider other factors outside the realm of tessitura, so I will leave the discussion of this section until we begin examining the composite graph.

The final parameter that was identified, brightness, conforms much more strongly to the formal outlines supported by the overall noise and harmonic density parameters, as can be seen in Figure 3.6. As with the first two parameters, there seems to be a clearly articulated three-part structure that features a contrasting middle section. The middle section is not only much lower in parametric intensity, but it also generally features more elongated and less pronounced changes in intensity. Whereas in the initial section I perceived a quick change in intensity from five to four and back again over the span of roughly 15 seconds, in the second section a similar intensity displacement from two to three and back to two takes approximately 2.5 minutes.

The 15-5 VC graph is similarly clear. There is a maximum at 1:30 and another at 4:30, and together they delineate the work into three discrete segments in terms of its overall brightness. As we can see, this is generally in agreement with the formal segmentation of the first two parameters.

However, it also shares some similarities with the tessitura parameter, namely an increase in volatility from 3:30 to 4:30 prior to the initiation of the final section. Before making any statements or propositions about what the structure of the work might be or how it functions, we should first examine the composite graph that shows the interaction between all parameters that have been examined. While individual parameters can themselves project a certain type of structure, the primary concern of this methodology is to synthesize them in an attempt to model the listening experience and to determine how the structure of the work might be manifested through our own perceptions.

Figure 3.6: Risset: Sud – Parametric Intensity Graph and 15-5 VC Graph for Brightness

Figure 3.7 shows the composite value-change graph for all four parameters that I have examined throughout this discussion. By examining the composite graph, we can see how these parameters interact with one another to suggest formal segmentation points as well as which parameters are most involved in those points.

The composite 15-5 VC graph for these four parameters shows strong agreement that the moments surrounding 1:30 and 4:30 are the strongest candidates for formal boundaries. In addition, it also shows other local maxima across all four parameters that may suggest marked moments of transition or interest within a segment, such as the seconds following 1:00 or the span from 5:00-5:30. These are lower-level maxima and therefore are not considered primary candidates for segmentation points, either due to their relatively low value compared to the overall maxima (the maximum at 2:30, for example) or because of their close proximity to another, higher value (such as the maximum right after 1:00).
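This winnowing of lower-level maxima can be expressed as a rough heuristic. In the sketch below (my own gloss; the 0.5 ratio and 30-second spacing are hypothetical values, not thresholds prescribed by this methodology), a local maximum survives only if it is sufficiently high relative to the overall maximum and not in close proximity to a higher peak:

    def primary_candidates(times, vc, rel_threshold=0.5, min_gap=30):
        # Collect strict local maxima as (time, value) pairs.
        peaks = [(t, vc[i]) for i, t in enumerate(times)
                 if 0 < i < len(vc) - 1 and vc[i - 1] < vc[i] >= vc[i + 1]]
        if not peaks:
            return []
        top = max(v for _, v in peaks)
        # Discard peaks that are relatively low or shadowed by a nearby higher peak.
        return [t for t, v in peaks
                if v >= rel_threshold * top
                and not any(abs(t - u) < min_gap and w > v for u, w in peaks)]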

Figure 3.7: Risset: Sud – 15-5 VC Composite

Similarly, while a local maximum exists right before 3:30, we should be careful to note that this value is almost entirely composed of intensity change in the tessitura parameter, whereas the other maxima present throughout the graph draw on all four parameters. This is not to say that the change in tessitura around 3:30 should disqualify it from any sort of formal consideration, but simply that this is a fundamentally different type of composite change than the rest of the maximum values. While the composite VC graph may help us to determine how many distinct formal units we might hear, it tells us nothing about the function of those units, how they relate to each other, which ones might be similar or dissimilar to one another, etc. For that, I propose that the composite graph is most useful when combined with an examination and comparison of individual parametric intensity graphs, as we have already done.

In terms of formal design, my interpretation and experience of Sud’s second movement is a clear example of the ABA’ paradigm as discussed above. However, it is possible to augment this description with some additional modifiers. Ferreira offers two descriptions of the way formal units are articulated that may prove useful to us here: juxtaposition and transformation. She states, “When sonic structures are perceived in juxtaposition, perceptual focus is predominantly directed to their contrasting aspects…On the other hand, a transformation directs perceptual focus centrally towards similarities between sonic structures.”154 Given these basic (yet useful) descriptors, I would further call the formal design of this movement ABA’ with a juxtaposed B. First, let us examine the parametric support for this interpretation. Across each parameter, we can see varying degrees of similarity in the parametric trajectories of the beginning and ending sections. Some, such as tessitura and brightness, are extremely similar to one another. Other parameters, like harmonic density and overall noise, feature the same general gestural shape with some modifications. Overall noise, for example, has opening and closing sections that both feature an overall increase in parametric intensity, but the way that increase is achieved is slightly different. Similarly, an examination of the parametric intensity graphs shows that three out of the four parameters have an extremely clear B section that contrasts the surrounding A sections and features its own articulated parametric trajectory. Because the endpoints of B feature high degrees of composite parametric change across all parameters, it stands to reason that B is juxtaposed rather than transformative. A transformative B section (which could be transformative on only one side) would likely instead feature a much more gradual change in composite parametric intensity toward a local maximum. (Notably, depending on how many discrete time values are used, this type of transition may be difficult to detect on a composite VC graph and will need to be confirmed by comparing individual parametric intensity graphs.)

154 Ferreira, “A Perceptual Approach to the Analysis of J.C. Risset’s Sud,” 99.

Of course, the listening experience itself also could provide confirmation that this is indeed one way that the work might be perceived. I would never suggest that my methodology could stand in place of the listening experience; it is simply a model of how an individual’s listening experience unfolds in relation to a few very specific parameters. Thus, it may leave out certain indicators that are non-parametric (such as the use of certain sound objects) that may indicate formal units or function. For example, in the second movement of Sud, the A and A’ sections are not only marked by sudden changes in parametric intensity and similar parametric trajectories, but they are also indicated by the implementation of the same sound object at the very beginning of each. At both 0:00 and 4:30, we can hear variations on the same high, bright, metallic sound. Not only is this sound object present, but it is incredibly salient at both moments, cutting through the texture of the piece, and it is clearly the focal point for aural perception at those moments. Thus, while the individual parametric intensity graphs and the composite value-change graph all suggest formal segments that begin at 0:00 and 4:30, it is important to realize that the use of specific sound objects as well as the listening experience itself reinforces this view.

To this point, we have only proposed points for formal segmentation of the work as well as suggested a potential formal design based upon the experience of listening to certain sonic parameters and examining their interaction. However, we have not yet suggested ways in which these individual sections function at lower levels of musical structure. In other words, though we have identified where the A, B, and A’ sections are, we have not defined the ways in which those sections are coherent as formal units unto themselves. For example, we might ask, “What formal processes or structures underpin the B section?” or “What is it about the behavior of specific elements of the A’ section that differentiates it from the A section?” Questions like these will form the basis of the following chapter, wherein we will return to Sud and examine the behavior of individual sound objects within formal segments.

John Chowning: Stria (1977)

John Chowning’s 1977 stereo acousmatic work Stria presents a vastly different challenge to the analyst than Risset’s Sud. While Sud has a clear extramusical program, and the sounds used in the piece are generally easily relatable to their source-causes (especially in the first and third movements), the sound world of Stria is completely foreign. Other than a vague metallic quality to the sound objects utilized, there is little to guide the listener in terms of possible physical or gestural causation, and there does not seem to be any clear surface-level narrative at play. Instead, Stria features the same relative texture and similar sound objects throughout the course of the work. This may lead one to initially posit that the formal design of the work might simply be one-part, through-composed, or moment form. However, by examining the piece from a parametric perspective, we may be able to identify other possible perceived formal structures that underlie Stria.

As with the previous piece, four parameters were chosen for analysis that were deemed to be most salient in regard to markedness throughout the course of the work. These four parameters are brightness, dynamic level, harmonic density, and onset density, each of which will be discussed in turn. Figure 3.8 shows the parametric intensity graph as well as the 15-5 value-change graph for the brightness parameter.

The parametric intensity graph for this parameter shows a clear process throughout the piece, a general descent from higher parametric intensity values (representing brighter sounds) toward lower values in parametric intensity (an overall sound that is more muffled) as the piece reaches its conclusion. The moment at 3:25, however, is an extremely marked outlier within the confines of this specific parameter. If we disregard it for a moment and imagine that the parametric trajectory was left uninterrupted at an intensity value of four at 3:25, we might be able to infer a one-part form due to the seemingly continuous parametric trajectory over the course of the work. However, the drastic change in intensity at 3:25 demands that we consider the function of its salience. In this instance, I believe that it functions to segment the work (in regard to its overall brightness) into two distinct units.

Figure 3.8: Chowning: Stria – Parametric Intensity Graph and 15-5 VC Graph for Brightness

The first of these units features stable, high-value intensities from 0:00 to 3:20, whereas the second segment features a relatively quick decrease in parametric intensity starting around 3:30. Experientially, this marked moment at 3:25 serves to delineate these two segments and to signal to the listener that the arrival back at the opening intensity value after 3:00 is different from the music that is to come after it. In other words, this moment tells us that our single-trajectory inference is not correct; rather, there are two distinct formal units in regard to the brightness parameter. The 15-5 value-change graph suggests this as the most likely possibility as well, based on our conjecture that moments of maximum change in parametric intensity signal formal divides.

The second sonic parameter that will be considered in regard to Stria is the dynamic level parameter. While perceived variations in dynamic level are undoubtedly present in every piece of music ever composed, dynamics seem to have an especially salient function in this particular work. They do not seem to play a supporting or subsidiary role in tandem with other sonic parameters; rather, I perceive them as one of the driving forces behind the experience of the work’s form. Figure 3.9 shows the parametric intensity graph and the 15-5 VC graph for the dynamic level parameter. Like the brightness parameter, we can see elongated, clearly defined parametric trajectories present throughout the graph. These gestures all seem to be variations of one another. The first trajectory shows a long, steady climb from the lowest parametric intensity value up to the highest, followed by a sharp drop to complete the trajectory. The second has a much smaller increase in parametric intensity but again falls off by a value of three at the end of the trajectory. The final gesture lacks the initial increase in intensity, but again it drops in intensity to complete the work. Thus, we see three different versions of a generalized gesture that might be defined as an elongated ascent followed by a rapid descent.

Again, we also see a saliently marked moment right before 3:30 where the parametric intensity rapidly drops from five to two over the course of approximately 10 seconds to close out the first parametric trajectory. This rapid change in parametric intensity (as seen in the 15-5 VC graph) suggests this moment as a salient candidate for formal segmentation, along with a similar moment right after 4:30 which begins the final parametric gesture. The question of which of these divisions most closely aligns with our experience in listening to the piece can best be addressed by examining additional salient parameters and the ways in which this parameter interacts with them.

Figure 3.9: Chowning: Stria – Parametric Intensity Graph and 15-5 VC Graph for Dynamic Level

Figure 3.10 shows the parametric intensity graph and 15-5 VC graph for the harmonic density parameter. As we can see, this parameter is much more active throughout the course of the piece than the two previously-examined sonic parameters. Not only is it more active than the parameters of brightness and dynamic level, but the harmonic density parameter also displays a striking amount of gestural unity. We can easily see four distinct repetitions of the same intensity trajectory, which features an initial upward leap in parametric intensity followed by a moment of stability and then a gradual increase in intensity toward the end of the gesture. Furthermore, the final two iterations of this gesture, which begin around 3:15 and 4:30, roughly line up with the final two instances of the gesture present in the dynamic level parameter, suggesting that they might work in tandem. We may also be able to group these gestures into two larger units.

Figure 3.10: Chowning: Stria – Parametric Intensity Graph and 15-5 VC Graph for Harmonic Density

The first two gestures each feature a large leap in parametric intensity and cover the full perceptual breadth of the parameter. In contrast, the last two gestures each have an initial intensity change of only one level, and they both occupy the same intensity space between two and four. Thus, we might group the first two trajectories together as one conceptual unit while we group the latter two as another. In this way, one might argue that the formal design based upon this parameter is really a large two-part design, with each of these parts capable of being broken down further into constituent segments.

The 15-5 value-change graph similarly suggests a division into four discrete segments via the local maxima near the beginning, 1:30, 3:20, and 4:40. Since the first two gestures are each initiated with a large sudden change in parametric intensity, the striking double maximum between 1:00 and 1:30 shows both the boundary between the first two iterations of the gesture as well as the large leap that begins the second gesture. Also, the 15-5 VC graph shows the lower overall change in parametric intensity present in the second large section; whereas the first two gestures project a change in parametric intensity of two or greater on four separate occasions, the final two iterations of the gesture produce a change of this magnitude only once (the boundary separating them).

Onset density, the rate at which new sounds are articulated, was the final parameter chosen for this analysis. Like some of the other parameters that have been discussed, it projects a clear division into at least two formal units and possibly more at lower levels of structure. Again, we can also see salient parametric gestures that are repeated and varied throughout the course of the first half of the work. The parametric intensity graph and 15-5 value-change graph for this parameter can be seen in Figure 3.11. The parametric intensity graph shows a series of relatively rapid changes in perceived parametric intensity, generally from sparser to denser textures (except for the third gesture beginning around 1:30). Furthermore, we can also observe an overall increase in parametric intensity over the course of these four initial gestures, mirroring the shape of the gesture over a longer span of time in a sort of recursive structure. What is perhaps most striking about the parametric intensity graph for onset density is that we can see a clear point at 3:30 where the volatility of the first part of the piece gives way to complete stability, suggesting a very clear division into two large formal sections in terms of this parameter.

Figure 3.11: Chowning: Stria – Parametric Intensity Graph and 15-5 VC Graph for Onset Density

The 15-5 VC graph also shows this sudden change in volatility around the 3:30 mark, as we can see a completely flat line at a value of zero beginning shortly after this moment. The four parametric gestures that initiate the piece are also clearly shown in this graph; once again, the double maxima serve to represent both the rapid transition in parametric intensity that ends the first two gestures as well as the sudden leap that separates the end of one gesture from the beginning of another. Furthermore, it is worth noting that the moment featuring the highest change in parametric intensity, the initiation of the third gesture at 1:30, is probably not the beginning of a large formal section. Certainly, this moment is strongly marked for consciousness in that it begins a reversal of the parametric trajectory of the first two gestures, but both my experience in listening to the piece and an analysis of the graphs suggest that this moment is not the beginning of a new section. Rather, this maximum highlights an important transformation of the parametric gesture used to generate the first large section of the work. This demonstrates the importance of considering each graph both in comparison to other graphs of the same parameter and within the context of the experience of the work itself; the 15-5 VC graph alone could easily lead the analyst to posit that this parameter projects three large formal segments over the course of the work. However, the stark change in volatility at 3:30 combined with the seeming gestural similarity of the section leading up to it suggests that 0:00-3:30 may indeed be only one large section that features salient subsections.

The 15-5 VC composite for Stria, shown in Figure 3.12, not only helps to clarify some of the formal ambiguities brought about by analyses of individual parameters, but it also clearly articulates some of the driving forces that form the foundation of these formal units. First, let us consider the segmentation as well as the overall formal design or designs that this composite may support. Perhaps the most obvious and convincing segmentation of this work is into two large sections, one from 0:00-3:30 and another from around 3:30 until the end of the piece. This is clearly supported by the maximum value at 3:30, which is composed of a strong change in parametric intensity across all parameters. The smaller local maxima at 1:30 and 4:30 again suggest lower-level segmentation into discrete units or gestures within the larger two-part structure. The composite graph reveals that these two parametric gestures are mostly driven by onset density/harmonic density and dynamic/harmonic density, respectively. Furthermore, we can clearly see the role that onset density plays in articulating the first segment of this work; it is responsible for most of the perceived change in composite parametric intensity up until the formal divide at 3:30, but it is entirely absent from the graph past this point (as we would expect based upon our analysis of the individual parameter). In the next chapter, we will examine the lower-level structures based on notions of onset density and harmonic density that are behind the syntactical and functional processes driving the first half of the work.

Figure 3.12: Chowning: Stria – 15-5 VC Composite

In terms of overall formal design, the two-part structure AB seems to be the most fitting. We do see similar gestures present in the dynamic and harmonic density parameters as well as similar intensities of local maxima in the 15-5 VC graph in both segments, perhaps suggesting an AA’ design. However, the second segment also features a reversal in parametric trajectory in the remaining two parameters, most notably in the onset density parameter, where the parametric gesture is inverted. Considering that onset density plays such an important role in the musical syntax of this piece, we might consider it to be more important than the other parameters, and thus we should weigh it more heavily in the determination of similarity or contrast between formal units. In other words, if onset density is as salient as it seems to be in the listening experience, it might be more responsible for driving the perception of formal design than other parameters.

Finally, one might argue for an AB design over an AA’ design simply because, although both segments feature nearly the same local maximum value, the amount of overall change in the second section is much lower than in the first section. Furthermore, even when there is change in the second section, it usually features only one or two parameters, while in the first section three or even all four parameters regularly take part in lower-level change. The relative stability of the second section compared to the volatility of the first section thus further supports an analysis of a contrasting two-part formal design, AB.

György Ligeti: Artikulation (1958)

Although Artikulation is one of only two known works of electronic music composed by György Ligeti (the other being Glissandi in 1957), it stands as a prime example of the musical style and aesthetic of the elektronische Musik school in Cologne. Because of this, the work is widely utilized in electronic music classrooms and textbooks, meaning that students routinely examine this piece. Thus, although the work does not present the same aesthetic challenges that Stria did, an altogether different issue arises. Because we have both Ligeti’s own notes about how the piece was composed and a widely-disseminated graphic score made by Rainer Wehinger, one often gets the sense that this work has largely been “figured out.” In other words, if we know the composer’s process, and we have an agreed-upon transcription, we essentially understand the structure of the piece. However, if we recall that form might be thought of as existing or being manifested at the intersection of subjective experience and objective analysis, this is not necessarily true. Ligeti’s notes do not describe or replicate the act of listening to the work; the technical descriptions of the processes employed do not create a one-to-one analogue with the experience of hearing those processes. Likewise, Wehinger’s graphic score is not in itself an analysis, but rather a highly subjective transcription. His graphics have proven useful to many people in providing a sort of road map for the work, but they say nothing about the structure or function of formal units or where those units might be found. By examining the work through a parametric approach, however, we are able to answer many of these questions, and we will attempt to understand not what the formal design and structure of the work is in the objective sense, but rather what we perceive it to be through the act of listening to the music.

As before, I have selected four salient sonic parameters that guide my listening and are responsible for the markedness experienced at various points throughout Artikulation. The parameters that I will examine here are onset density, stereo field utilization,155 glitchiness, and perceived aural distance. I have chosen these parameters not only because of their aural salience when listening to the work, but also because they differ from a number of the parameters examined in previous works. Many of the sonic parameters that were discussed in reference to Sud and Stria might also be included here, depending on the experiences of the listener. We will again examine each parameter individually, followed by a discussion of the interaction between them.

Let us begin our analysis of Artikulation with an examination of the onset density parameter, which is one of the more saliently active parameters throughout the roughly four-minute duration of the work. The parametric intensity graph and 15-5 value-change graph can be seen in Figure 3.13. From the parametric intensity graph, we can see a clear initiating gesture that oscillates around the intensity value of three (almost like a “parametric double neighbor” figure) followed by an increase in parametric intensity toward a brief moment of stability at the highest intensity value of five. A similar gesture follows immediately after (around 2:10), but although it features the same increase in parametric intensity, it is missing the initial oscillation.

155 It should be noted that Artikulation was originally conceived of and composed as a quadraphonic piece, meaning that the listener would hear sound from two front and two rear speakers. However, recordings of the work are generally mixed down into stereo for the sake of convenience. Since most listeners will experience this version of the work, this is the version I have chosen to analyze. Listening to a realization of the work in four channels might actually result in a change in perception of the formal structure of the work, since both stereophonic space and quadraphonic space might be considered salient parameters exclusive to their respective versions of the piece.

Following this second gesture is a sudden drop in intensity to a very stable, low level that remains unchanged for the remainder of the piece. Based on this parametric intensity graph alone, we may be left to infer either two or three large formal units.

Figure 3.13: Ligeti: Artikulation – Parametric Intensity Graph and 15-5 VC Graph for Onset Density

Based on our criteria for formal boundaries, the moment just after 2:00 is clearly a candidate for segmentation. However, we might wonder whether or not 3:30 is an equally appealing candidate for segmentation; one might argue that the descent to the lowest level of parametric intensity is simply a continuation of the slight downward trajectory that ends the second gesture just before 3:30, and thus that the final 30 seconds of the work should be included as part of the second formal segment. Furthermore, a total change in parametric intensity of two does not inherently demand a formal boundary, but rather it must be considered in context. The 15-5 VC graph suggests that this last 30 seconds might be segmented as its own unit for two reasons. The first reason is that there is, indeed, a salient local maximum of parametric intensity change at this moment in relation to the surrounding music. Even if it is not an enormous change, I would argue that any salient local maximum at least warrants consideration as a formal boundary. Second, recall that one of our primary criteria for cohesion as a formal unit is parametric stability; compared to the rest of the work, the final 30 seconds is remarkably stable in terms of this specific parameter.

This, combined with the local maximum that precedes the section, suggests that it might be its own formal segment. Comparison of this section across all salient sonic parameters may help to reveal additional support for this idea.

Figure 3.14 shows the graphs for what I am calling the stereo field utilization parameter. In these graphs, a high parametric intensity indicates that more of the available stereophonic field is being utilized, and a low parametric intensity signifies less use of the stereo space. It is possible that the perception of parametric intensity within this parameter could be affected by the type of system used to listen to the piece. A high-quality set of studio monitors that are meticulously placed in the listening environment would provide a far superior listening field to that of a lower-quality set of speakers, for instance.156 Thus, it is conceivable that the experience of the form of the work may be different if the playback system is altered in some way.
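Since this parameter, too, is assessed by ear, one coarse measurable analogue is worth noting: the ratio of side-channel to mid-channel energy, which is zero for a mono signal and grows as the stereo image widens. The sketch below assumes the soundfile library and a hypothetical stereo file; it suggests a point of comparison, not the procedure used in this analysis.

```python
# A rough proxy for stereo field utilization: side/mid energy ratio in
# 5-second windows ("artikulation.wav" is a hypothetical stereo file).
import numpy as np
import soundfile as sf

audio, sr = sf.read("artikulation.wav")   # expected shape: (frames, 2)
left, right = audio[:, 0], audio[:, 1]
mid, side = (left + right) / 2.0, (left - right) / 2.0

win = 5 * sr
for start in range(0, len(mid) - win, win):
    m = np.sum(mid[start:start + win] ** 2) + 1e-12  # guard against silence
    s = np.sum(side[start:start + win] ** 2)
    # 0.0 for a mono mix; larger values as more of the stereo field is used.
    print(f"{start / sr:6.1f}s  side/mid energy ratio {s / m:.3f}")
```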

We can make two key observations about the parametric intensity graph. The first is that there does not seem to be any coordinating or generating parametric trajectory that governs the behavior of this parameter. Though we can see clear processes, such as the transition from intensity one to four between 1:00 and 2:00, there are very few recurring gestures. Perhaps more salient is the general stability of this parameter; whereas the onset density parameter was in an almost constant state of flux, the stereo field parameter is far more stable.

156 This analysis was done twice, once on a set of studio monitors and once on a high-quality set of headphones. After each analysis, I arrived at the same parametric intensity graph.

The result of this stability is twofold. First, it means that transitions are much more strongly marked for consciousness than they might be in other parameters. If the normative way to change parametric intensities seems to be by juxtaposition rather than transformation, then any transformation that actually takes place might be considered important. Second, because stability is the usual behavior for this parameter, even small, sudden changes in parametric intensity are marked for consciousness.

Figure 3.14: Ligeti: Artikulation – Parametric Intensity Graph and 15-5 VC Graph for Stereo Field Utilization

The 15-5 value-change graph shows this phenomenon quite well; in the other graphs that we have examined, value changes of one or less have generally been overlooked because they are either not a local maximum or else in too close proximity to another, more salient local maximum. In this case, however, even small changes in parametric intensity stick out due to the relative stability of the surrounding music. Consider the 15-5 VC graph at 0:30 and 1:00; these points are salient both perceptually and visually because they present a notable contrast within their given contexts. Due to the relative stability of the parameter, it is not surprising to see a VC graph with many local maxima. However, we should be careful not to assume that each of these will produce a segmentation point. Certainly, many of these points may prove good candidates, like the maximum at 3:30 and the other three maxima of two at 1:40, 2:15, and 3:00, but it is important to compare each of these with other parameters to determine whether or not these points support other parameters in articulating the formal design of the work. While it is possible that these six local maxima could lead to six points of segmentation and create seven discrete segments, this seems like too many perceptual units to form within a work that lasts just under four minutes.

The third parameter that we will examine is glitchiness, the extent to which the sound quality of the piece as a whole, as well as that of individual sound objects, exhibits aural characteristics of low audio fidelity (“glitchy”). This is a prime example of a parameter that could certainly be described in a much more technical fashion, but appealing to an intrinsically-understood description rather than an overly technical one results in a much shallower learning curve for analysts. Furthermore, it closes the discursive gap between experts and non-experts in the field of electronic music. The parametric intensity graph and the 15-5 VC graph can be seen in Figure 3.15.

We can see two similar upward parametric trajectories that increase to a parametric intensity of four, as well as a clear boundary that signals the end of the first gesture and the beginning of the second just after 2:00. Again, a question arises about what to do with the last 30 seconds of the work. In terms of the glitchiness parameter, it could easily be interpreted either as its own section or as a continuation of the second gesture. Regarding the former, since the gesture was initially presented as an increase from intensity one to four, one might argue that the decrease in parametric intensity at 3:30 signals the end of the second gesture and that something new is beginning. On the other hand, we might also argue that since one salient feature of the end of the first gesture is an extended period of stability, the section from 3:30 to the end of the work really serves to project the stability that closes the second gesture.

Figure 3.15: Ligeti: Artikulation – Parametric Intensity Graph and 15-5 VC Graph for Glitchiness

The VC graph also clearly shows the dividing point between the two gestures just after 2:00, as well as the salient characteristics of each gesture: an initial section of increasing volatility followed by a segment of parametric stability. Regarding the final 30 seconds, the graph seems to suggest visually that this section might be segregated from the end of the second gesture and function as its own independent unit. However, we can also see that this local maximum is more extended than the maxima of other parameters in this piece (as well as those of other pieces that we have examined). This is due to the fact that there is a relatively quick and localized increase and decrease in parametric intensity of one, meaning that the aggregate change over the course of this span remains more stable. Thus, looking only at this parameter, we are unable to determine exactly how this segment functions. By examining it within the context of the composite graph and the other parameters, we can determine whether or not this moment might serve to support the perception of a formal structure that is present in the overall work.

The final parameter that we will examine in Artikulation is perceived aural distance. A number of factors can influence how we perceive the location of a sound in recorded playback, including not only the positioning of the sound within the stereo space but also the amplitude of discrete sound objects, their harmonic spectra, and the specifications of the playback system itself. As with some other parameters, the experience of this specific parameter may be altered depending upon the technical details of the system being used to realize the work.

Figure 3.16 shows the parametric intensity graph and 15-5 VC graph for the perceived aural distance parameter. Note that the parametric intensity graph shows a clearly-defined process occurring throughout the work: a gradual decrease in intensity from five to one, representing the perception that the sound objects utilized are slowly getting closer as the piece continues. We can also clearly see that within this larger overall process are individual gestures that seem to recursively mirror the larger process taking place. From the beginning of the piece until 2:10, we see one complete gesture featuring gradual declines in parametric intensity, with an overall change in intensity value of four throughout the course of the gesture. Similarly, the following gesture starting at 2:10 also gradually decreases in parametric intensity, but as a tiered descent instead of the gradual descent presented in the first trajectory. Furthermore, we again have to decide what the function of the last 30 seconds of the piece might be in regard to the second gesture. Like the previous parameter, this final segment may function as the stable arrival point of the parametric trajectory initiated at 2:10, aligning with the overall trajectory of the first gesture. On the other hand, the sudden leap in intensity of two at 3:20 is saliently different from the gradual (although still quick) transition from intensity four to two beginning around 1:30. Thus, as before, either option might be argued and defended based on this parameter.

Figure 3.16: Ligeti: Artikulation – Parametric Intensity Graph and 15-5 VC Graph for Perceived Aural Distance

The VC graph helps to visualize the differences between the first and the second gesture. Note the long, sustained change value of 0.5 for the entirety of the first minute of the work; even though there is change occurring, it is steady, stable change. In contrast, the beginning of the second gesture around 2:10 shows much more volatility. There are far more local maxima followed by abrupt halts in overall change. The VC graph may also suggest that the final segment beginning just before 3:30 is, in fact, its own unit, due to the pronounced maximum present at this point. While every candidate point for segmentation should be examined within its musical context, it is notable that the change in intensity present over this span is matched by only two other points in the piece, one of which is undoubtedly a strong candidate for a formal boundary (2:10). As always, further context will help to clarify the syntactical and functional roles of any possible formal unit.

Finally, let us consider the 15-5 VC composite graph, shown in Figure 3.17. The composite presents a very clear candidate point for segmentation around the span represented by 2:15. This point is salient not only in that it represents a large change in parametric intensity across all four parameters, but also in that it falls roughly at the dividing boundary between parametric trajectories in each of the parameters examined. Thus, it is a strong candidate both because it fulfills the expectation that formal boundaries feature a high degree of parametric change and because it aligns nicely with our analyses of individual parameters. If we believe that this point is indeed a formal boundary, we are left with a work comprised of at least two large sections. However, we must again examine the final span of the work just following 3:30 to determine whether or not this span may itself be a formal unit.

Figure 3.17: Ligeti: Artikulation – 15-5 VC Composite

Based on our earlier criteria regarding formal cohesion and segregation within the parametric system, the final segment from just after 3:30 until the end of the piece is most likely its own section. First, it is preceded by a local maximum value which is second only to the main segmentation point discussed earlier. This meets the criterion that formal boundaries are best defined by large changes in parametric intensity across multiple parameters. Second, this segment is clearly the most stable portion of the work, featuring almost no change in parametric intensity throughout this span. This fulfills the main criterion for cohesion, that perceptual units will be formed around more stable segments of a work. Thus, this final span of the piece strongly adheres to the criteria for both cohesion and segregation, suggesting that it functions as its own formal unit.
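For readers who want to see the aggregation step itself, the sketch below shows one plausible way a composite VC curve might be formed: by averaging the per-parameter value-change curves window by window. The aggregation used for the composite graphs is not restated in this section, so the mean here is an assumption; a sum would change the scale but not the locations of the local maxima.

```python
# A minimal sketch of one possible composite VC computation: the mean of
# the per-parameter VC curves, window by window. The toy curves stand in
# for the four Artikulation parameters and are illustrative only.
import numpy as np

def composite_vc(vc_curves):
    """vc_curves: dict of parameter name -> VC values on a shared grid."""
    return np.vstack(list(vc_curves.values())).mean(axis=0)

curves = {
    "onset density":      np.array([1.0, 2.0, 1.0, 4.0, 1.0, 0.0, 2.0, 0.5]),
    "stereo field":       np.array([2.0, 0.0, 1.0, 2.0, 0.0, 0.0, 2.0, 0.0]),
    "glitchiness":        np.array([1.0, 1.0, 0.0, 3.0, 1.0, 0.0, 1.0, 0.5]),
    "perceived distance": np.array([0.5, 0.5, 1.0, 3.0, 2.0, 0.0, 2.0, 0.0]),
}
# Prints one dominant peak (fourth window), loosely echoing Figure 3.17.
print(composite_vc(curves))
```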

The last issue we must address is how to understand the function of each of these segments, both in terms of each other and within the context of the work. Before we address the first two segments, I would argue that the final segment just discussed functions as a sort of “coda” to the work. While there is undoubtedly some terminological baggage that accompanies the use of this term, I believe it accurately describes the general function of the final segment. The last 30 seconds seem to be outside the rhetorical space of the structure of the work, wrapping things up and reiterating the sound objects used throughout the course of the piece. Also, the relatively short duration of this final section compared to the first two sections further suggests that it might not receive the same formal status.

The two main sections of the work are strikingly similar to one another across all parameters. Within each parametric intensity graph, we can see at least a similar overall trajectory being initiated at the beginning of the work and around the 2:10 mark, and in multiple cases we can see nearly identical parametric gestures occurring (such as in the glitchiness parameter). In other parameters, we see a clear reiteration or variation on the first gesture, but based upon the parameters that have been examined, there is hardly enough dissimilarity to suggest that the two sections contrast one another. Also, we can hear the reemergence of the initial sound-object field at the beginning of the second gesture, which further supports an argument that these sections are similar to one another. (This is the same sort of process that we observed in Risset’s Sud, except that in this case it is an entire field of sound objects being recalled as opposed to one single salient sound object.) Therefore, based upon the parameters that were examined, the formal design of the work might be expressed as AA’+coda.

Before moving on, I should note again that I have intentionally avoided invoking any exploration of Ligeti’s compositional process or use of serial techniques regarding this piece. It is tempting to use documents describing that process, along with other extramusical aids, to form the basis of an analysis, but this is in direct conflict with the notion that form is something that is fundamentally experienced rather than analyzed. By focusing on the phenomenological experience of listening to Artikulation rather than on the compositional process used to create it or on the transcriptive processes that have been used to represent it, we have arrived at an analysis that more effectively demonstrates the interaction between perception and form. Before continuing on to engage with small-scale formal units and sound objects, let us consider one final piece.

Elainie Lillios: Threads (1998)

The final acousmatic stereo work that we will examine, Threads by Elainie Lillios, is probably the most challenging piece in terms of formal design presented thus far. It is much more active both parametrically and on the musical surface, as can easily be heard when listening to the piece and observed when examining the graphs that will be presented. Furthermore, while the first three analyses in this chapter might fit relatively nicely into different formal archetypes, we will see that Threads does not conform so easily. We will observe clear processes emerging within each sonic parameter as well as between multiple sonic parameters, but the determination as to how we might experience the formal design of this work will require additional connections with the listening experience. Our analysis will be based on the perception and tracking of four distinct sonic parameters: tessitura, harmonic density, pitched vs. unpitched sound objects, and dynamic level. As before, we will analyze each parameter individually, followed by an examination of the larger context of the work and the interaction between salient parameters.

The parametric intensity graph and 15-5 value-change graph for the tessitura parameter can be found in Figure 3.18. In terms of salience, this parameter features more volatility in parametric intensity within the first 40 seconds of the work than any we have seen thus far. As we can see, this first span of the work is itself comprised of three clear transitions in parametric intensity, resulting in an aurally distinct opening gesture and series of sub-gestures. Even without the aid of a parametric intensity graph, the listener can hardly help but hear this section as a single cohesive unit simply because of the volatility present throughout, especially considering the drastic change to a much more stable segment around 0:50. It is an interesting case in which the perceptual cohesion criterion of this formal unit might be met through a sort of “stability of volatility.” In other words, we might group 0:00–0:50 together as a unit precisely because it is unified through its volatility. Therefore, the formal boundary candidate that we can clearly see around 0:50 in the parametric intensity graph is in this case not defined by high parametric change (the normal criterion for formal segregation) but rather by two contrasting stabilities on either side. We will examine this section in more depth in Chapter 4.

The main body of the work can be fairly characterized as an extended descent through the sounding breadth of the piece, from the highest parametric intensity value to the lowest. Except for a brief increase in parametric intensity around 2:45, the trajectory is entirely downward for most of the work. In this sense, the process being employed within this parameter is clear, but this process also strongly contrasts with the opening. The first segment of the work, as we have already noted, is characterized by rapidly transitioning subsections which cohere into a highly volatile formal unit. The second large unit, however, is characterized by long, stable, gradual change. The work then closes with a segment that reiterates this descent (tiered this time instead of transitory) through the use of three distinct sound objects.

Figure 3.18: Lillios: Threads – Parametric Intensity Graph and 15-5 VC Graph for Tessitura

The VC graph clearly shows at least three segments: short beginning and ending segments, with an extended segment making up the main body of the work. We might also divide the main section into subsections, especially considering the local maximum just before 3:00, but we should first consider other parameters. This specific parameter might best be characterized as a one-part form with an introduction and a closing section. While this parameter clearly presents three segments, a simple ABA’, ABC, etc. design might imply that each section carries the same amount of formal weight, which does not seem to be the case here. While the first and last sections undoubtedly present the most volatility and parametric change, proportionally they are dwarfed by the large second unit that spans roughly 0:50–4:25. This is not to say that these sections are unimportant, as they seem to play an important introductory and closing function within the musical system of this parameter, but simply that it may be somewhat disingenuous to imply that they contribute to the formal experience of the parameter in the same way that the large main section does.

The harmonic density graphs, shown in Figure 3.19, again support the notion that three unique segments can be identified throughout the course of the work. The parametric intensity graph shows two clear parametric trajectories that are initiated at the beginning of the piece and around 1:40. Each of these gestures features an initial rise in intensity, which is then followed by a descent, finally culminating in a dramatic increase in intensity to a brief moment of stability at the close of the gesture. While we can certainly observe some differences between these two gestural trajectories (the descending segment of the first is much more pronounced and covers a wider intensity gap, for instance), it is nonetheless apparent that these gestures are in some way transformations of each other. Once again, we can also see a small yet clearly defined segment at the end of the work, from 4:20–5:00.

The 15-5 VC graph reveals a key difference between the first two segments. While they undoubtedly share a similar shape in terms of overall trajectory, we can see that they strongly contrast each other in regard to volatility. Whereas the first segment shows multiple local maxima and minima, the second segment is almost entirely flat, showing a very stable amount of change at the rate of approximately 0.5 per 15-second segment. This may suggest that, although these two segments appear to be similar when looking at the parametric intensity graph, they may actually contrast each other in terms of their function within the work and the way in which we perceive these segments.

Figure 3.19: Lillios: Threads – Parametric Intensity Graph and 15-5 VC Graph for Harmonic Density

Thus, the issue arises of whether we might consider the design of this parameter to be AA’ or AB (plus a short closing segment at the end). While the answer to this question does not necessarily determine what the perception of the overall work will be (it may be neither of these), it is worth considering in that our understanding of individual parameters necessarily informs our understanding of the whole. In this instance, I would argue that the two sections contrast each other enough to be considered A and B. Though the trajectory shapes are similar, the extended beginning and ending gestural segments of the second gesture (as well as its greater overall length), combined with the difference in stability shown in the VC graph, provide my ear with sufficient contrast to argue that I understand this specific parameter as projecting an AB+closing design.

The third parameter that we will examine is the usage of pitched vs. unpitched sound, the graphs of which can be found in Figure 3.20. In terms of segmentation, it is quite clear that these graphs project three distinct gestural trajectories, with segmentation points around 1:40 and 2:50. Notably, the first and third segments seem to function like mirror inversions of one another, the first projecting an up-and-down type trajectory and the third doing the opposite. Interestingly, however, these two segments mirror each other not only in terms of their parametric intensity trajectories, but also in terms of their rate of change as shown in the 15-5 VC graph. If we look at the graph for just the first segment (0:00–1:40), we see two slightly elongated local maxima at values of 0.5 and one. Notice that the third segment (beginning around 2:50) nearly mirrors this process in reverse, first settling at a stable local maximum of one, followed by another elongated maximum of 0.5. Thus, not only are these sections related to one another through the deployment of their specific parametric trajectories (displayed in the parametric intensity graph), but they also have nearly perfectly mirrored values on the VC graph. This might suggest a striking relationship between the first and the third segments of the work, as well as something about the function of the middle segment.

The middle segment of this work is very clearly presented not only because of the salient jumps in parametric intensity that surround it (the segregation criterion), but also because it is completely stable in terms of its parametric intensity, contrasting with the gradual transitions between parametric intensity values that characterize the outer segments. The three segments together almost form a sort of parametric palindrome, especially considering the 15-5 VC graph.

Ultimately, because of the evident relationship between the first and third sections as well as the contrasting and stable nature of the middle section, I feel quite comfortable in stating that this parameter presents an ABA’ formal design. Notably, none of the parameters that we have examined so far agree on a formal design, nor do they even agree on how many major segments there are or where the formal boundaries between them might lie. We will return to this issue when we examine the composite graph.

Figure 3.20: Lillios: Threads – Parametric Intensity Graph and 15-5 VC Graph for Pitched vs. Unpitched Sounds

The final parameter we will examine is the overall dynamic level of the work. Figure 3.21 shows the parametric intensity graph and 15-5 VC graph for this parameter. Unlike the previous parameter, whatever segments may exist within the scope of this parameter are not so clearly defined, except for the one large change in parametric intensity around 1:40. This moment is a clear candidate for a formal boundary, but the rest of the parametric intensity graph requires further investigation to determine whether or not it can be divided into more units. On one hand, we do see a small leap in parametric intensity at 3:00, producing a local maximum on the VC graph and suggesting that this may be a candidate for a formal boundary. Furthermore, the segment immediately following 3:00 is an extended period of gradual, stable change in parametric intensity, which contrasts with the relatively short and volatile periods of change and stability that preceded it. On the other hand, we should be careful not to assume that any leap in parametric intensity is a formal boundary; we have seen many instances where leaps between parametric intensity values are actually a salient feature of the formal unit itself, and these leaps sometimes help hold a unit together perceptually by grouping like-with-like (the cohesion criterion). In this instance, I believe there is justification for segmenting the work into three units (with boundaries at 1:40 and 3:00) specifically because the small maximum at 3:00 on the VC graph is still an outlier compared to the relatively stable values around it.

Figure 3.21: Lillios: Threads – Parametric Intensity Graph and 15-5 VC Graph for Dynamic Level

As far as how we might label these three segments, there do not seem to be any unifying characteristics among them. Certainly, we can observe some similar gestural shapes; each of them presents some form of elongated increase in parametric intensity, for instance. However, each gesture truly does have its own defining characteristics. The first gesture is the only one that includes a non-transitory change in parametric intensity; the second is comprised of sub-segments that are much shorter and result in a greater overall volatility; the final segment is by far the most stable and features elongated, steady change as well as parametric stability. Thus, because there are no observable parametric processes or relationships at play, I would argue that this parameter represents perhaps a moment form or a through-composed design. Yet again, we see that this parameter does not agree with any of the others in terms of possible formal design, nor does it align with the number and locations of formal units as projected by other parameters.

The 15-5 VC composite graph, shown in Figure 3.22, provides an alternate view of the work and some possible solutions to the formal “disagreements” present among individual parameters. The first thing that is evident from the graph is that, although the individual parameters do not necessarily concur with one another about where formal segmentation points are, when considered as a whole, they combine to form moments of striking change in parametric intensity which result in salient candidates for formal boundaries based on the segmentation criteria. The points suggested by the composite graph are numerous, including 0:30, 1:40, 2:50, and 4:30, which would result in five distinct sections within a five-minute piece. While this may seem like a lot of segments in a short period, the argument is strengthened when we observe that each of these formal segmentation points is supported by at least three out of the four sonic parameters that we examined. While some of these points are clearly driven primarily by one specific parameter, such as the 4:30 segmentation point, all parameters nonetheless work together to manifest this structure.
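The notion of a boundary being “supported” by a parameter can likewise be made explicit. In the sketch below, a candidate boundary counts a parameter as supporting it when that parameter has a VC local maximum within a small tolerance of the candidate time; the maximum times listed are hypothetical stand-ins for the four Threads parameters, chosen only to illustrate the counting logic.

```python
# A minimal sketch of counting parameter "support" for candidate boundaries.
# All times below are hypothetical, illustrative values in seconds.
def support_count(candidate, maxima_by_parameter, tol=10.0):
    """Number of parameters with a VC local maximum within tol seconds."""
    return sum(
        any(abs(candidate - m) <= tol for m in maxima)
        for maxima in maxima_by_parameter.values()
    )

maxima = {
    "tessitura":         [45, 95, 270],
    "harmonic density":  [30, 100, 170, 265],
    "pitched/unpitched": [35, 100, 170, 270],
    "dynamic level":     [30, 100, 180, 275],
}
for candidate in [30, 100, 170, 270]:   # 0:30, 1:40, 2:50, 4:30
    print(f"{candidate:3d}s supported by {support_count(candidate, maxima)} of 4")
```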

Not only does the composite graph show that these points strongly adhere to the segmentation criterion of large overall parametric change, but the spans in between these segments also generally feature a high level of parametric stability, the main part of the cohesion criterion. Thus, it is entirely possible and probable that one might perceive five distinct formal units within this work.

Figure 3.22: Lillios: Threads – 15-5 VC Composite

Both the composite graph and my hearing of the work suggest that the ordering of formal segments is arbitrary and not syntactical. That is not to say that we cannot aurally or visually observe connections between sections, but simply that at no time do the graphs or the experience of the work suggest that one section causes another. We can observe lower-level cause and effect, as in the sub-gestures of the first 30 seconds of the work, but outside of the boundaries of each moment (in the formal sense), there does not seem to be any causal relationship. Second, each section functions as its own unit that is capable of standing on its own; no section relies on others to make sense or to seem “complete.” Finally, the lengths of the sections are more or less proportional to one another. In Threads, the sections would be approximately 30, 60, 60, 90, and 30 seconds long, respectively. Although some of these are 2:1 and 3:1 ratios, the perceptual difference between 30 and 60 seconds is surely not the same as the difference between 30 and 60 minutes. All of these segments might generically be perceived as “short.” Thus, because it stays true to these formal conventions and the listening experience supports it, the piece may be a true example of moment form in that the progression of moments is perceptually arbitrary, and the sections maintain relatively similar proportions to one another.

Threads is a truly interesting case where the individual parameters all seem to disagree about the structure of the work, but when one considers the ways in which they interact with each other and the piece as a whole, a logical and perceivable formal structure emerges. Indeed, this piece represents an instance where the formal whole really is greater than the sum of the parts.

Concluding Thoughts

Throughout this chapter, we have examined four distinctly different works of acousmatic electronic music that each present a unique challenge to the analyst. Jean-Claude Risset’s Sud presented a seemingly controlling extra-musical narrative that clashed with the experience of the formal structure. John Chowning’s Stria appeared to be one continuous, warping texture without discrete formal units. Ligeti’s Artikulation is a piece for which we have both extensive notes from the composer about how it was written and popular graphic scores and transcriptions, but we saw that the experience of the formal design of this work might not align with the compositional process. Finally, Elainie Lillios’s Threads showed us a prime example of moment form being manifested through a number of seemingly disagreeing individual parameters.

Each of these examples demonstrates the viability of a parametric approach for the segmentation of formal units and the determination of formal design and structure. The parametric intensity graph serves to show unique parametric trajectories and gestures present throughout a work. By visualizing these trajectories, we are able to discuss the behavior of specific parameters and the ways in which segments are similar, contrast one another, and function within the formal experience of the piece. The value-change graphs further help to bring out characteristics of these gestures. They especially help to show which segments of a piece feature rapid changes in parametric intensity and which segments are more stable, demonstrating possible adherence to the criteria of segregation and cohesion, respectively. Furthermore, by looking at characteristics like volatility and stability, we can compare segments to one another and help determine how they might function within the experience of the form of the work. The composite value-change graphs helped to demonstrate how various individual parameters work together to manifest the formal design of a piece. In some cases, many of the individual parameters agreed with the composite graph. However, we also saw instances, as in Threads, where the individual graphs were in disagreement, resulting in a composite graph that suggested a clear (but different) formal design. Ultimately, the parametric approach demonstrated throughout this chapter shows the efficacy of using a methodology modeled on the listening experience to analyze formal design and segment formal units. In the next chapter, we will discuss the behavior of those lower-level structures.

CHAPTER 4

BEHAVIOR OF SOUND OBJECTS AND FORMAL UNITS

Small-scale Structure in Electronic Music

In the previous chapter, we discussed the segmentation of electronic music and the identification of various types of large-scale formal designs. In this chapter, we will focus our attention on individual formal segments and the sound objects and processes that underlie them.

We will continue to utilize parametric thinking in many instances, but instead of examining the work as a whole, we will examine the sonic changes in relation to small units and individual sound objects. We have examined the concept of the sound object in Chapter 2, but let us be reminded of our definition once more: a fixed sonic gestalt arrived at through the reduced listening of an auditory perception. By examining how sound objects are treated and transformed within discrete structural units, we will observe the ways in which they interact with the larger structures present in the work.

One set of tools that will be especially useful for this task is provided by Denis Smalley’s theory of spectromorphology. I have previously mentioned spectromorphology in Chapter 1, but here I will provide a fuller explanation of its concepts, and we will use elements of the theory to analyze and describe the behavior of sound objects throughout this chapter. The term “spectromorphology” itself is comprised of two roots. The first, spectro-, refers to the spectral and harmonic qualities of the sound being examined, while -morphology refers to the ways in which these spectral characteristics are transformed through a temporal span.157 Thus, there is a dual necessity when utilizing spectromorphological theory: we must first understand the quality of the sound in question before we can comprehend the ways in which it is transformed. The practice of reduced listening is especially suited to this task, and we will continue to utilize a reduced-listening approach in order to identify sound objects. Spectromorphological thinking is particularly apt for our present examination because, as Smalley points out, it is “a descriptive tool based on aural perception.”158 In other words, Smalley’s theory will be helpful in engaging the subjective, experiential side of our investigation. Certain elements of spectromorphology will be adopted nearly wholesale and used in our analyses, while other parts will be extracted and adapted for our own uses.

157 Smalley, “Spectromorphology: Explaining Sound Shapes,” 107.
158 Smalley, 107.

Rather than provide a detailed description of each of the elements of spectromorphological thinking, I will discuss each as it becomes relevant to the pieces and structural segments being examined. We will discuss a number of segments that were previously identified in Chapter 3, but we will also look at small-scale structure in a number of new pieces. I should note here that while it is not the purview of this chapter to provide parametric or segmentation analyses for the new works being considered, the segments that I focus on have been identified through the same methodological process outlined in the previous chapter.

Jean-Claude Risset: Sud, B Section

In the previous chapter, I argued that Sud might be effectively understood as an ABA’ design with a juxtaposed B section. Leaving large formal structures for a moment, let us consider only the juxtaposed B section and how it is treated and manifested at the local level. As I have suggested, I believe that this unit can be further divided into two subgroups based upon the treatment and usage of two distinct individual sound objects. The first of these objects is what I will call the “noise” object, first presented at the beginning of the B section around 1:35. The object itself is extremely noise-heavy, but it also has a clear focal point in terms of frequency such that it can be perceived as descending in pitch. (The presence of noise does not inherently cause a loss of pitch perception; even “pure” types of noise like white and pink can be filtered in a manner that encourages the perception of a central frequency.) This object culminates by reaching a local minimum frequency and then cuts off. The noise object is sounded again and again throughout the first section of B, strongly marking it for consciousness as an important generating element of the segment. Also, this continued repetition and re-audition allows the phenomenological reduction (as described in Chapter 1) to take place in the mind of the listener; the listener is able to develop a sense of the identity of the sound object by continually re-hearing it in slightly varied contexts.

Not only is the noise object salient through repetition and re-audition, but there is a clearly-perceived process underlying the first part of the B section that involves this object. Each time the object is projected into sounding space, another instance of it is initiated during the final descent in frequency of the previous object. This creates a sort of audible layering of sound objects wherein the end of any single instance of an object is not easily perceived because of the intrusion of a new instance. Combined with the descending quality of the object, the listener perceives a never-ending downward cascade of sounds; the ear is continually pulled to new and seemingly higher instances of the sound. The sensation is not unlike that of hearing Shepard tones, except in this instance the overall tessitura of the sound world is actually falling (as can be seen in the tessitura parametric intensity graph in Chapter 3). This overall descent in tessitura throughout the first part of the B section is particularly interesting in that it reflects the gestural characteristics of the sound objects that it is comprised of; the individual sound objects each feature an overall descent in register. I do not mean to suggest any sort of large-scale motivic transformation or Schenkerian-esque motivic parallelism, but rather I am simply pointing out the audible relationship between the individual sound objects and the section as a whole.
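The layering principle described above is easy to demonstrate synthetically. The sketch below is a minimal illustration rather than a reconstruction of Risset’s materials: it overlaps descending sine sweeps so that each new sweep enters during the final descent of the previous one, producing a continuous downward cascade; lowering each successive sweep’s register slightly would also reproduce the overall fall in tessitura.

```python
# A minimal synthetic illustration of overlapped descending sweeps
# (not Risset's actual sound material).
import numpy as np
from scipy.io import wavfile

sr = 44100
sweep_dur, overlap = 4.0, 1.5            # seconds per sweep; entry overlap
n_sweeps, f_hi, f_lo = 10, 1600.0, 200.0

t = np.linspace(0, sweep_dur, int(sr * sweep_dur), endpoint=False)
# Exponential glide from f_hi down to f_lo; phase is the running integral
# of the instantaneous frequency.
freq = f_hi * (f_lo / f_hi) ** (t / sweep_dur)
phase = 2 * np.pi * np.cumsum(freq) / sr
sweep = np.sin(phase) * np.hanning(len(t))   # fade each sweep in and out

hop = int(sr * (sweep_dur - overlap))    # next sweep starts before this ends
out = np.zeros(hop * n_sweeps + len(sweep))
for k in range(n_sweeps):
    out[k * hop : k * hop + len(sweep)] += sweep

out /= np.max(np.abs(out))
wavfile.write("cascade.wav", sr, (out * 32767).astype(np.int16))
```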

The second segment within B is based around the “gong” sound object, which is clearly articulated as an almost raw sound recording just before 3:50. Though much shorter than the first segment of B, this second segment similarly presents a clear process in terms of the sound objects utilized and their transformations. This process relates to Denis Smalley’s spectromorphological concept of “gestural surrogacy,” which I will discuss in detail here before returning to the behavior of the gong sound object. First, it is important to understand what Smalley means when he talks about a gesture in the first place; for him, gesture refers to “an energy-motion trajectory which excites the sounding body, creating spectromorphological life.”159 In other words, Smalley’s conception of a gesture in the spectromorphological sense is something that is inherently physical; it is that which generates the physical processes behind sound generation, not the sound itself. It is the difference between the sound of a drum being struck and the physical act of striking a drum. Smalley’s view of gesture is also strongly related to the first mode of listening, écouter (discussed in Chapter 2), wherein one listens to sounds for their physical causes. As Smalley states, “When we hear spectromorphologies we detect the humanity behind them by deducing gestural activity, referring back through gesture to proprioceptive and psychological experience in general.”160

This physical, generating gesture is what Smalley refers to as the primal gesture, and gestural surrogacy is the process of describing perceived remoteness from this primal gesture. In

159 Smalley, 111. 160 ibid. 136 other words, gestural surrogacy describes the relationship between any individual sound object and the perceived cognitive distance between it and its physical generator. In order to do this,

Smalley employs “orders” of surrogacy; the first-order surrogate of any primal gesture is the projection of that gesture into sounding space. (Remember that the primal gesture is purely physical in nature, and thus has no sounding quality itself.) For instance, if the primal gesture is hammering a nail into a plank of wood, the first-order surrogate of that gesture is the actual sound that this action would create. We might simply describe it by saying, “It is the sound of a nail being hammered.” The second-order surrogate represents the extension of the first-order surrogate into musical space. Second-order surrogacy is where nearly all traditional forms of music making exist; as long as the sound is perceived as traditionally “musical” in some form (a distinction which I will not even begin to attempt to distinguish here), it is displaying second- order surrogacy, according to Smalley’s definition. This is also the realm of musical and imitation; instead of saying, “That is the sound of a nail being hammered,” one might say

“That sounds like a nail being hammered.” It is not the projection of the gesture, but an imitation of that projection. Third-order surrogacy represents those aural situations wherein a gesture can only be inferred or imagined. In these conditions, one might be able to infer some sort of physical initiating gesture, but the perception is too far removed to be sure. This might be because the sound quality of the auditory perception is itself unfamiliar, but it might also be due to the fact that the spectral behavior of the sound is unexpected or non-normative. Finally,

Smalley also allows for the possibility of remote surrogacy, where all source-cause links with a primal gesture are severed. This level of surrogacy seems to exist simply because we are unable

137 to imagine a sound that exists without a generating gesture, even if the cognitive links to that gesture are remote at best.161

Logically, an individual sound object can exist at only one level of gestural surrogacy, since sound objects themselves are immutable cognitive constructs. However, sound-object fields can certainly contain a number of related sound objects at different levels of gestural surrogacy. For our purposes, we may choose to extract Smalley’s conception of gestural surrogacy untouched and utilize it to analyze the behavior of specific sound objects as well as sound-object fields. We would thus examine the surrogacy orders of these sound objects and make observations about the relationships between them. However, it might often be difficult to determine exactly at what order of surrogacy a specific sound object lies. Instances of third-order and remote surrogacy, for example, might be nearly indistinguishable. Similarly, the boundary between second-order music making and third-order gestural inference is not clearly defined in any sense. Therefore, rather than treating gestural surrogacy as rigidly linked to a few nebulously-defined orders, we might instead treat surrogacy as a sonic parameter much like the other parameters that were tracked in the previous chapter. While it is not a property of sound that can be defined in terms of physics or spectral makeup, it is nevertheless an inherently perceptual element of any given sound object. (Not only might surrogacy be examined in regard to a specific sound object or sound-object field, as we are doing here, but it would also be possible to examine surrogacy as a salient parameter across an entire piece if it were perceived to be functioning that way.) If we treat surrogacy as a parameter instead of in reference to specific orders, it is no longer necessary to be concerned with the exact relationship of a sound object to its primal gesture; the analyst does not have to decide whether a sound object is a musical projection or an inferred gesture projection, for instance. Instead, the only issue that must be addressed is the perceived cognitive distance between a sound object and a physical, generating gesture. In this situation, simple parametric intensity values can be utilized to identify relative perceptual closeness and farness.

161 Smalley, 112.
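As a simple demonstration of surrogacy treated this way, the sketch below encodes the perceived remoteness of two sound objects as step functions over time, anticipating the interaction shown in Figure 4.1. The breakpoints are illustrative approximations of my hearing of the noise and gong objects in Sud, not measured data.

```python
# A minimal sketch plotting surrogacy as a tracked parameter: higher values
# mean greater perceived remoteness from the primal gesture. Breakpoints
# are illustrative approximations, not the dissertation's data.
import matplotlib.pyplot as plt

# (time in seconds, perceived remoteness on a 1-5 scale)
noise_obj = [(95, 4), (160, 4), (210, 3), (230, 3)]
gong_obj = [(230, 1), (245, 2), (260, 3), (270, 4)]

for label, points in [("noise object", noise_obj), ("gong object", gong_obj)]:
    times, levels = zip(*points)
    plt.step(times, levels, where="post", label=label)

plt.xlabel("time (s)")
plt.ylabel("surrogacy (1 = close, 5 = remote)")
plt.legend()
plt.savefig("sud_surrogacy.png")
```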

Let us return to the gong sound object, the main object utilized in the second half of the B section in Sud. The gong sound object that initiates this section around 3:50 is a nearly pure, unaltered recording, a sound that is as close to the physical generating gesture (primal gesture) as possible. Following this sample, we can hear the sound being processed and transformed into a series of new sound objects that become increasingly remote from the primal gesture. As we move closer to the reemergence of A material at 4:30 (A’), nearly all semblance of the primal gesture has been lost, and we return to the more abstract sound-object field that is characteristic of the A and A’ sections. Thus, this second segment can be defined relatively simply as a gradual shift through more and more remote levels of gestural surrogacy, away from the primal gesture of the gong being struck. We could certainly construct a parametric intensity graph for gestural surrogacy as it relates to a specific field of related sound objects, but in this instance the process is simple enough to comprehend that a graph hardly seems necessary.

There is also an interesting perceptual relationship between the noise sound object present in the first part of the B section and the gong sound object that characterizes the second part. I distinctly hear the noise object as imitating the resonance of the gong sound object that is to initiate the next section; thus, the noise object is acting as a surrogate of the gong object. The noise object undoubtedly retains its own characteristics and function, but it also has a syntactical relationship with the gong object in that it foreshadows the sudden shift to close surrogacy when the gong object is sounded. This is not to say that these two sound objects have the same perceived source-cause; I do not necessarily perceive the noise object as being generated from a gong sound, but rather as related through gesture and through the gestural imitation of the resonance of the gong. It is thus crucial to maintain a clear differentiation between source-cause and primal gesture: the source-cause is the sounding body, the primal gesture the physical action that excites it. Sound objects might be related by both of these, but it is possible that they share only one of these relationships and not the other.

In Sud, these two relationships actually help in understanding the discrete structure of the B section. The gestural relationship between these two sound objects (that the noise object is a surrogate of the gong object) helps to hold the entire section together through a perceived similarity. Because they can both be perceived as originating from a similar primal gesture, we can group them as a perceptual unit (the cohesion criterion). However, the sudden shift in parametric intensity to a lower level of surrogacy helps to signal the initiation of a new sub-section when the gong sound object occurs. Furthermore, because there is also a shift in aural focus to a sound object with a different perceived source-cause, this moment is even further marked for consciousness. This entire B section is thus an interplay between two discrete sound-object fields and the gestures and source-causes that underlie them. Figure 4.1 shows a basic graph of the interaction between the two sound objects and their perceived levels of surrogacy throughout this section.

Visually, we can see that the behavior of the gong object sharply contrasts with that of the noise object throughout this section. Conceptually, we can also see that the overall structure of this segment consists of a movement from remote levels of surrogacy to close levels of surrogacy and back again, creating a structure within this segment that is reminiscent of the ABA’ structure of the work as a whole. Though the B section itself is made up of only two discrete parts, as we have already observed, the initiation of A’ at 4:30 could be understood as simultaneously completing the process of the B section, a kind of “process elision.” Again, I do not mean to suggest motivic parallelism between larger and smaller structures, but simply that these relationships exist and can be perceived through a close, reduced listening procedure.

Figure 4.1: Interaction between Noise and Gong objects in Risset’s Sud

Surrogacy undoubtedly plays an extremely important role in this section of the second movement of Jean-Claude Risset’s Sud. While there may be other processes involved in the manifestation of this particular segment, surrogacy is perhaps the most salient and marked for consciousness due to the sudden shifts in intensity and conceptual relationships between the sound-object fields utilized. Let us turn to another work of electronic music that similarly utilizes gestural surrogacy as a generating process for small-scale structure.

Jon Fielder: Lösgöra (2016), B Section

Jon Fielder’s 2016 stereo acousmatic work Lösgöra clearly shows some of the technological developments that have occurred across the second half of the 20th century and the early part of the 21st, after many of the other pieces that we have examined were written. However, while it features more modern processing and mastering techniques, and even though the resulting sound world that Fielder creates is audibly modern, the techniques that we have used so far to examine older electronic music will nevertheless continue to be useful.

Consider the second major section of Lösgöra, which occurs during the span of 1:40 – 4:00. Within the context of the work, this is not only the second major section but also the first contrasting section. (I will refer to it as the B section for the sake of convenience, but I will not provide a complete analysis of the large formal design here.)

At 1:40, there is an initiation of a very clear, nearly-raw sample of a guitar string being scratched and then heavily plucked. It is this gesture which forms the basis for both the sound-object field and the primal gesture for all gestural surrogates throughout the entirety of this section. In terms of sound objects, this initiating object (the “string” object) gives way to an entire field of sound objects that all feature the same sorts of spectral envelopes and resonant qualities, forming an audible parallelism among them. This sound-object field is one element that is used to meet the cohesion criterion for large segments; because the listener perceives all sound objects presented during this span to belong to the same field, they will be perceptually grouped into one segment. It is not only the stability of this segment in terms of its overall parametric intensity that causes perceptual cohesion, but also the stability of the sound-object field itself. Thus, even when there is a clear and audible shift to the use of sound objects from a different part of the field at 3:30, these new objects do not cause a break in cohesion, due to the overall stability of the sound-object field. While field stability is not necessarily something that is perceivable at higher structural levels, it is clearly present on a smaller scale, and it is evident here. Put more simply, the stability of this segment is partly created because the entire field of sound objects used throughout can be traced back and related to the generating sound object that began the segment.

Returning to the concept of gestural surrogacy, Lösgöra presents a process related to the one we observed in Risset’s Sud; however, the process is perceptually reversed. Whereas Sud presented a number of gestural surrogates that led the listener to infer a primal gesture (eventually revealing that primal gesture through a raw recording), this segment of Fielder’s work presents the listener with the primal gesture first and then allows sound objects at remote levels of surrogacy to emerge. In other words, the gestural process in Sud is one of discovery, whereas this segment of Lösgöra is one of gestural exploration; the listener is not tasked with determining or inferring a potential physical source-cause, but rather is left to examine the aural possibilities of an already-presented gesture. This primal gesture, audibly displayed at the beginning of the section, is the plucking of a string. While we also hear scraping, it is the pluck (along with its resulting resonance) that is transferred into higher levels of gestural surrogacy throughout the segment. My perception of gestural surrogacy in relation to this primal gesture in this section is shown in Figure 4.2.

Figure 4.2: Surrogacy of Plucking Gesture in Fielder’s Lösgöra

As can be seen, I perceive a salient process unfolding throughout this segment in terms of the surrogacy of the plucking gesture. There are three prominent sub-segments, each featuring a field of related sound objects that sits, each time, at a slightly more remote level of gestural surrogacy. These sub-segments are each separated by a resonant sound object at the most remote levels of surrogacy; this object is aurally salient at both 2:30 and 3:30, and it evokes elements of the primal gesture, most notably the resonance that follows the pluck. While it may be possible to connect this remote surrogate to a number of different primal gestures, the salience of the resonant quality of the primal gesture presented at the beginning of the section naturally leads to relating it to the resonance of the surrogate. In other words, if the primal gesture’s resonance was aurally important at the beginning of the segment, the return of that resonance (even an implied return) should likewise be viewed as important. Thus, the implication of resonance serves to split the segment into three unique units, each of which is part of a larger overall motion from closer to more remote levels of gestural surrogacy. This is the process of gestural exploration that plays out over this span of the work.

Broadly, my impression is that gestural surrogacy seems to play an important role in the B sections of works, at least in the pieces that I have examined through a spectromorphological lens. One might be left to wonder why this is. Perhaps the most obvious reason is simply that playing with gestural surrogacy is an efficient and audibly salient way for the composer to signify a change in rhetoric. Similarly, it may be that a change in surrogacy simply causes the listener to perceive a change in musical rhetoric, whether or not the composer intended it. However, whether this change is driven by design or by perception does not alter the result; gestural surrogacy is an effective way to change the discourse of the work. Thus, because its role is fundamentally state-altering, it makes sense that gestural surrogacy often shows up in B sections or contrasting sections, given their nature as opposed to A sections or stable sections.

This is not to say that B sections themselves must be unstable, but rather that in order to identify a section as contrasting, some level of instability must be introduced to disrupt the feeling of sameness. The role of an opening or stable segment is often to clearly and efficiently present musical material to the listener; these segments have historically been utilized as a way to “set the stage” for the musical work that is to unfold. Thus, if we expect electronic music to behave similarly, the opening and stable segments should likewise clearly and audibly present the musical material of the work. Playing with gestural surrogacy would thus disrupt any presentational quality that a stable segment might have, as the listener might be left unable to grasp the totality of the sound materials. On the other hand, B sections and contrasting sections have historically featured more advanced and interesting musical rhetoric within a given work, and their role is often to extend and explore the musical material that was presented in the opening section. The development section of a sonata form is an excellent example, wherein material that was presented in the exposition is typically fragmented, sequenced, expanded, recombined, and so on. Likewise, in contrasting sections of electronic pieces, one often finds a fragmentation and exploration of previously-presented musical materials, and gestural surrogacy provides one of the most common exploratory avenues.

The B section of Lösgöra is thus constructed from a sound-object field that causes segment cohesion due to the stability of the field and the relationships among the sound objects that it comprises. Simultaneously, the process that underlies the activity of this segment is again gestural surrogacy. While Risset’s Sud uses gestural surrogacy to foreshadow and ultimately reveal a primal gesture that relates the sound-object field, Jon Fielder uses an initiating primal gesture in order to explore the sonic possibilities of the gesture and its surrogates. (It is extremely unlikely that these are the only two ways that a composer might utilize gestural surrogacy or that a listener might perceive it, but these are two audible and salient examples found in the literature that show some of its possible uses.)

As a final note, this type of analysis and its results are directly in line with what Brian Kane states about concrete electronic music: "Generally speaking, concrete compositions (or parts thereof) are often organized around one of the three features inherent in any given sound: the source, the cause, and the effect."162 In these two cases, one could make a convincing argument that each is actually organized around all three features in some way. Lösgöra, for example, is clearly based around the cause (the plucking primal gesture), but also the effect (the gestural surrogates) and the source (the "string" sound object and its related sound-object field). We can observe a similar relationship in Risset's Sud. Kane's comment is made specifically in regard to pieces of musique concrète. While Sud and Lösgöra are not, strictly speaking, examples of musique concrète (at least in the narrow Schaefferian sense), they nevertheless retain clear elements of it, especially in their use of recorded sounds as generators of musical material. If Kane is correct in his assertion that concrete compositions are generally based in one of his proposed processes, then it is unsurprising that the individual segments of these works are based around spectromorphological attributes of their salient sound objects.

162 Kane, Sound Unseen, 126.

John Chowning: Stria, First Section

As we saw when we examined the large-scale structure of John Chowning's Stria in the previous chapter, one of the primary difficulties faced when analyzing this work is that its salient auditory characteristics tend to be textural. That is, the sound objects that Chowning uses throughout the work tend to be amalgamated into one contiguous "whole" rather than heard as discrete individualized objects. However, Smalley provides a number of useful ways of describing the overall behavior and motion of a texture composed of individual sound objects. I will briefly explain the concept of textural motion and then examine how it might be applied to the first section of this work (itself having two sub-sections).

I have reproduced Smalley's diagram of the possible types of textural motion in Figure 4.3. While the diagram itself may initially seem confusing (and one can certainly make an argument that it is unnecessarily complex), the actual principles behind Smalley's conception of textural motion are quite intuitive.

Figure 4.3: Texture Motion from Smalley163

163 Smalley, "Spectromorphology: Explaining Sound Shapes," 118.

The four terms in the leftmost column of the diagram represent the ways in which individual sound objects interact with one another in order to create a sense of motion. Streaming implies the sensation of a series of distinct and separable layers. This separation may be caused by a gap in the frequency or spectral space between streams, or it may be because of a distinct differentiation between the sound objects that are utilized in each. (It also stands to reason that there is some cognitive point where too great a spectromorphological difference between the sound objects of different streams would actually result in a perceptual break in the texture.) Flocking is exactly what one might imagine: a group of sound objects behaving and moving as a collective body. It is conceivable that a flocking texture could become tightly knit to the point that the texture itself could be treated as a type of sound object and investigated based upon its spectromorphological attributes. Smalley does not go into depth about his conceptions of convolution or turbulence except to state that they represent "coiling and twisting" and "irregular fluctuation, possibly stormy," respectively, and that they "involve confused spectromorphological entwining, but nevertheless tend to concur in their chaos." I interpret these definitions to mean that convolution is a sensation of textural motion around a fixed point, and turbulence is a sensation of irregular and non-linear motion. (In any case, these are the perceptual situations for which these terms will be employed here.)164

164 Smalley, 117.

The center of the chart represents the perceptual spectrum between continuous and discontinuous texture motion. Again, though it looks complicated, it essentially states that a completely continuous repetition of sound objects will result in the perception of a sustained texture, whereas a discontinuous series of sound objects will result in an iterative texture (wherein sound objects are heard as individual units). Somewhere in between these two is granular continuity, where one might perceive a texture as either sustained or iterative depending upon the density of the sound grains. Ultimately, any of the four texture qualities (streaming, flocking, convolution, turbulence) could exhibit any of the motions on the continuity-discontinuity scale. For example, it would be possible for a flocking texture to shift from a sustained texture to a granular texture. Similarly, it would also be possible to retain an iterative texture while switching quality from streaming to turbulence.165 Smalley provides little explanation of the right side of the chart, but it seems to relate to the role of higher-level grouping and repetition of the texture. For the present analysis, I will mainly rely on the qualities and motions in the first two columns of the diagram. (The second column includes both discontinuous/continuous and the iterative-sustained scale.)

165 Ibid.
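As a practical aside, the perceptual claim above (that grain density pushes a texture between the iterative and sustained poles) can be demonstrated with a minimal granular-synthesis sketch. The following Python code is purely illustrative; the function name and parameter values are my own assumptions, not drawn from Smalley or from any of the works analyzed here.

```python
import numpy as np

def granulate(source, sr=44100, grain_ms=50.0, grains_per_sec=40.0,
              out_secs=5.0, seed=0):
    """Scatter short, windowed 'grains' of a source signal across an
    output buffer. At high grain densities the grains overlap and fuse
    into a continuous (sustained) texture; at low densities each grain
    is heard as a discrete onset (an iterative texture)."""
    rng = np.random.default_rng(seed)
    glen = int(sr * grain_ms / 1000.0)
    window = np.hanning(glen)                      # smooth grain envelope
    out = np.zeros(int(sr * out_secs))
    for _ in range(int(grains_per_sec * out_secs)):
        src = rng.integers(0, len(source) - glen)  # random read position
        dst = rng.integers(0, len(out) - glen)     # random write position
        out[dst:dst + glen] += window * source[src:src + glen]
    return out
```

Sweeping grains_per_sec from a handful to several hundred moves the result audibly along Smalley's continuity-discontinuity scale without changing the grains themselves.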

Let us now return to Stria and examine the ways in which texture motion influences the behavior and perception of the A section (around 0:00 – 3:20), which comprises two sub-sections separated by a transitional span from around 1:10 – 1:30. It is evident even on a first listen that this entire span of the work is based upon one conceptual sound object and its field of related sound objects. I call this the "bell" object, although it is actually synthesized and not derived from a recorded sample.166 Since the object in question is synthetic, the composer has absolute control over its spectral content and spectromorphological attributes. According to Smalley's conception of texture motion (at least the first two parts that are being utilized here), we might classify the first sub-section as initially being flocking-iterative. The flocking part is self-explanatory; the texture is created through a series of layered sound objects that are perceived as part of a larger whole and not necessarily as unique objects in themselves. However, the designation and perception of this segment as iterative is actually quite interesting and potentially problematic. Remember that the continuity-discontinuity spectrum has to do with the perception of the occurrence of new sound objects within the texture (at least in the way it is being utilized here). For example, one could imagine a situation where, even though a great number of individual objects or grains are being separately and clearly articulated, the overall perception of the sound is as one continuous texture. In this sub-segment of Stria, a sort of reversal of this process takes place. Although the sound objects that make up the overall texture are themselves sustained, the listener clearly perceives the identity of each new sound object that is articulated. This demonstrates one key concept regarding texture motion: the spectromorphological characteristics of the sound objects that make up a texture do not necessarily align with the spectromorphological attributes of the texture itself. In other words, the resulting texture is not simply the sum of its spectral parts.

166 The "bell" object is actually the result of the process of frequency modulation (FM) synthesis, a now ubiquitous part of sound design which Chowning pioneered.
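Since footnote 166 invokes FM synthesis, a minimal sketch of the technique may be useful. The code below illustrates the general principle only; the carrier and modulator frequencies and the envelope shape are arbitrary stand-ins, not the values Chowning actually used in Stria.

```python
import numpy as np

def fm_bell(dur=4.0, sr=44100, fc=400.0, fm=280.0, index=5.0):
    """Two-oscillator FM tone: a modulator sinusoid varies the phase of
    a carrier sinusoid. Decaying the modulation index along with the
    amplitude makes the tone grow spectrally 'darker' as it fades."""
    t = np.arange(int(sr * dur)) / sr
    env = np.exp(-4.0 * t / dur)                  # bell-like exponential decay
    phase_mod = index * env * np.sin(2 * np.pi * fm * t)
    return env * np.sin(2 * np.pi * fc * t + phase_mod)
```

Because the carrier-to-modulator ratio here (400:280, or 10:7) is not a simple integer ratio, the sidebands fall at inharmonic positions, which is what lends FM tones of this kind their bell-like quality.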

The articulation of new sound objects in the first sub-segment of A relies heavily on a small amount of spectral space between sound objects. This space is not enough for the objects to be perceived as separate textural streams, but it is enough to clearly hear the onset of each new object within the texture. However, around 0:55, as we approach the moment of transition between the first and second sub-segments, the rate at which new sound objects are articulated increases. Furthermore, the spectral space that separates new sound objects from the existing texture decreases. This increase in articulation rate and tightening of spectral separation creates a change in perception from a texture that is iterative to a texture that is sustained (the other end of the continuity-discontinuity spectrum). Even though the actual length of newly articulated sound objects is shorter, the lessened ability to perceive the onset of these sound objects results in a shift on the spectrum toward something more continuous. It is this shift in textural motion from flocking-iterative to flocking-sustained that prepares the transition from the first sub-segment to the second.

The second sub-segment of the A section maintains the sustained textural motion that led into the transition, but the flocking behavior that characterized the first sub-segment is replaced by streaming. Whereas at the beginning of the piece, new iterations of the generating sound object were generally close to the overall spectral content of the larger texture, the second sub-segment maintains clear separation between different spectral layers of sound objects. Though these objects undoubtedly all belong to the same sound-object field, the spectral distance between them (and the lack of sound objects that occupy that spectral space) creates a distinct impression of textural streaming. Recall that this is one of the two ways Smalley mentions that textural streaming can occur, the other being that the spectromorphological attributes of the sound objects present in each stream are different enough that the listener groups them into separate perceptual categories. Toward the end of this second sub-segment, we clearly hear the intrusion of a lower textural stream that causes the overall texture to shift toward the iterative end of the continuity spectrum. In effect, this mirrors the process that took place during the first sub-segment, the move from iterative to sustained. Thus, we might understand the two sub-segments of the A section in Stria to function as a sort of departure and return in terms of their continuity.

Figure 4.4 provides a visualization of the textural "path" that the work takes throughout the two sub-segments of the A section. As we can see, all four possible combinations of the flocking/streaming and iterative/sustained dualities are presented.

Figure 4.4: Textural Motion in Chowning’s Stria (A)

We can also see that the ordering of the presentation of these different states is not arbitrary; rather, a clear spectromorphological process underlies the entire section. Thus, we can define each of the sub-segments and their function within this formal level of the work based upon their two-dimensional movements. The first sub-segment moves along the continuity spectrum toward the sustained pole, the transition retains that continuity while changing the textural behavior from flocking to streaming, and the second sub-segment reverses the process of the first, returning to the discontinuous (iterative) pole of the continuity spectrum. This process is not inherently complex, but it is something that can be easily heard provided that one knows what to listen for. Surely, there are many different ways that one might interpret and understand this segment; however, this interpretation clearly utilizes salient, audible spectromorphological attributes of the work and the sound objects from which it is created.

As a final note about this work, it is worth stating that the process that seems to unfold in Figure 4.4 (the "loop" around the matrix) is actually completed at the end of the work, as shown in Figure 4.5.

Figure 4.5: Textural Motion in Chowning's Stria (Whole Work)

While the B section of the work is not what is being investigated here, it will suffice to say that it maintains the streaming-iterative motion that characterizes the culmination of the process of the second A sub-segment. The very end of the work completes this cycle by transitioning back to the flocking-iterative textural motion with which the work begins. Thus, not only does Smalley's "texture motion" play a role at lower structural levels and at the level of sound objects, but it may actually be understood to play a role in generating the entire form of the work. Again, whether or not the listener is actually drawn to these spectromorphological attributes will necessarily influence their interpretation. However, it is evident that if the ear is directed toward these salient processes, it is not difficult to see how they function within the musical discourse of the work as a whole.

Elainie Lillios: Threads, Introduction

In the previous chapter, we identified a clear introductory section in Elainie Lillios's stereo acousmatic work, Threads. Now, let us examine the attributes of this segment that allow us to ascribe to it an introductory function. Before we begin that analysis, however, it is worth noting that the texture motions present throughout this segment are prime examples of granular and sustained textures that are not made up of sound objects that are themselves sustained. In the previous analysis, the sound objects that were utilized were long and stable, so it might be easy to confuse the designation of a texture as "sustained" with the spectromorphological attributes of the sound objects that make up the texture. In the opening of Threads, however, the listener can clearly perceive a great number of very short, granular sound objects, but the overall aural focus is on the larger texture and not on these discrete sounds. Even though the texture's constituent sound objects are not sustained, the overall sound is perceived as a single, unbroken texture. (While texture motion is not the focus of the present analysis, a careful listen reveals that the entire introduction is based upon shifts back and forth between flocking-sustained and flocking-granular textures.)

Texture motion worked especially well for analyzing Chowning's Stria partially because the work is so driven by textural change and because it is generally governed by a single perceived texture. While Threads undoubtedly features textural shifts, what is more readily perceived is a general sense of motion and growth processes, not necessarily related to a specific texture. In many ways, these motion processes seem to function as the locus of rhythmic activity; while traditional conceptions of rhythm are not readily present within the work, there is nonetheless a sense of temporal pacing. This temporality is generated largely by growth and motion processes.

Denis Smalley posits four broad categories of motion and growth processes that may be present in a given work of electronic music. I will briefly explain each of these categories, which can be seen in Figure 4.6.

Figure 4.6: Motion and Growth Processes from Smalley167

167 Smalley, "Spectromorphology: Explaining Sound Shapes," 116.

These motion and growth processes all tend to have some sort of directional tendency or expectation, and they can help us to determine the discursive function of both small units and groups of units. Unidirectional motions are exactly what they sound like: the listener perceives motion from one point to another. While Smalley's sub-categories of unidirectional motion all have to do with the perception of height, we could also imagine unidirectional motion within a stereo field or a quadraphonic field. In other words, the perception of unidirectional motion does not necessarily have to be related to height, but may in fact be mapped onto other auditory perceptions as well. Reciprocal motion involves a departure and return; again, this may be something like "up-then-down," but it could be related to other parameters of sound.

Cyclic/centric motion depends upon what Smalley calls "spectromorphological recycling," which can generally be read as "repetition." This makes sense, given that a series of repeated motions will tend to draw the focus to a specific point or range of points; we are naturally drawn to patterns and repetitions of patterns, so a listener tends to instinctively gravitate toward the points that define them. Bi/multidirectional motions tend to have a sense of multiple directed motions, as evidenced by sub-categories like "convergence" and "dilation." They can often be thought of as composites of other types of motion. For example, "divergence" typically consists of two or more simultaneous but distinct unidirectional motions. Multidirectional motions are not inherently opposed to the other three broad categories of motion or growth (as Smalley's figure seems to imply), but rather are best thought of as motion/growth composites.168

168 Smalley, 115–16.

The introduction to Threads exhibits a number of different motion processes from varying categories. The most salient of these is the series of four well-defined unidirectional motions that begin the work. I perceive an initial brief unidirectional ascending gesture that provides a dramatic increase in energy to begin the work. This ascent creates a set of sonic expectations; while there is not a singular expectation of what will follow this dramatic ascent, it would also be incorrect to say that all motion continuations are equally probable. For instance, we might expect a stabilization at the peak of the ascent or a dissipation of energy, but we are much less likely to expect a sudden drop in register or a complete dissolution of the texture. The actual result of the initial unidirectional ascent is a dissipation of the built-up energy in the form of a unidirectional descent starting from a slightly lower register. The two trajectories are thus not connected via register or sound object, but we can create a perceptual link in that the second motion fulfills one possible expected outcome of the first. The third and fourth gestures within the introduction essentially mirror this process, again featuring an initial energy-gaining ascent that results in a dissipation of energy via a unidirectional descent.

This reading of the introduction as a series of four distinct unidirectional motions is certainly audibly salient, but we also might think of it as a sort of "surface-level" reading. It is possible to think of these motion trajectories not only as individual units but also as parts of larger composites and higher levels of motion. Consider the first two motion trajectories that we identified in the introduction: the initial unidirectional ascent and the dissipating unidirectional descent. While each of these gestures has its own trajectory, they both work together to create a composite motion that would fall into the category of "reciprocal" motion. Whether it is a "parabola" or an "oscillation" is of little importance; rather, we can simply state that the relationship of the first motion to the second is clearly causal. The energy-gain and ascent of the first directly leads to the dissipation and descent of the second, potentially allowing us to group them as a larger second-order reciprocal motion. Furthermore, it stands to reason that if we are able to group the first two motion trajectories in such a manner, we are also able to group the third and fourth motions as a second-order reciprocal motion, since these latter motions mirror the growth behaviors of the first two.

We can even go one level higher and group these two second-order reciprocal motions into a larger cyclic motion of the third order. Since both of the second-order motions are of the same reciprocal nature, grouping them together into a single periodic/cyclic motion is only natural; we could certainly imagine that this cycle of second-order reciprocal motions could continue nearly indefinitely. Thus, we can see that the introduction of Threads can be understood both at the surface level, as a series of individual unidirectional motions, and at higher levels, as a pair of second-order reciprocal motions or a third-order cyclic motion. This multi-level relationship is shown in Figure 4.7.

Figure 4.7: Levels of Motion in Lillios’s Threads (Introduction)

While it may initially seem like this is an inherent property of all motion trajectories, it is not necessarily so. It may be possible to group surface-level motions into higher-order motions, but the specific three-level structure exhibited by the opening of Threads is unique in its recursive ability to form higher-order structures. For example, if the third and fourth motion trajectories of the introduction did not mirror the motion characteristics of the first two, hypothetically speaking, the overall motion of the section would not be cyclic. Furthermore, depending upon the nature of these hypothetical motions, the third and fourth might not even combine into higher-order motions past the surface level in the first place. This is not to make a claim about how rare these types of higher-order motions are in electronic music, but simply to note that motion combinatoriality is not an inherent property.

The prevalence of this type of motion makes sense given the introductory nature of this segment as a whole. The function of an introduction is generally to lay out musical materials and suggest ways in which they will play a role in the musical discourse and rhetoric of the work. In that sense, the higher-order cyclic motion shown here allows the material to be aurally introduced and examined without a sense that the work has really "gone anywhere." If the listener did have a sense that the work had somehow traversed a temporal span from point A to point B, the perception of the segment as introductory would likely be lost. Thus, we might hypothesize that cyclical motion (and perhaps higher-order and recursive motion generally) is most likely to be found in segments that are inherently stable, such as introductions, codas, etc., and not in sections that feature goal-oriented processes like transitions. Ultimately, it is clear that one of the most important elements of the introduction to Threads is not simply the sound objects employed or their spectromorphological characteristics, but rather the way the overall motion and growth trajectories underlie and support its discursive function.

Natasha Barrett: Mobilis in Mobili (2006), First Two Sections

To this point, all analytical claims have been rooted in a methodology based in Schaefferian principles of reduced listening, focusing only on the spectromorphological and sonic characteristics of sound objects themselves. However, we might wonder what happens when the musical discourse of a piece is such that extrinsic links between sound objects within the work and extramusical conceptions are inevitable. In other words, how is our analysis affected if the primary mode of musical discourse is decidedly not of the reduced listening variety, but instead actively invites the listener to hear extra-sonic information? Natasha Barrett's 2006 acousmatic work Mobilis in Mobili provides an exemplary case study relating to this question. While one could certainly approach the work in much the same way as we have up to this point, paying attention only to the spectromorphological attributes of the sound objects present within the piece, Mobilis in Mobili actually relies on this extra-sonic information in order to convey its inherent spectromorphological information. Thus, relying solely on either extra-sonic or spectromorphological attributes would in effect miss half of the experience. This is not to say that a reduced listening approach is impossible, but simply that if the perception of the musical rhetoric is that it relies on extra-sonic information, one might choose to follow both a sonically intrinsic and an extrinsic path and see how they interact with one another.

A key concept from Smalley's theory of spectromorphology that might aid in this process is source bonding. This is "the natural tendency to relate sounds to supposed sources and causes, and to relate sounds to each other because they appear to have shared or associated origins." Thus, the perception of a shared sonic origin allows us to perceptually group a field of sound objects. Furthermore, "source bondings may be actual or imagined – in other words they can be constructs created by the listener."169 One might even argue that whether or not we are actively focusing on source bonding in an electronic work, it is inherently part of listening. As Joanna Demers notes, "empirical data and common sense both indicate that the recognition and classification of sound are integral aspects of the listening process and cannot be easily discarded."170 Thus, when a work seems to emphasize extramusical and extra-sonic listening processes as an inherent part of its musical discourse, source bonding might prove to be a useful analytical endeavor.

169 Smalley, 110.
170 Demers, Listening through the Noise, 87.

Let us now turn to the first segment of Barrett's Mobilis in Mobili and explore the interaction between intrinsic spectromorphological attributes of the sound objects and their extrinsic links. This segment, which lasts from the beginning of the work until approximately 2:20, exhibits a high degree of textural streaming. We have previously examined this concept in relation to textural motion. Here, the concern is not the motion of the specific textural streams present within this segment of the work, but rather the spectromorphological properties that cause the cohesion of these streams in the first place. Previously, we observed that textural streams can be considered cohesive either because of the spectral space present between streams or because of an observed spectral similarity among objects that make up the streams. While both of these are undoubtedly possible criteria for texture stream cohesion, I would posit here that fields of source-bonded sound objects might also allow for textural cohesion. In this situation, it is not the spectral spacing or the similarity of the objects that allows for perceptual cohesion, but rather the extra-sonic interpretation that all sound objects present within the texture stream are derived from or related to the same sonic source.

In the first segment of Mobilis in Mobili, source bonding serves to differentiate three distinct texture streams present throughout the section. The first of these that is heard is the "water" texture. The sounds present in this stream have clear shared spectromorphological attributes that link them to sampled sounds of water and bubbles, including some sounds that are heavily processed but still retain behavioral characteristics of the perceived source cause. Second, we hear a texture related to what seems to be a sample of a wooden door being opened and closed. This "door" stream not only features sounds that are reminiscent of wooden objects being struck, but there is also the sound of the hinge resonating as the door is being moved, causing a salient pitched sound to emerge. Since all of these sound objects can be related to a source sound, they can be grouped into one perceptual stream via source bonding. (Whether or not this texture is actually based on a sample of a wooden door is unclear, and I genuinely do not know if that is the source sound. However, recall that relating a field of sound objects via source bonding requires only the perception of a shared source.) The final texture that we hear throughout this segment is a "synthetic" texture, which features sound objects that appear to have a shared origin as synthesized sounds. Thus, the first 2:20 of the piece evoke a sound world suggesting multiple states of being: liquid, solid, and ethereal. The textural streaming creates a distinct sense of vertical temporality; the aural focus is on the interaction between these layers, not on a linear progression from point A to point B. As we have seen previously, this type of temporality and perception effectively fulfills the role of an introductory section in "setting the stage" rather than traversing cognitive distance.

Source bonding plays a very important role in the transition lasting from 2:20 – 2:50 between the first large segment and the second. The second segment begins at 2:50 with a nearly raw recording of a group of men singing, a sound that seems to be a clear contrast to the material that precedes it in the first segment. Indeed, if these two segments were simply butted up against each other, the effect would be stark. However, the transition between these two segments introduces a sound-object field that creates a "source-bond bridge" between them. At 2:20, we hear a distinctly synthetic-sounding voice, most likely a sampled voice or synthesizer that has been processed through a vocoder, comb filter, or some other related device. This sound-object field is important in that it can be source bonded to the synthetic texture stream that precedes it via the shared synthetic quality of their spectromorphological attributes. If one perceives them as both being derived from a synthetic source, they may be source bonded. Similarly, the perception of the sound object as distinctly vocal in nature allows it to be source bonded to the singing stream that is to come at the beginning of the second segment. Thus, this brief "synthetic voice" sound-object field creates a perceptual link from synthetic to organic, processed to raw, across the transition from the first segment to the second via a series of interconnected source bondings.

A simple visualization of this process can be seen in Figure 4.8.

Figure 4.8: Source Bonding Link in Barrett’s Mobilis in Mobili (Transition)
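The analysis above does not claim to identify the actual processing chain behind the "synthetic voice," but the comb filtering mentioned as one possibility is simple enough to sketch. Below is a minimal feedforward version in Python; the function name and parameter values are illustrative guesses, not a reconstruction of Barrett's processing.

```python
import numpy as np

def feedforward_comb(x, sr=44100, notch_freq=200.0, gain=0.7):
    """Feedforward comb filter: y[n] = x[n] + gain * x[n - K]. Mixing a
    signal with a short delayed copy of itself carves evenly spaced
    peaks and notches into the spectrum, which is one easy way to give
    a recorded voice an audibly 'synthetic' coloration."""
    K = max(1, int(sr / notch_freq))   # delay length in samples
    y = np.copy(x).astype(float)
    y[K:] += gain * x[:-K]
    return y
```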

Source bonding plays an extremely important role in the understanding of the second segment as a conceptual unit as well. After the transition, the streams featuring the wooden door and synthetic sound objects from the first segment dissipate, while the water sounds remain, along with the recording of the group of men singing. Clearly, the singing takes primary aural focus, as it is the novel sound that has been newly introduced. The listener may immediately believe that this section will be a sort of "soundscape" featuring the sounds of water and the singing men, perhaps evoking visual imagery through its sonic design. However, the singing sample soon becomes heavily processed (most notably through granulation, amplitude modulation, and panning effects) to the point where, if the listener had not heard the raw sample from which it was derived, the connection between the sample and its processed version might be lost. Thus, this entire segment relies on source bonding, the perception that the processed sound and the raw sound both come from the same source, in order to hold together as a coherent unit in the first place. Furthermore, source bonding also allows for the cohesion of distinct texture streams within the segment, preserving the temporally vertical nature of the work that was established in the first section. This section in particular features texture streams that actually overlap one another in terms of spectral space and spectromorphological content, so paying attention to source bonding and the intrinsic-extrinsic links among sound objects within the texture is critical to understanding the discursive elements in this span of the piece.
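Two of the treatments just named, amplitude modulation and panning, can likewise be sketched in a few lines (granulation was sketched earlier in this chapter). As before, the code is a generic illustration in Python, and the modulation rates are my own guesses, not Barrett's actual settings.

```python
import numpy as np

def am_and_autopan(x, sr=44100, am_hz=8.0, pan_hz=0.25):
    """Amplitude modulation (a slow sinusoid multiplies the signal,
    producing a tremolo-like pulsation) followed by an equal-power
    autopan that sweeps the result across the stereo field."""
    t = np.arange(len(x)) / sr
    am = x * (0.5 + 0.5 * np.sin(2 * np.pi * am_hz * t))        # tremolo
    theta = (np.pi / 4) * (1 + np.sin(2 * np.pi * pan_hz * t))  # 0 to pi/2
    return np.stack([am * np.cos(theta), am * np.sin(theta)])   # L, R
```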

We can thus see that examining extra-sonic links can prove valuable, provided that the musical discourse and rhetoric surrounding a given work suggest that these links are important to the understanding of its structure. We should be careful not to overly rely on or overemphasize the importance of extramusical content in analysis, especially in a system based in Schaefferian principles of reduced listening, but certainly it can help to support a reading. In Mobilis in Mobili, extra-sonic information is key to distinguishing the textural streams present in both the first segment and the second through the process of source bonding. Likewise, the segment between these two sections can best be understood as a series of interconnected source bondings that allow for the seamless transition and transformation from a textural stream that is fundamentally synthetic to one that is based in raw, sampled sound. Thus, source bonding not only allows for the understanding and cohesion of structural segments, but in this work, it also underlies the perception of their formal functions and behaviors.

Toru Takemitsu: Vocalism Ai (1956), Ending

The final piece that we will examine in this chapter is Toru Takemitsu's 1956 electronic exploration of a single word, Vocalism Ai. The piece is based on a large number of recordings of people saying the Japanese word "ai" (loosely translated as "love") in varying and dramatic ways. This is certainly no big secret; it is obvious from the first seconds of the work that it is to be an investigation into the dramatic and spectromorphological possibilities of this one word. Clearly, because the listener will quickly perceive that each sound can be linked to a similar source-cause (not even just the same sonic generator, but in fact the same word), source bonding and the perceptual constructs that come with it are undoubtedly at play here. However, because the entire piece is actually predicated on the listener understanding this source bonding in the first place, one might argue that it has little or no importance for the conception of the formal structure or function of the work.171 Because the source bonding related to the word "ai" underlies the entire piece, one might instead examine other salient spectromorphological factors that seem to play a role in determining structure and function.

171 To draw a parallel to something more familiar, one might observe that there is little analytical value in stating "those are musical instruments" when about to hear a symphonic work. It is not that the statement is false, but simply that being aware of the source bonding among the sounds is an inherent part of understanding the musical totality in the first place.

Perhaps the most salient aural feature of Vocalism Ai is the sense of space that it creates in the mind of the listener. All pieces, electronic and acoustic, will necessarily utilize space as an inherent part of the listening process; it is impossible to hear something without a space in which to hear it. However, Takemitsu's work not only utilizes the concept of space in order to create aural interest, but space actually aids the listener in determining the structure and behavior of individual segments. This directly aligns with Smalley's observations about space: "As well as enhancing the character and impact of spectromorphologies, changes in spatial perspective are a means of delineating musical structure."172 Indeed, in Vocalism Ai, spatialization plays an extremely important role.

172 Smalley, "Spectromorphology: Explaining Sound Shapes," 122.

Broadly, Smalley's primary spectromorphological aim as it relates to spectral space is to create a grammatical lexicon of terms designed to describe the location, characteristics, and behaviors of various types of space.173 The spaces that must be defined, however, are not limited to the real sounding spaces that exist in everyday life. Electronic music is notably different in that it "is not limited to spatial reality, and the composer can, for example, juxtapose and rupture spaces, an impossible experience in real life…This makes electroacoustic music a unique art."174 Thus, any grammatical system devised to talk about space and its characteristics and transformations (what Smalley calls spatiomorphology) necessarily must account not only for the real listening spaces, but also for the imaginary. Figure 4.9 shows Smalley's diagram describing the variances possible under different types of listening spaces.

173 By this point, Smalley's consistent tack in defining broad grammatical systems for every perceived analytical situation should be readily apparent.
174 Smalley, "Spectromorphology: Explaining Sound Shapes," 122.

Figure 4.9: Listening Space Variances from Smalley175

175 Ibid., 123.

The primary differentiation between types of space, shown in the center of the chart, is that between listening spaces and composed spaces. Simply put, a listening space is the actual listening environment in which a work is heard, whether it is a concert hall, your bedroom, or a space station. Similarly, the listening space also includes the method of listening, such as a loudspeaker setup (diffused space) or something more intimate, like headphones (personal space). Composed space, in contrast, is the perception of space present within the work as it is performed or transferred to media; it is the way the listener perceives space when listening. Thus, the composed space is necessarily contained within the listening space. Within the composed space, we can also identify internal and external spaces. Internal spaces exist when a sound object itself seems to encapsulate a resonance, such as a bell or the body of a guitar; internal spaces are the resonating bodies themselves. External spaces, then, are those spaces which enclose spectromorphologies. These perceptual spaces are generally made apparent to the listener through their reverberant qualities.176 The most convenient way to think about external spaces is as analogous to the broader listening space: the internal spaces of individual sound objects are enclosed in the external space of the work, and this entire space is again enclosed within the listening space where the piece is heard. In this way, I disagree with Smalley's conception that listening space and composed space are opposed; rather, I argue that composed space is contained within the listening space, as shown in Figure 4.10. Because each listener will inherently experience the piece through a different listening space, even within the same room, we will focus specifically on the composed space of Vocalism Ai, especially the changing perceptions of external space present throughout the work and the ways in which these transformations provide structure and function.

176 Smalley, 122.

Takemitsu's Vocalism Ai emphasizes the sense of composed space, especially external space, throughout the entire work. Here, we will examine the final segment of the piece, which lasts from just before 3:00 (approx. 2:57) until the end of the piece (4:15). Of the five variants that Smalley considers in Figure 4.9, intimacy seems to be the most prominent throughout much of the piece and certainly at the beginning of this final segment.

Figure 4.10: Re-conception of Smalley’s Spaces

This section begins with a loud, forceful vocalism that has a very long reverberation time, creating a sense of immense space in the mind of the listener. Though the sound itself does not seem far away, its reflections do, and it evokes an extremely large enclosed space. One might imagine a large reflective hall or perhaps even the inside of an empty cistern. Regardless, this single sound that begins the section defines the space in which the rest of it exists, even as sounds become more intimate and less reflective. In fact, this is exactly what happens as we approach 3:20; the vocalisms gradually change from a shout to an extremely close whisper. Although the reflections that characterized the beginning of the section are no longer present, the listener still feels as though these close, intimate sounds continue to take place in this enormous composed space. Thus, the perception throughout these first 30 seconds of the segment is not of a changing space, but rather of motion and localization variance within the space.
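For readers who want to connect this percept to studio practice, the relationship between reverberation time and the impression of an enormous enclosed space can be sketched with a crude convolution reverb. This is a generic illustration in Python, not a claim about how Takemitsu's reverberation was actually produced in 1956 (which would have involved analog means such as an echo chamber or tape).

```python
import numpy as np

def noise_reverb(x, sr=44100, rt60=4.0, mix=0.5, seed=0):
    """Convolve a dry signal with exponentially decaying noise, a crude
    stand-in for the dense reflections of a large enclosed space. rt60
    is the time taken for the tail to decay by 60 dB; long values evoke
    a hall or cistern, and the dry/wet mix affects perceived distance."""
    rng = np.random.default_rng(seed)
    n = int(sr * rt60)
    ir = rng.standard_normal(n) * 10.0 ** (-3.0 * np.arange(n) / n)
    wet = np.convolve(x, ir)[:len(x)]
    wet *= np.max(np.abs(x)) / (np.max(np.abs(wet)) + 1e-12)  # match level
    return (1.0 - mix) * x + mix * wet
```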

The introduction of a second sound-object field just after 3:20 reinforces the invariance of the composed space of the work. A repeating series of "ai" vocalisms (which almost sounds like a frog jumping around) is seemingly placed far away from the listener within the space, contrasting the intimacy of the female vocalisms continuing from the beginning of the section. These repeating vocalisms gradually shift toward and back away from the listener, again displaying the enormity of the imagined space in which the segment exists. Eventually, these vocalisms remain toward the edge of the space, circling the listener from afar. The result is two clear fields of sound objects, one that exists in the intimate space near the listener and one that exists at the edge of the imagined space within the composition. The two fields are separated by a divide in the composed space; each field is able to cross this divide, but neither inhabits it. A visualization of this space is shown in Figure 4.11.

Figure 4.11: Space in Takemitsu’s Vocalism Ai (Ending)

Thus, we are left with something that resembles the concept of textural streaming that was discussed earlier. However, instead of spectral space or variations in spectromorphological quality, the element that allows for stream segregation is the spatial divide between the streams. Certainly, there are spectral differences between the two sound-object fields utilized in this section, but the most salient aural characteristic is the sense of space that they invoke.

What holds this segment together as a cohesive unit? Here, we return to the concept of source bonding. Previously, source bonding was used to connect multiple sound objects or fields of sound objects to one another through the perception that they share a generating source sound (like sound-object fields generated from singing or water sounds). However, we also might consider source bonding in relation to the composed space of a work. More precisely, the sounding space that sound-object fields inhabit may be considered a type of source itself, and we can link sound objects and fields not only through source sounds but through source spaces. In this particular situation, both streams of vocalisms clearly evoke the same composed space, even though one stream generally inhabits an intimate space, while the other inhabits a remote space. Each of them exists at one point or another in both sub-spaces, even if only for a brief moment, jointly defining the entire composed space of the work. Thus, even though the two streams are audibly segregated from one another via a variety of spectromorphological factors, the invocation of the same composed space allows for perceptual coherence of the segment.

This process is especially effective at this point in the piece, given that it is the final segment of the work and serves to create a sense of completion or ending. This section projects a sense of completion in that it emphasizes both extremes of the composed space of the work: the most intimate and the most remote sub-spaces of the total space. Furthermore, because the spatial divide that separates these two sub-spaces remains spectromorphologically uninhabited throughout this segment, the listener retains a clear sense of the totality of the space utilized throughout the work. The brief transitions that each stream takes through this spatial divide in order to arrive at the intimate and remote spaces further emphasize and recapitulate the sense of space evoked throughout the entire work. Thus, space and spatiomorphology not only play a role in creating aural interest and aiding in stream segregation, but the use of space actually helps imbue individual segments in Vocalism Ai with a sense of formal function.

Concluding Thoughts

Throughout this chapter, we have observed that many of the processes that underlie the perception of small-scale form and structural function have to do with the deployment and behavior of specific sound objects and sound-object fields within a piece of electronic music. Whether sections function as introductory, stable, transitory, unstable, concluding, etc., seems to have as much to do with the way their constituent sound objects are treated as it does with the temporal position of the segment in the work. In other words, introductions in electronic music might not be understood as such simply by virtue of coming first, but also because they tend to exhibit certain spectromorphological behaviors (such as cyclical motion and texture cohesion, for example).

Denis Smalley's theory of spectromorphology provides a number of useful tools that can be extracted and manipulated in order to fit our analytical needs. While it is undoubtedly true that any given work necessarily exhibits all elements of spectromorphological consideration (simply because the theory describes spectral attributes of sounds that are inherently present), as with parametric analysis, the analyst must choose those spectral processes that seem to be the most salient in a given piece. In each piece that we looked at, the spectromorphological elements that we examined were chosen because they were aurally salient and seemed to play an important role in the behavior of the segment's sound objects. Thus, even the act of choosing descriptive tools from Smalley's theory is itself an act of subjective analysis, aiding the analyst in injecting elements from the subjective listening experience into the objective analysis of the work.

First, we examined the role of gestural surrogacy, the perception of a gesture's closeness to or distance from a physical, generating gesture, in relation to both Jean-Claude Risset's Sud and Jon Fielder's Lösgöra. Gestural surrogacy played an interesting role in internal segments of both works, but the processes employed were vastly different. In Sud, gestural surrogacy served as a means of foreshadowing and discovering the generating gesture of the segment, leading to a reveal of this object at the crux of the work's form. In Lösgöra, the physical gesture served to provide an initial basic sonic idea, the aural possibilities of which are explored for the rest of the section. By linking sounds in these pieces together through a shared physical generating gesture, we are able to achieve the perception of sectional cohesion while at the same time maintaining compositional interest in the interior segments of a work.

Texture and motion played an extremely important role in supporting the formal functions of segments within John Chowning's Stria and Elainie Lillios's Threads. In Stria, we saw how changes in two different dimensions of textural motion created a process that underlies not only the first section of the work, but also the piece as a whole. Perceptually, the texture motion trajectory of the work is a closed loop, ending where it began and providing the listener with a sense of movement throughout the piece. In Threads, motion helped to define the introductory nature of the segment. While on the surface there appear to be four unrelated unidirectional motions, these motions can be grouped into higher orders of motion. At the second level, the unidirectional motions pair off to become two reciprocal motions. At the third level, the two reciprocal motions can be further grouped into a single cyclic motion, allowing for the perception of motion while at the same time providing the stability that an introductory section often exhibits. Thus, texture and motion allow not only for compositional interest, but in fact they imbue their respective segments with a sense of function.

Natasha Barrett's Mobilis in Mobili revealed the ways in which source bonding, the cognitive linking of sound objects or sound-object fields through a perceived shared generative sound, can help to define not only stable sections but also transitory sections. Though the first two main segments of this work share similar textural processes and streams, one might initially understand them as juxtaposed (instead of connected by a transition) because of the seemingly sudden intrusion of a new sound-object field. However, the short interlude between these two sections actually acts as a transition through the use of a chain of interconnected source bondings. In this way, source bonding not only acts as a way to perceptually link sound objects, but it also provides for structural and functional coherence.

Finally, we examined the evocation of space and spatiomorphology in relation to the final segment of Toru Takemitsu's Vocalism Ai. We observed that the emphasis on the extremes of the composed space of the work (both the intimate and the remote) helped the section to truly function as culminating. Furthermore, the avoidance of the perceived spatial divide between these two regions also added to the sense of an ending that the segment provides. We also noted that space itself can be understood as another vehicle for source bonding. Typically, source bonding takes place because sound objects or sound-object fields are perceived as sharing a generating source sound. However, we can also understand sounds to inhabit the same composed space, therefore linking them through this perception. Space, whether composed (internal, external) or listening (personal, diffused), is necessarily present in all forms of musical listening. A piece inherently requires a space in which to be heard, and it is unavoidable that a work will invoke a sense of space, whether intended or not. However, Vocalism Ai does not simply imply a sense of space; rather, it clearly uses space in order to express its musical structure and discourse.

Thus, the analysis of individual sound objects can work in tandem with parametric analysis to create an effective and critical model of the experience of listening to a work of electronic music. Parametric analysis allows us to suggest possible points for structural segmentation, while the analysis of the behavior of sound objects and sound-object fields within those particular segments helps define the function of those segments. While this chapter has certainly not examined all possible formal functions that can be attributed to segments based on the spectromorphological behavior of sound objects (far from it, in fact), it is nevertheless a good start in extending electronic music analysis from the what to the how.

CHAPTER 5

ELECTRONICS AND LIVE INSTRUMENTS

Toward New Repertoire

In Chapter 3, we examined ways to approach the segmentation of electronic music from an experiential standpoint. Chapter 4 took a similar approach, except the focus was on how the individual segments functioned within the musical discourse of a given work. Each of these chapters exclusively examined acousmatic works of fixed media (often referred to as "tape music"), and while these pieces were distinct from one another in a variety of ways, they all might be said to share a similar musical aesthetic. This may lead us to wonder how we might extend some of these techniques into other repertoires of electronic music, including non-acousmatic electronic works that feature instruments.177

177 The introduction of an instrument necessarily causes a shift from the acousmatic to the traditional (non-acousmatic) listening situation, at least in live performance. While the electronics component of the work might remain unseen, the presence of a live instrumental performer will inherently effect this change.

How does the introduction of an instrument change the ways we perceive musical structure and function? Is it necessary to distinguish between "instrument plus fixed media" (tape) and "instrument with live processing," or can analytical techniques utilized for one of them effectively be used for the other? Furthermore, does the reintroduction of a musical score (at least for the live performer(s), if not for the electronic part as well) change the ways in which we might approach the analysis of this music? Surely a score does give the analyst some information about what the instrument is playing, but it may or may not actually tell us how the music will sound. Depending on the nature of the electronics involved in the performance, the notated score might be more or less useful, but ultimately the analyst will need to decide for what purposes it can be used in understanding and analyzing the work.

These questions, among others, will be the primary focus of this chapter. We will examine two works of electronic music featuring instruments and electronics, and we will extend the experiential and analytical techniques developed in previous chapters. The first work, Synchronisms No. 6 (1970) for piano and fixed media electronics by Mario Davidovsky, will be examined through a segmentational parametric analysis. We will explore the second work, Kaija Saariaho's Près (1992) for cello and live electronics, through the use of sound objects and their spectromorphological properties and transformations.

Instruments with Electronics

Although we have so far examined only fixed media acousmatic works of electronic music, works for instruments with electronics have always been an equally important part of the repertoire. Not only do they offer different musical possibilities and limitations to the composer, but these works also often provide an unfamiliar or non-specialist audience with something to conceptually hold onto during a performance. For many audience members, whether musical experts or not, the idea of turning the lights off and listening to loudspeaker playback during a concert is clearly foreign. In this acousmatic situation, normative musical rituals are broken down; while this breaking down might be considered one of the strengths of acousmatic electronic music, it can undoubtedly cause both the listener and the analyst to feel some uncertainty. Thus, the introduction of something familiar, like a performer playing an instrument in real time, can provide the listener with a conceptual foothold while simultaneously increasing the breadth of musical and compositional possibilities.

The ways in which we choose to analyze works for instrument and electronics will largely depend on a number of factors. Perhaps the most salient factor is the relationship between the live instrument and the electronics. (The type of electronics present, recorded electronic accompaniment or live processing, may also affect our analytical choices.) Often, the electronic portion of a work provides an accompaniment or backdrop, and the instrumentalist and the electronics seem to work as a sort of musical duo.178 The electronics may be fixed media or live processing, but the key is that the perception of the overall musical texture is that of two distinct entities. In contrast, other works featuring instruments with electronics tend to form one continuous texture, as if the live instrument is being enhanced by the electronics but not accompanied by them. This is not to say that the electronic portion of the work will never be perceived on its own, but rather that its role is primarily to add spectromorphological content to the live instrument(s) rather than to serve as a backdrop against which the live instrument plays.179 Of course, it is not necessary that a piece maintain one of these paradigms throughout; rather, it may shift freely between them as well as the grey area in between. Regardless of where the piece lies on this spectrum between instrument-plus-accompaniment and electronically-enhanced-instrument, it will prove useful to be mindful of its positioning.

178 For a good example of this, see Russell Pinkston's Lizamander (2003) for flute and interactive electronics.
179 Pierre Boulez's Répons (1981-85) for chamber orchestra and electronics often exhibits this paradigm throughout the course of the work.

Instrument and Fixed Media – Mario Davidovsky: Synchronisms No. 6 (1970)

Let us now turn to examining works for instruments and fixed-media electronics. We will first briefly consider two existing analyses of other works for instruments and electronics. In the first chapter, we saw how Judy Lochhead examined the emergent quality of "radiance" in regard to Kaija Saariaho's work Lonh for soprano and electronics. Lochhead also writes about Anna Clyne's Choke (2004) for baritone saxophone and electronics in a manner very much reminiscent of her analysis of Lonh. She argues that the main process at play in generating the form of Choke is "loop morphing," a "morphing process that also entails a sense of transformed recurrence."180

These transformations of "sound things," as she calls them (distinct from sound objects), are responsible for creating a sense of underlying musical discourse and function. "The morphing process generates emergent relations between sound things; that is, a temporally later and new sound thing emerges from preceding things as a consequence of morphing. These changes produce a sense of refreshing (as when a web-browser refreshes) characterized by both continuity and novelty, by continuous differing."181 This type of analysis is very much akin to what we did in the previous chapter, examining individual sound objects or fields of sound objects and the ways that they interact with one another. Unfortunately, it is often difficult to ascertain how or why Lochhead hears transformation between "sound things" in the way that she does; she provides descriptions of how sound things are "timbrally similar" or that one "leads to another," but the way that these sounds cohere as one discrete "loop" remains unclear. The charts and figures that are provided to help visualize and explain these processes are also largely ambiguous; although Lochhead clearly presents the temporal and transformational relationships that she perceives between sound things, it is difficult to imagine how these relationships produce a sense of "spiral morphing." I do not doubt that Lochhead herself perceives this process unfolding throughout the course of the piece, but I do not personally hear it, and I do not find anything in her explanations that would lead me toward this type of hearing. As discussed previously, this is one of the inherent dangers in utilizing an analytical methodology that is so ad hoc and piece-specific.

180 Lochhead, Reconceiving Structure in Contemporary Music: New Tools in Music Theory and Analysis, 163.

181 Ibid., 164.

Elsewhere, Lochhead also analyzes Barbara Kolb's 1985 work Millefoglie for chamber orchestra and computer-generated tape. This analysis revolves around the perception of "texture/timbral types," which Lochhead defines as "generalized types of textural/timbral relations that recur in variation over the course of the piece…the recurrence of these textural types generates a series of referential associations over the course of the piece, pulling the diverse sections together."182 However, the texture/timbral types are nebulously defined, and Lochhead never explains exactly how these types are derived from the sounding experience of the music. Again, the charts and figures presented throughout the analysis clearly show that Lochhead is thinking about the content of the work in a critical way, but there is nevertheless a cognitive gap between these charts and the perception of a series of texture/timbral types as form-generating. Furthermore, although she says that she is talking about texture and timbre, Lochhead really seems to be talking about instrumentation and orchestration. What brief clues the reader is given about the derivation of texture/timbral types generally revolve around which instruments or electronic sounds are present during a given segment. It is undoubtedly true that changes in instrumentation and orchestration result in a perceived change in timbre, but it is perhaps a bit misguided to insinuate that the analysis is about the latter when it is, in reality, about the former.

One important thing that Lochhead notes in this analysis of Millefoglie is the relationship between the score in a work of electronic music and the actual experience of hearing the piece. Even in situations where the electronic part is "notated," this notation does not necessarily provide any aural information to the reader. In Millefoglie, the electronics staff is essentially there to aid the conductor in cueing the players; it would be almost impossible to practically notate the contents of the electronics on musical staves. As Lochhead states, "This 'cueing' role of the notated computer part draws attention to the performance function of scores in general, and suggests some of their limitations for music analysis. The score is not intended as a representation of the piece for the purposes of analysis; rather, it represents those aspects of compositional intention that are sufficient to a successful performance of the work."183 In other words, the reintroduction of some sort of appreciable notation is not necessarily a "fix" for the inherent problems in analyzing electronic music. The score can certainly help us to understand the musical texture and identify pitch- and rhythm-based processes present within the instrumental parts, but the score does not provide a visual analogue for the experience of listening to the piece.

182 Judy Lochhead, "Texture and Timbre in Barbara Kolb's Millefoglie for Chamber Orchestra and Computer-Generated Tape," in Engaging Music: Essays in Musical Analysis, ed. Deborah Stein (New York: Oxford University Press, 2005), 258.

A key element to take away from both of Lochhead's analyses is that in both instances, she considers the totality of the sounding experience as relevant to the analysis. She avoids the temptation to simply analyze the score as though the work were a strictly acoustic one and then examine how it is enhanced by the electronics. While there may be moments in any such work where the electronics or the instruments may seem more important or aurally salient, we should not be overly quick to rely on traditional score-based methodologies simply because we have access to notation. Although scores exist for the works that we will consider in this chapter, I will refrain from referencing them in my analyses. (Consequently, I will still refer to time points in the recordings of the works as opposed to measure numbers in the score.) As the score cannot reproduce the experience of listening to the works being considered, sonic experience will continue to be our first point of contact as we analyze these works.

183 Lochhead, 254.

By many accounts, Mario Davidovsky's 1970 work Synchronisms No. 6 for piano and fixed media electronics is one of the most important early works of its kind. It showed how a piece for solo instrument could be drastically enhanced by the addition of electronics while still maintaining its soloistic quality. As such, much of the piece inhabits a region close to the electronically-enhanced-instrument side of the spectrum, as opposed to the instrument-plus-accompaniment side. We will see, however, that this work also often exists in the grey area between these two poles, and in fact, this fluctuating relationship between the electronics and the instrument is actually a key element in articulating the formal structure of the work. In order to show this, let us return to the process of parametric analysis that we discussed in Chapter 3.

There, we utilized parametric analysis to segment works of acousmatic electronic music, and we will continue to use it for this purpose here (although it will also show other qualities of the work and its constituent segments, as we will see). As before, we will first examine the parametric intensity graph and the 15-5 value-change graph for each parameter and discuss the possible formal implications of each. We will then examine the composite value-change graph and compare it with the individual graphs in order to reveal some more global suggestions of what the phenomenological perception of the work’s form in real time might be.
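Before proceeding, it may help to make the mechanics of these graphs concrete. The short sketch below (in Python) is purely illustrative and rests on stated assumptions rather than on any existing analysis software: it assumes that "15-5" denotes a 15-second window advanced in 5-second increments, that each window's value is the total change in rated parametric intensity within it, and that the contour data and helper functions (sample, vc_graph) are hypothetical constructs of this sketch.

    # Illustrative sketch of a value-change (VC) graph. A "contour" is a
    # sorted list of (time_in_seconds, intensity) ratings on the 1-5 scale
    # made while listening; stepped parameters hold their previous value,
    # transitional parameters are linearly interpolated between ratings.
    def sample(contour, t, stepped=False):
        prev_t, prev_v = contour[0]
        for cur_t, cur_v in contour[1:]:
            if t < cur_t:
                if stepped:
                    return prev_v  # hold the last level until the leap
                frac = (t - prev_t) / (cur_t - prev_t)
                return prev_v + frac * (cur_v - prev_v)
            prev_t, prev_v = cur_t, cur_v
        return prev_v

    def vc_graph(contour, duration, window=15, hop=5, stepped=False):
        """Total intensity change inside each sliding window."""
        points = []
        t = 0
        while t + window <= duration:
            values = [sample(contour, t + i, stepped) for i in range(window + 1)]
            change = sum(abs(b - a) for a, b in zip(values, values[1:]))
            points.append((t, change))
            t += hop
        return points

    # A leap from intensity 1 to 5 at 4:00 in a 7:00 work:
    contour = [(0, 1), (240, 5)]
    print(max(c for _, c in vc_graph(contour, 420, stepped=True)))   # 4 at the leap
    print(max(c for _, c in vc_graph(contour, 420, stepped=False)))  # ~0.25 per window

Under these assumptions, a composite value-change graph would simply be the pointwise sum of vc_graph outputs across all chosen parameters, which is consistent with the composite arithmetic discussed later in this chapter.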

As before, I have chosen four parameters for the analysis of Synchronisms No. 6. These parameters were selected after multiple open and self-reflective listenings of the work, and they represent what I believe to be four of the most aurally salient sonic parameters. Because the work is for instrument and electronics and not for electronics alone, a number of new sonic parameters become available, and I have sought to demonstrate some of these here as well. The parameters chosen are spectral unity, texture streaming, electronic salience, and resonance. Each will be explained and examined individually in the analysis that follows.

Spectral unity, the first parameter that will be examined, refers to the extent to which the live instrument and the electronics are perceived as timbrally/spectrally unified. In other words, to what degree does the listener perceive the two as a single electronically-enhanced "super instrument" (an enhanced piano in this case) as opposed to two distinct parts? The more spectral content the two parts share, the more they will be perceived as one unified sound source. In contrast, the more that the spectral content of the two parts differs, the more they will be heard as two distinct and unrelated musical elements. The parametric intensity graph and 15-5 value-change graph for this parameter can be seen below in figure 5.1.184 Recall that these charts represent nothing more than my own personal perceptions of the experience of listening to the piece, and that each listener would likely get subtly different results depending upon the parameters chosen, listening background, acoustic environment, playback system used, etc.

In examining the parametric intensity graph, the first thing we might notice is that it is entirely stepped; it does not transition between states of intensity, but rather seems to leap quickly from one to another. Furthermore, except in one noticeable instance, these jumps in parametric intensity are always to an adjacent level of intensity, which creates as smooth a transition as possible between the states.185 Beyond the stepped nature of the shifts in parametric intensity, the parametric intensity graph seems to suggest two distinct formal segments, each featuring a clear and similar underlying process. The first suggested section (from 0:00 – 3:50) features a sort of inverted arch, wherein parametric intensity gradually decreases from 5 to 2 and then reverses the process back to the initiating intensity level. (Note that if the poles were arbitrarily reversed, this would simply be a "normal" arch, but the resulting perception would nevertheless remain the same.)

184 Information about the meaning, derivation, and labeling conventions of these charts can be found at the beginning of Chapter 3.

185 It may simply be a feature of this specific sonic parameter that it is difficult to perceive smooth transitions between levels of parametric intensity, while sudden changes in intensity are readily perceived.

Figure 5.1: Davidovsky: Synchronisms No. 6 – Parametric Intensity Graph and 15-5 VC Graph for Spectral Unity

The second segment, beginning just before 4:00, has a similar but noticeably compressed parametric trajectory. It shares the same arch structure, but it covers a much smaller range of the intensity spectrum. It is also interesting to note here that the second segment begins at the lowest point of parametric intensity that was reached by the first segment, as if it is picking up where the first segment left off, completing the full range of the intensity spectrum. It is clear that this specific graph suggests two large parts, but whether these segments are similar (AA') or juxtaposed (AB) is not entirely clear. On one hand, they seem to have a similar parametric trajectory; on the other, the trajectory of the second segment is quite compressed, and it occupies a much different range of the parametric intensity spectrum. At this point, deciding between these two interpretations is not entirely important. Rather, we will reexamine this question after we have explored other sonic parameters.

The value-change graph for spectral unity further shows the clear division of this parameter into two perceptual units, featuring a strong formal divide around 3:55. This is by far the most marked moment in terms of overall parametric change, suggesting segregation at this point. Furthermore, the value-change graph reiterates the stepped nature of the parametric intensity graph, as we can see a number of sudden peaks and valleys; the parameter is always undergoing either sudden change or absolute stasis. Ultimately, the parametric intensity graph and the 15-5 value-change graph both point toward this parameter having a two-part structure. We will consider this as we examine other parameters.

The parametric intensity graph and 15-5 value-change graph for the second parameter, texture streaming, can be seen in figure 5.2. Recall from Chapter 4 that texture streaming is the spectromorphological attribute of a sound object or field of sound objects relating to whether it is perceived as a unified texture or a series of stratified textures. This stratification can occur either because of spectral spacing or timbral differentiation. (We also observed that texture streaming can be caused by spatiomorphological manipulation, such as in Takemitsu's Vocalism Ai.) While this may initially seem to be the same sonic parameter as spectral unity, there are clear distinctions to be made. Spectral unity deals with the degree to which an instrument and electronics are perceived as being one cohesive spectromorphological entity, a sort of electronically-enhanced instrument. Texture streaming does not inherently involve any relationship between an instrument and electronics; a spectrally-unified sound could exhibit either streamed or unified textures. Likewise, a spectrally disparate relationship could also be texturally streamed or unified, depending on the overall perception of texture in which they participate. To put it as simply as possible, we might say that spectral unity has to do with the spectral quality of sound objects and sound object fields, whereas texture streaming has to do with the behavior and musical deployment of those objects.

Figure 5.2: Davidovsky: Synchronisms No. 6 – Parametric Intensity Graph and 15-5 VC Graph for Texture Streaming

We can observe a number of similarities between the parametric intensity graph for texture streaming and the graph for spectral unity. Most notably, there is a drastic change in parametric intensity right around the 4:00 mark; in this instance, it has been notated as the largest change possible, from an intensity value of 1 to 5. Furthermore, not only do these segmentation points align between the two parameters, but the general shape of the parametric trajectory that makes up the first segment is also nearly identical. In this instance, it is represented as a normal (non-inverted) arch, but this is simply a product of the way I chose to orient the graphs. If I had chosen textural unification as representing a high parametric intensity value, the shapes would be congruous. This similarity suggests that there is indeed some sort of spectromorphological process that underlies this first segment. A key difference between these two parameters, however, is that the sharp changes in parametric intensity that were present in the spectral unity parameter have been replaced with slow, constant fluctuation. The overall shape remains the same, but the method of articulating that shape has been altered. As mentioned before, this may simply be due to the perceptual nature of these two parameters (that one might be more easily perceived as transitional), but it is nevertheless a marked experiential difference. The portion of the parametric intensity graph from 4:00 on is less clear, however. While there is a clear segmentation point that begins this section, it is difficult to judge whether what follows is one unified section or perhaps multiple shorter sections. Overall, one might be able to argue for an extremely modified inverted arch trajectory, which would align with all other trajectories that have been presented so far. However, the return phase of the arch is never really made manifest, instead leaping back toward the original parametric intensity value. This leap, although far from the largest we have seen, is certainly a marked local moment, suggesting that the final 50 seconds or so may, in fact, be closing material.

The 15-5 value-change graph helps to elucidate some of these points, especially about the portion of the piece after 4:00. Again, we see an extremely marked maximum at 4:00, initiating the section, followed mostly by stable segments punctuated with local maxima. The maximum at 6:10 is quite prominent in that it is the second-highest parametric change value throughout the entire work, and it is surrounded by entirely stable parametric intensity on either side. This does seem to suggest that, at least in terms of this parameter, the final 50 seconds can be understood as their own segment. If we go back to the parametric intensity graph, this proposition can be further supported in that leaps truly seem to be uncharacteristic of this particular parameter. Texture streaming intensity in this work seems to change through gradual transition, and sudden large changes of intensity seem to trigger perceptual segregation. Again, this assertion must be verified by comparing multiple salient parameters, but it will be useful to consider the segmentation of the work after 4:00 as we continue.

Electronic salience, the third parameter chosen for the analysis of this work, refers to the extent to which the listener perceives and is aware of the electronic component of the work. While this is again related to the previous two parameters that we examined, it does not refer to the relationship between the timbral or spectral components of the instrument and electronics parts (spectral unity), nor does it refer to the cohesion or segregation of musical textures (texture streaming); it simply answers the question, "How perceptually present are the electronics?" The parametric intensity graph and 15-5 value-change graph for electronic salience are shown here in figure 5.3. As before, there are clearly similarities between these graphs and those that have preceded them. Most importantly, we again see a very marked leap in parametric intensity just before the 4:00 mark, in this case the maximum distance of four, from parametric intensity value one to five. The first large segment, from 0:00 – 3:55, has an arched structure similar to those we have seen in the previous two parameters. However, whereas spectral unity projected an arch trajectory composed of distinctly stepped changes in intensity, and texture streaming featured more transitional changes, the electronic salience parameter shows a hybrid arch made up of both transitions and leaps. Furthermore, it is more static overall than the previous arch trajectories we have seen, and the arch itself is not as symmetrical. The ascent to the highest value of four takes roughly three minutes and twenty seconds, whereas the return from that highpoint to the initial value takes only half a minute. This creates an arch wherein the changes in parametric intensity are mostly pushed toward the outside portions, while the middle is much more static.


Figure 5.3: Davidovsky: Synchronisms No. 6 – Parametric Intensity Graph and 15-5 VC Graph for Electronic Salience

The segmentation of the portion of the work after 4:00 is again unclear in the parametric intensity graph. While an obvious segmentation point precedes it, the decision of how to perceptually group the rest of the work is made difficult by a number of leaps. In this instance, the most perceptible and logical grouping of this segment is probably into two distinct units, one from around 3:55 – 5:20 and another from 5:20 to the end of the work. The leap in parametric intensity by a value of three certainly suggests segregation at 5:20 rather than cohesion. Furthermore, the section after 5:20 clearly articulates one cohesive parametric trajectory, a stepped ascent from an intensity of one to three to complete the work. While the preceding unit from 3:55 – 5:20 presents a less salient parametric trajectory, we might argue that the very strong adherence to the criteria of segregation and cohesion on either side of this segment effectively renders it cohesive in a sort of perceptual process of elimination.

The 15-5 value-change graph visually clarifies the segregation of the work, especially this portion from 3:55 onward. We can clearly see two very pronounced local maxima at both 3:55 and 5:20, again suggesting points for segmentation. Although there are some maxima present within this segment, we can see that within the larger context of the parametric change taking place around them, they might not be perceived as segmentational but rather as musical features of this section. In other words, while there is parametric change taking place between 3:55 and 5:20, the more drastic changes that define these boundaries may trump the lower-level change within them. Thus, while the segment itself may be more active than those that surround it, and it might contain a number of sub-segments, it ultimately coheres as one section.186 The first and third segments presented by this parameter (0:00 – 3:55 and 5:25 – 7:00, respectively) are also readily seen on the 15-5 value-change graph. We can also see the uneven nature of the opening arch trajectory that we observed in the parametric intensity graph; the long segment of no change in parametric intensity beginning just before 2:00 shows the relative "flatness" of the arch. Considering only the first three parameters now, we observe that they all agree that the point around 3:55 or 4:00 is a strong candidate for perceptual segmentation, and that a similar (though less strong) point might exist around 5:25. We have also observed a similar arched parametric intensity trajectory present in the first segment of each parameter.

186 Additional analytical methods could help to understand this. For example, one might choose to examine the spectromorphological aspects of the sound objects and sound-object fields utilized during this segment in order to understand cohesion at the surface level. The present analysis is primarily concerned with segmentation, however.

Let us consider the final sonic parameter that was chosen for Synchronisms No. 6: resonance. While resonance is often conflated with reverberance, they are indeed different.187 In this case, I use resonance to describe the extent to which the resonant frequencies of a given sound object (the formants, in technical terms) are perceptually present within the sound. More pronounced formants can cause a variety of aural effects like "brighter" or more "tinny" sounds; in contrast, sound objects without strong formants can often sound muffled or dull. There are undoubtedly other physioacoustical phenomena at play that relate to these processes, but for our purposes, we will focus on the resulting quality of resonance. The parametric intensity graph and 15-5 value-change graph for resonance are shown in figure 5.4.

187 Reverberation deals with sound reflections and decay times and has more to do with the sense of the space a sound object inhabits rather than qualities of a specific sound object itself.

As we can see, the parametric intensity graph does not suggest segmentation points as cleanly as the first three parameters that we examined. In fact, determining any segmentation points at all from the parametric intensity graph is a difficult task. One point that may be a candidate is around 5:20, which does align with some of the segmentation points that were observed for other parameters. Notably, however, the sense of strong formal segmentation just before 4:00 is somewhat lost in this parameter. There is still a noticeable leap in parametric intensity by a value of two at this moment, but given that the overall nature of this parameter seems to be more volatile, this moment is not as marked as it is in other parameters. Furthermore, there is little to no sense of a clear parametric trajectory that defines any single segment. We can observe small local gestures, like the descent in parametric intensity from 0:45 – 1:30, but these micro gestures do not seem to group into anything larger. Certainly, any semblance of the arch trajectories present in previous parameters is gone here.

The 15-5 value-change graph is similarly unhelpful on its own. We can see that the highest point of change in parametric intensity occurs around 5:25; this is the only value-change graph of the four that were examined that does not show a clear high point around 4:00. While a local maximum still exists at this 4:00 mark, this peak is hardly noticeable within the larger context of the parameter. It is clear that change and volatility are not novel features of this parameter; rather, they are the norm. This might lead us to posit that the perceptual role of this parameter is not to suggest points for segmentation or drive formal functions itself, but to serve a supporting role for other parameters, boosting parametric change where salient moments occur in other parameters. As always, while it is interesting and important to consider these sonic parameters individually, the fullest understanding of the musical structure as experienced through listening will likely come from a comparative analysis of all the graphs as well as examining the composite change that takes place over the course of the work.

Figure 5.4: Davidovsky: Synchronisms No. 6 – Parametric Intensity Graph and 15-5 VC Graph for Resonance

It will come as no surprise that the composite 15-5 value-change graph, shown in figure 5.5, features an incredibly marked segmentation point right around the 4:00 mark. This moment is actually stronger than any moment in the segmentational analyses presented in Chapter 3, and it is further solidified in that it involves relatively large changes in parametric intensity in each of the four parameters that we examined. In other words, it is not one or two of the parameters that seem to drive the segmentation at this moment, but in fact it is all four of them.


Figure 5.5: Davidovsky: Synchronisms No. 6 – Composite 15-5 VC Graph

We can also see another notable maximum around 5:25, strongly present in two parameters and supported by a third, as well as an even lower local maximum at 6:05, which is supported by very small amounts of parametric change across all four parameters. It is quite clear that the first four minutes of the work constitute a cohesive segment in this model; there is a very clear ceiling of composite parametric change around two for this entire segment, providing for a remarkably stable section. The question, as it has been throughout the entire analytical process, is how to interpret the work after 4:00. Based on this model, it would be difficult to argue that 4:00 – 7:00 represents one single cohesive segment due to the relatively pronounced peaks in parametric change at 5:25 and 6:05.188 My own reading is that 5:25 is a true segmentation, while 6:05 is not, for two main reasons. The first of these reasons is simply audible and experiential salience. When I listen to this piece, I hear the moments around 5:25 as marked and important, especially in regard to the salience and resonance of the electronics component. Much of this work seems to revolve around the morphing character of the electronics and their relationship to the piano, so this moment strikes me as important within the musical discourse of the work. Second, the seemingly salient parametric change at 6:05 may be "artificially inflated" in that all parameters undergo change in this section through leaps in parametric intensity (stepped) instead of transitioning. (Examine each parametric intensity graph to verify this.) In our system, the lowest possible parametric change value that a stepped parameter can project is 1, since there are no half values for parametric intensities. Thus, because each parameter is stepped, the lowest possible composite change value if all parameters are changing during a given period is 4, whereas the lowest composite change value if all parameters are transitory is 2. Much of the first part of the piece displays this difference; for example, the span around 1:00 features composite change across at least three different parameters at all times, but because these changes are transitory and not leaps, the composite value hovers around 1.5. If these changes were stepped, the composite would be at least 3. Thus, while 6:05 might appear to be a candidate for segmentation, the nature of the underlying parametric change combined with the lack of aural salience suggests that it might be an analytical red herring. This model therefore suggests that the structure of the work might best be understood as either a contrasting two-part design (AB) divided around 4:00, or else a three-part design with an additional segmentation point at 5:25 (perhaps ABA' or ABC). In the first scenario, the B section would be comprised of two prominent subsections, whereas in the second, each of these subsections is a higher-level section unto itself. Ultimately, the distinction is semantic, as it does not change the underlying parametric trajectories that influence our hearing.

188 One might make the case that the first peak at 5:25 is driven almost entirely by only two of the parameters, and that the shorter peak at 6:05 features very low levels of parametric change across all parameters that only give the appearance of salient change.
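To make the stepped-versus-transitional arithmetic above concrete, here is a trivial numeric check, again under assumed conventions drawn from this chapter (a stepped parameter moves by whole intensity levels only, so its smallest nonzero change is 1; a transitional parameter can register as little as half a level over the same span):

    # Hypothetical illustration of the composite-change "floors" cited above.
    PARAMS = 4
    MIN_STEPPED = 1.0        # no half values: a stepped change is at least 1
    MIN_TRANSITIONAL = 0.5   # a gradual transition can register as little as 0.5

    print(PARAMS * MIN_STEPPED)       # 4.0 -> floor when all four parameters leap
    print(PARAMS * MIN_TRANSITIONAL)  # 2.0 -> floor when all four transition
    print(3 * MIN_TRANSITIONAL)       # 1.5 -> cf. the span around 1:00 above

This is why uniformly stepped parameters can make a span look more segmentationally active than it actually sounds.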

We can see that the method of segmentational analysis utilized throughout Chapter 3 in regard to acousmatic fixed media works continues to prove useful in works for instruments and electronics. In fact, the addition of a live instrument into the mix actually allows for new and varied sonic parameters that describe the relationship between the instrument and electronics. As a final experiment, let us consider how a spectromorphological understanding of sound objects and transformational processes might help us to understand another work for live instrument and electronics.

Instrument and Live Electronics – Kaija Saariaho: Près (1992)

We will now turn our attention to a discussion of works featuring instruments and live electronics. By “live” electronics, I refer to electronics created in real time during the performance, as opposed to pre-recorded (fixed) electronics that an instrument plays on top of.

Typically, this is done with some sort of real-time signal processing program that runs during the performance, and depending upon the complexity of the program (the "patch"), a second performer may be necessary to facilitate running the electronics and cueing events. While this may seem like a minor distinction, the analytical implications of the difference between fixed-media and live electronics are actually fairly great. Perhaps the most important consideration has to do with consistency. Depending upon the nature of the electronics and the type of signal processing employed, it may be practically impossible to precisely reproduce any given performance. While some live processing might remain essentially "fixed," like reverberation or frequency modulation, other processes typically have an inherent amount of randomness built in, such as granulation or convolution filtering.189 Thus, even performances of the same work that feature the exact same instrumental component (even if it were a recording of the instrumental part) might differ wildly simply because of the randomness built into the processing. When considering these works, then, it is entirely possible that a listener may perceive different structures and functions when listening to different performances, depending upon the results of the live processing.

189 Technical descriptions of these processes are not necessary here. The interested reader should consult the standard text on electronic sound production and signal processing, The Computer Music Tutorial by Curtis Roads, for a full explanation of these and other processing techniques.
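To see why a process like granulation resists exact reproduction, consider the deliberately simplified sketch below (plain Python; the buffer, grain counts, and function are my own illustrative constructs, not any composer's actual patch). Each grain's read position and output placement are drawn at random, so two renders of the same input differ:

    import math
    import random

    def granulate(source, n_grains=200, grain_len=2205):
        """Toy granulation: overlap-add short, Hann-windowed slices
        ("grains") of the source buffer at randomly chosen positions."""
        out = [0.0] * len(source)
        for _ in range(n_grains):
            src = random.randrange(len(source) - grain_len)  # where the grain is read
            dst = random.randrange(len(source) - grain_len)  # where it is written
            for i in range(grain_len):
                env = 0.5 - 0.5 * math.cos(2 * math.pi * i / grain_len)  # Hann window
                out[dst + i] += env * source[src + i]
        return out

Fixing a random seed would make renders repeatable, but that is precisely the control that live patches typically do not exercise; every performance is a new draw from the same distribution of possibilities.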

A second related problem, more practical than theoretical, is that the lengths of sections and events in these pieces are often mutable. While traditional acoustic music might be mutable in the sense that a performer might choose a slightly different tempo, opt not to take a repeat, etc., it is often the case in works for instruments and live electronics that the performer is free to move at his or her own pace. These works are not often set in an actual meter (although they certainly can be), but rather the performer typically has the opportunity to take considerable liberty with the pacing of the work. Again, this creates the problem that no two performances of a given work within this subgenre will be alike, and in fact, any two performances may exhibit a high degree of difference from one another. Thus, keep in mind that the analysis presented in this section is an example of only one specific recording of the work.190 I will continue to reference specific timings, but these may differ from one recording to another.

Let us now consider the final section of the second movement of Kaija Saariaho's 1992 work Près for cello and live electronics. As in Chapter 4, we will examine the use of specific spectromorphological techniques and attributes as they relate to the sound-object fields present throughout the section. (For full explanations of these processes, please see Chapter 4.) Overall, this movement presents a sort of "soloist as ensemble" aesthetic; though it is written for a solo instrument, this movement of Près gives the distinct aural impression of a much larger ensemble, helped in part by the addition of electronics. Because of this phenomenon, certain spectromorphological attributes and processes immediately come to mind as potentially relevant and salient for musical analysis, including texture streaming, source bonding, textural motion, and motion/growth processes. We will examine each of these processes during the final segment of this movement and explore how they individually contribute to a sense of closing function.

190 Dawn Upshaw et al., Kaija Saariaho: Private Gardens, Compact Disc (Ondine 906-2, 1997).

The spectromorphological process of texture streaming is the main vehicle through which this movement achieves its sense of enormity, even though it is only a work for one instrument. Throughout the movement, the cello continuously alternates between playing a constant stream of notes in its lowest register and a stream of notes in a higher register through the use of natural harmonics. Although these streams each originate from the same live instrument, we will soon see that they are neither created nor perceived equally. The fact that sounds come from the same source does not necessarily mean that they belong to the same texture. Recall that texture streaming can occur either due to a perceived difference in spectral content (resulting in a timbral difference) or through a separation in spectral space (register). In this case, the latter certainly applies, but one could also make a convincing argument for the former. Depending on the specific harmonics utilized, there can be quite a timbral difference between naturally fingered notes on a string instrument and their harmonic counterpart, even if they sound the same pitch.

The resulting aural effect is at least two clear textural streams emanating from the same instrument. Although they are clearly linked through source bonding (the perceptual grouping of sound objects via a shared sonic source), they are nevertheless differentiated by their spectral content and spatialization.

The separation into textural layers of the seemingly continuous, run-on stream of notes present throughout the movement allows the analyst to utilize a number of spectromorphological descriptions in examining the work. Perhaps the most salient process that underlies the structure and function of the segments in this work is that of textural motion. (Refer to Smalley's chart reproduced in Chapter 4.) Previously, we examined a segment whose sound object fields existed in a transformational web of motion characteristics (streaming, flocking, convolution, turbulence) and continuities (iterative, granular, sustained). In Près, the streams themselves are presented fixed in place, and instead of being transformational, the primary method of creating spectromorphological interest (at least initially) is through the juxtaposition of these types. The final segment, which begins at 2:20, again features texture streams in both the low and high register. The low stream is best classified as turbulent-iterative; its motion is generally unpredictable and seemingly rises and falls at will, and the texture results from a clearly-perceived series of individual sounds. In nearly polar opposition, the high stream is best classified as streaming-sustained. It almost has the effect of being a covering tone in the traditional sense; it does not seem to move anywhere, and it is quite difficult to pick out individual sounds or attacks. Thus, these two streams are aurally opposed in nearly every way.

This is particularly interesting in that the actual rhythm played by the cello that underlies these two layers is just a constant stream of notes. However, the spectral and timbral characteristics of these two registers create a vastly different aural perception, especially when considering the continuity-discontinuity spectrum between iterative and sustained motion.

Although the actual rhythm never changes, the very present and perceptible attacks in the lower register create a distinctly iterative texture. As if to punctuate that, there are moments throughout the work where the cellist is instructed to play col legno (striking the strings with the wood of the bow), which maximally emphasizes the iterative nature of the texture by further bringing out the attack transients. In contrast, the high harmonics tend to ring and their attacks are less pronounced, so even though they are being articulated at roughly the same rate as the low sounds, the listener does not perceive any breaks in the textural stream. Thus, by playing with registral and spectral separation, Saariaho creates two distinct and unique textural streams from one basic rhythmic pattern.

The electronics help to further define this juxtaposition between high and low, sustained and iterative textures. Throughout the work, Saariaho uses a series of live filters to emphasize different notes in the texture, creating a sensation of a sustained texture in the background of the work that is spectrally separated from the high and low streams found in the cello part. The electronics thus also contribute to the sense of ensemble created throughout the work, providing for a sort of third distinct textural stream in the background. This stream spectrally overlaps both the high and low streams found in the cello, but it is differentiated by its spectral content and by its stasis. The electronic layer mirrors the streaming-sustained motion in the high texture of the cello, but it remains its own distinct entity. This field of individual texture streams coheres through a perceptual source bonding, but its elements maintain their separate identities through various methods of spectral separation.
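Saariaho's actual patch is not reproduced here, but the general principle of live resonant filtering can be sketched generically. The following illustrative band-pass bank (standard biquad "cookbook" coefficients; the function names and pitch list are hypothetical) rings at a handful of chosen frequencies, turning whatever the cello feeds into it into a quasi-sustained background layer:

    import math

    def bandpass(signal, freq_hz, q=30.0, sr=44100):
        """One resonant band-pass biquad; a high Q makes it ring at freq_hz."""
        w0 = 2 * math.pi * freq_hz / sr
        alpha = math.sin(w0) / (2 * q)
        b0, b2 = alpha, -alpha                      # b1 is zero for this form
        a0, a1, a2 = 1 + alpha, -2 * math.cos(w0), 1 - alpha
        x1 = x2 = y1 = y2 = 0.0
        out = []
        for x in signal:
            y = (b0 * x + b2 * x2 - a1 * y1 - a2 * y2) / a0
            x2, x1 = x1, x
            y2, y1 = y1, y
            out.append(y)
        return out

    def filter_bank(signal, pitches_hz=(110.0, 220.0, 330.0)):
        """Sum several high-Q bands: energy near each chosen pitch is
        emphasized and smeared into a quasi-sustained layer."""
        return [sum(v) for v in zip(*(bandpass(signal, f) for f in pitches_hz))]

The higher the Q, the narrower each band and the longer its ringing, which is one simple way of producing the kind of static, sustained background layer described above.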

In figure 5.6 I have attempted to visualize the textural paradigm I have identified, comprising three distinct textural streams. This is the paradigm that is active throughout most of the movement. As we can see, the visualization contains three unique textures, each defined not only by its sounding space and spectral content but also by the continuous or discontinuous nature of its component sound objects. Toward the top, we see the streaming-sustained texture of the harmonics represented by the wedge shapes. The square texture at the bottom represents the low notes in the cello, whose discontinuous nature creates a perception of an iterative texture with a turbulent motion characteristic (turbulent-iterative). Finally, behind both of these textures is the electronic layer, wherein the filter creates a sense of sustain and stasis by emphasizing specific pitches and creating an emergent third texture. This is represented by the transparent horizontal bars. Each of these textures is characteristic and distinct from the others, separated either by spectral space, spectral content, or both.

Figure 5.6: Saariaho: Près, mvt. 2 – Visualization of Texture Paradigm

It is this separation that spectromorphologically defines the bulk of the piece, and it is the breakdown of this paradigm that begins to signal its end. Up until 2:20, these textural streams each remain relatively uninterrupted and stable. However, at the initiation of this final section, the complete polarization of the continuity-discontinuity spectrum between iterative and sustained motion has been sounded, and it begins to collapse completely toward the iterative side. Any sense of a continuous texture, especially in the high range of the cello and the electronics, is replaced with discontinuity. By the end of the piece, the sustained third texture created by the filtering procedures in the electronics gives way to a strongly iterative texture comprised of sampled cello sounds, marking a complete reversal in textural character. Similarly, the high harmonics are no longer perceived as an unbroken texture, but rather are only articulated once at the end of each gesture, a sort of iterative maximalism; the harmonics are now spaced so far apart and are played for so brief a time that they almost fall apart as a texture altogether.

Were it not for their presence throughout the rest of the work, the listener might not even be inclined to hear them as a distinct texture in the first place. It is only because they have previously been established as aurally salient that any sense of a high texture stream remains.

All of these changes in texture streaming and textural motion serve to create a sense of closing throughout this section by breaking down the spectromorphological paradigm established earlier in the movement. This final section is initially understood as an A' to the initial A because of the similarity in the deployment of sound objects and textures; therefore, the listener likely expects the established sounding world from A to be maintained in A', or perhaps slightly varied. However, we understand as this segment goes on that any sense of A' expectation that we might have is to be thwarted; essentially, the most accurate functional representation of this section is that A' becomes closing material. In other words, the initial expectation of this section was that it would mirror the musical textures and behaviors of its counterpart that begins the movement, but in fact this expectation is undercut as the textural motions and streams in the cello and electronics parts break down. Not only do the textures break down, but the rate of alternation between them drastically slows to the point that, at the end of the movement, the listener is really only aware of a single textural stream in the middle-low range of the cello. It is not only the changes in character of the textural streams that cause a sense of closure, but also the dissipation of the established textures in the first place. All of these elements combine to effectively create a sense of closure.

Although the structural function of this final segment of the work is largely driven by texture streaming and textural motion, there are undoubtedly other spectromorphological processes at play that aid in creating closure. Another salient attribute that we might examine throughout the closing section of Près is the motion and growth processes that each stream exhibits. (Smalley's table of motion/growth processes was reproduced in Figure 4.6.) We should again be careful to distinguish motion and growth from textural motion. Textural motion, which we just examined at length within this final segment, has to do with how a texture moves; in other words, it describes the behavior of a movement. We might think of it as roughly analogous to an adverb in language, which helps us to describe and characterize an action. In contrast, spectromorphological motion and growth describes metaphorical direction and location, or where something moves. Thus, we could say that a sound moves up or down (motion), that it exhibits textural convolution (textural motion), or perhaps that a flocking sound moves parabolically (textural motion and motion/growth). It will be important when using these terms to clearly articulate whether we are talking about textural motion or motion/growth processes.

The main vehicle related to motion/growth processes that helps in the projection of function in this closing segment is the idea of bi- or multi-directional motion. Whereas the first three types of motion/growth that Smalley allows for (unidirectional, reciprocal, cyclic) move in only one direction at any given time, multidirectional movement is complex and can provide for a variety of different effects. In this work, each of the distinct layers that we examined above has particular types of growth and movement associated with it, creating a motion paradigm along with the texture paradigm. As with texture, the disruption of this paradigm is crucial in signaling to the listener that this final segment is not simply A', but rather that it initiates some new function. In the motion paradigm, the harmonics in the high range and the driving rhythms in the low range of the cello typically exhibit some sort of reciprocal or cyclic motion.191 The background electronics created through the use of filtering are almost entirely static and unmoving, and thus they exhibit what Smalley calls "plane" motion, a subcategory of unidirectional motion. This juxtaposition between active reciprocal or cyclic motion and passive unidirectional motion is the key opposition throughout the entire movement, and it is in this closing section that it is resolved.

As the A' segment begins, we hear the established motion paradigm return (along with the textural paradigm discussed above). Instead of stabilizing, however, the cyclic motion that characterizes the lowest stream sounds as if it begins to careen out of control, expanding and losing its sense of stability, giving way to a type of multidirectional motion. While Smalley does not describe each specific type of multidirectional motion outlined in his table, I believe that the best descriptor for this change in motion of the low stream is "dissipation." The stability that characterized the controlled cyclic oscillation of this stream in the first part of the work is gone, and it has been replaced with this dissipating multidirectional movement. This process in the low stream eventually gives way to a series of five repeated open fifths, each finished by releasing the pressure on the strings and producing a harmonic open fifth at those nodes. Thus, the plane unidirectional motion that characterized the high register ultimately prevails in the instrumental part. However, the cyclic motion is now transferred into the electronics, creating a full separation between the two parts, even though the electronics at this moment feature only sampled cello sounds. In terms of motion, the entire closing section is a process of de-unification of the instrument and the electronics. Initially, the two were unified in that both the electronics and the high range of the cello shared the same type of unidirectional motion. (We might think of this as "motion-bonding" in spectromorphological terms.) However, the transformation in the bass from cyclic motion to dissipating multidirectional motion ultimately allows for the transfer of motion types between the electronics and the bass stream. The result is unity in the cello but disunity between the cello and the electronics in terms of motion. This process is shown here in figure 5.7.

191 Smalley is unclear about when reciprocal motion can become cyclic motion or vice versa. As we saw in Chapter 4, it is possible that individual motion trajectories can be grouped into higher-order motions, allowing for the combination of unidirectional motions into reciprocal, reciprocal into cyclic, etc. The focus of the present analysis is not on higher-order motions, but on individual motion trajectories.

Figure 5.7: Saariaho: Près, mvt. 2 – Transfer of Motion Types in Closing Section

While the initial juxtaposition of these motion/growth types is not "resolved" in the sense that only one remains, it is resolved in that the roles of the instrument and the electronics in terms of motion are now clearly defined. (Considering that this is only the end of the second movement out of three, a complete resolution of the musical discourse would make little dramatic sense.)

Spectromorphological processes clearly play an extremely important role in creating the perception of structural function in the closing section of this movement of Près. The combination of texture streaming and motion/growth processes, especially, helps to define this. In terms of texture streaming, we observed the disruption of the prevailing textural paradigm and the reduction of the texture field to one prominent texture. The established conditions of motion and growth are also in flux during this segment, and the transition from cyclic to multidirectional motion in the low range of the cello provides spectral cover for a transfer of motion between this low range and the electronics part. The result is the same overall field of motion types (in the sense that two streams are unidirectional and one is cyclic), but the cello is now unified in that both the low and high register exhibit the same motion processes. The electronics are again juxtaposed in terms of motion, preparing for the third movement. Each of these spectromorphological processes serves to create a sense of closure throughout this section.

Concluding Thoughts

Throughout this chapter, we have observed that the analytical methodologies developed in Chapter 3 (parametric analysis) and Chapter 4 (spectromorphological analysis) are just as viable for works that feature live instruments and electronics as they are for strictly acousmatic electronic music. We could have guessed at the beginning of the chapter that this would be the case, primarily because both of these methodologies were developed to work with sound itself, irrespective of what (if any) musical genre that sound might fall into. From the perspective of the methodology, it does not "know" anything about the musical aesthetic that it is being used for, nor does it really require the analyst to be an expert in the aesthetic of the music that is being analyzed. I believe that this is one of its main strengths; all it requires of the listener is the ability to listen critically, reflect on what he or she is hearing and the factors that influence it, and listen again while paying attention to those salient factors.

Mario Davidovsky’s Synchronisms No. 6 showed the viability of parametric analysis for segmentation of non-acousmatic electronic music. In fact, it provided one of the clearest examples of a strong segmentation point out of all the analyses presented in this dissertation. It also allowed us to examine some of the inherent differences between stepped and transitory parametric intensity graphs as well as observe some strikingly similar parametric trajectories.

203 Kaija Saariaho’s Près showed how Smalley’s conception of spectromorphology and its processes remains a useful tool for musical and sonic analysis even with the addition of a live instrument.

Perhaps more importantly, our analysis of Près pointed to the fact that the methodologies outlined in this dissertation might prove useful in analyzing strictly acoustic music as well. Our analysis of this work certainly needs the electronics component in order to function and make sense, but it does not rely on technical descriptions of electronic processes. In fact, if the electronics were replaced with another acoustic instrument performing the same processes, our analysis would remain largely the same, and it would be just as convincing. The short remainder of this dissertation will focus on the question of this methodology's viability in other sonic analytical situations.

CHAPTER 6

EPILOGUE

Uncharted Territory

We are now near the end of our analytical journey, and yet there always seems to be so much that was barely touched upon, or perhaps not touched upon at all. This short final chapter will seek to address some of those topics which were left unconsidered in the main body of this dissertation, but it will primarily focus on asking questions rather than providing answers. It will contain neither lengthy analyses nor broad methodological descriptions, but rather less formal thoughts and musings about the efficacy of the material presented in the first five chapters in relation to other genres of electronic and acoustic music. I will also provide a brief summary of and reflection upon the major points made throughout the dissertation and explore which analytical avenues might be fruitful after the completion of this work.

Drone

One genre that I am particularly interested in exploring is drone music, especially the music of Éliane Radigue. While her music (and the music of other drone composers) often squarely fits under the terminological umbrella of "electronic music," these works tend to be vastly different in terms of musical aesthetic from much of the music we have examined throughout this dissertation. As Demers states, "In drones…the use of stasis and noise runs counter to habitual expectations for how elements of musical syntax interact with one another. These elements last too long and are too loud, and they disrupt the sense that music functions as a language by calling attention to physical aspects that music usually asks us to ignore."192 While I do not necessarily agree that drone music is always too loud, Demers's underlying point nevertheless stands: drone music disrupts normative notions of musical syntax. These underlying assumptions about the semiotic function of music, its ability to act as a sign system, were key to the understanding of the musical works presented throughout this dissertation. Often our analyses sought to answer questions of musical structure and function, the what and how of musical grammar. However, drone music simply does not allow for these conventional notions; when a single sound can last for minutes, a sense of progression or teleology is inevitably lost.

192 Demers, Listening through the Noise, 91.

Sound objects might become "closing" or "introductory" simply by virtue of their temporal position within the experience. In other words, there may be nothing spectromorphologically apparent about the closing sound of a drone work that makes it function as such; rather, it might be "closing" for no other reason than that it is the last thing heard.

Of course, one might argue that it is this sense of timelessness and aimlessness that defines drone music in the first place. This sensation is perhaps the primary aesthetic desire for composers of this style of music, and therefore an effective analytical methodology should engage with this aesthetic. Undoubtedly, we could utilize the analytical methodologies that have been developed throughout this project to examine drone music; as previously stated, these methods treat all sounding experiences as fundamentally equal analytical stimuli. The extent to which the exact processes I have outlined will produce something analytically satisfying is currently unclear, but the application of these tools to drone music (especially parametric analysis and segmentation) may prove to be an interesting future project. These tools might even be adapted. For example, one might choose to consider a parametric analysis not of an entire work, but perhaps of a single drone within the work. It is often the case that individual drones in this style are not completely stagnant, but rather evolve over the course of their sonic lifespan. What types of perceptual processes might be brought out if parametric listening is applied in this manner? Perhaps there are discrete, identifiable cognitive units within these large, seemingly aimless swaths of sound. Though this genre was not analytically considered here, there are nevertheless a multitude of interesting and rewarding avenues toward it.

Ambient

Ambient music is another primarily electronic genre that deserves its own spectromorphological and phenomenological analysis. In terms of its overall aesthetic, it is probably one step further removed from the prevailing musical aesthetic of this dissertation than drone music is. As Frank Lehman points out, “The ambient style is often framed as being non-hierarchical, non-teleological, and sometimes even non-intentional, issuing not from the conscious will of a musician but through agent-free processes—algorithmic, aleatoric, dream-inspired, or otherwise. If there is some [ambient music] that indeed floats in a vaporous Kramer-esque193 ‘vertical time,’ with no figure/ground distinction, then formal analysis would seem a tenuous prospect.”194 Indeed, it is difficult to talk about the experience of musical structure in a genre which defies so many expected listening norms.

Lehman and I separately arrived at many similar conclusions, not the least of which concerns form in this and similar repertoires: “form in this repertoire is a function of the way a particular piece plays out (and with) a generic time-course of musical attentiveness…we can take stock of whichever immanent features reveal themselves, in a way that bears fidelity to the listening experience and to musical ‘facts of the matter’.”195 In other words, Lehman and I argue many of the same points, namely that the apprehension of musical form is an active process that relies on the salient aural characteristics of any given piece. By paying attention to the ways in which music and sound are experienced, we can engage the listening process with the analytical process.

193 Jonathan D. Kramer, The Time of Music: New Meanings, New Temporalities, New Listening Strategies (New York: Schirmer Books, 1988).
194 Frank Lehman, “Form and Ignorability in Ambient Music” (paper presented at the National Conference of the Society for Music Theory, Arlington, VA, November 2017).
195 Lehman.

A combination of parametric and spectromorphological approaches to ambient music may likewise prove useful. Unlike drone music, the sonic events in ambient music are generally short enough that perceptual grouping and hierarchical organization may be possible. While this might not be universally true for all works that fit into this genre, it is certainly worth investigating by applying some of the methodological processes outlined here. Though there are undoubtedly aesthetic differences between the works examined throughout this project and pieces in the ambient genre, there are also many overlaps, especially in the avoidance of traditional notions of beat-based or harmony-based musical syntax. Lehman has convincingly shown that analytical procedures based in experience and phenomenology are effective in this genre, and I believe that the extension of the techniques I have developed throughout this project would prove equally useful.

Dance

Dance and beat-based electronic works are perhaps what most people initially think of when they hear the phrase “electronic music.” Indeed, the stereotypical image that comes to mind is probably a dance club or a festival stage, not the concert hall. Genres of electronic dance music (EDM) and intelligent dance music (IDM) such as house, trance, drum n’ bass, glitch hop, etc., are full of interesting and valuable music and are certainly worthy of deep and thoughtful examination. This music clearly differs from the music examined throughout this project in a number of ways. Perhaps the most salient of these is that these genres almost always feature a strong, clearly defined beat. Thus, we might wonder how the analytical methodologies outlined here are affected by the intrusion of beat and meter, as these are elements that are readily perceptible even without a score. Should we continue to utilize only spectromorphologically based parameters when analyzing this type of music, or might there be additional categories that are aurally salient and relevant for analysis?
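If beat and meter were admitted as additional aurally salient categories, they would at least be easy to estimate computationally. The following sketch is only a hypothetical illustration using the librosa audio library (an assumption external to this project, not part of the methodology): it estimates a global tempo and beat positions that could sit alongside spectromorphological parameters in a parametric analysis of a beat-based track. The file name is a placeholder.

```python
# A hedged illustration: estimate tempo and beat positions as one
# additional "parameter" for beat-based genres. The file name is a
# hypothetical placeholder.
import librosa

y, sr = librosa.load("edm_track.wav", sr=None, mono=True)
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
beat_times = librosa.frames_to_time(beat_frames, sr=sr)
print(f"estimated tempo: {float(tempo):.1f} BPM across {len(beat_times)} beats")
```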

One might even make the case that much of the music in this category is not in the same aesthetic realm as the music we have examined here, but rather much closer to “traditional” tonal acoustic music. While this may be true, it raises other questions about the efficacy of these analytical techniques even in regard to non-electronic musics, a point I will briefly touch on below.

Certainly, other scholars have already examined much of this beat-oriented electronic music (see Butler196 and Garcia197, for instance), but to my knowledge, none have done so through a methodology similar to the one used throughout this project. Perhaps the analysis of these works through an experiential/phenomenological lens may provide new insights or observations about this rich and popular genre. Furthermore, if the methodologies prove viable in this aesthetic realm, they would provide a convenient and effective avenue for introducing students and new listeners to a variety of genres of electronic music, allowing them to first examine something familiar and then slowly branch out to the unfamiliar. This, too, may prove to be an interesting avenue for future analysis and research.

196 Mark Butler, “Turning the Beat Around: Reinterpretation, Metrical Dissonance, and Asymmetry in Electronic Dance Music,” Music Theory Online 7, no. 6 (December 2001).
197 Luis-Manuel Garcia, “On and On: Repetition as Process and Pleasure in Electronic Dance Music,” Music Theory Online 11, no. 4 (October 2005).

Acoustic

Although the methodologies described and utilized throughout this project were developed specifically with electronic music in mind, there is no substantive reason to suggest that they could not be applied equally effectively to acoustic music. Consider the fundamental principle that underlies the entire methodology: experiencing sound. If, for a moment, we set aside all of the analytical methodologies that have been developed throughout music-theoretical history, it is always possible to think of all genres of music as fundamentally sound. This is not to say that traditional notions of musical analysis that rely on cognitive structures like harmony, rhythm, and melody are not useful. Rather, I believe that paying attention to the relevant sonic characteristics of any given work in a Schaefferian way can help us to further understand the intricacies of its structure as experienced through listening. For example, we could undoubtedly revisit the analysis of Kaija Saariaho’s Près for cello and electronics presented in Chapter 5 and incorporate some of the existing and ubiquitous methodologies that analysts use to deal with notated atonal music. We might, for example, do an analysis using set classes or transformational networks. It is entirely possible that an analyst would find something of value in this process, but in doing so, he or she may completely overlook the actual sounding experience of the work. The prevalent textural streaming and transfers of motion/growth processes that we observed are not necessarily present in the score or the notes; rather, these are elements which must be heard. Neither analysis is better than the other, but each tells something unique about the work, and they may be more effective when combined and synthesized.

By extending this phenomenological and experiential methodology back into music of earlier periods, we may discover additional ways to describe known musical paradigms, and we might even reveal new ones. For example, one might choose to examine the primary and secondary themes of sonata forms not only through their harmonic and melodic content, but also through their spectromorphological attributes. This might reveal additional aural qualities that tend to be present in these different theme groups that may not be apparent from looking at the page or relying on traditional methods of analysis. Parametric analysis might prove similarly useful in segmenting difficult sections of acoustic music. Perhaps there are two competing moments for the structural harmonic close of a work, and the analyst must choose one; he or she could examine the relevant aural characteristics of the piece and see which cadential candidate adheres more closely to the criteria for musical segregation (a toy version of this comparison is sketched below). This is only a hypothetical dilemma, but it is one that analysts often face and for which the methodologies outlined in this project may prove useful. Ultimately, these analytical methodologies were created to deal with sound. They do not take into account the aesthetic category into which a sound falls, but rather simply ask listeners to be in tune with their own listening experience. Thus, the techniques outlined are easily applicable to any sounding medium, whether another related genre of electronic music or even tonal acoustic music.
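To make the hypothetical cadential dilemma above concrete, the following sketch scores two competing candidate moments by the amount of parametric change in their immediate vicinity; under the segmentation criterion, the candidate with the greater local change would be the stronger boundary. The ratings, time points, and window size are invented solely for the example, and the single parameter shown stands in for whatever parameters an attentive hearing would actually select.

```python
# Illustrative arithmetic only: score two competing cadential candidates
# by the amount of local parametric change around each (the segmentation
# criterion favors the higher score). All ratings, times, and the window
# size are invented for the example.
import numpy as np

def local_change(times, ratings, t, window=7.5):
    """Sum absolute rating changes within +/- `window` seconds of `t`."""
    times, ratings = np.asarray(times), np.asarray(ratings, dtype=float)
    nearby = ratings[np.abs(times - t) <= window]
    return float(np.sum(np.abs(np.diff(nearby))))

# One parameter rated 0-10 every 5 seconds (a hypothetical hearing):
times = np.arange(0, 60, 5)
harmonic_density = [3, 3, 4, 4, 8, 8, 7, 2, 2, 2, 3, 3]

for candidate in (20.0, 35.0):  # two competing structural closes, in seconds
    score = local_change(times, harmonic_density, candidate)
    print(f"candidate at {candidate:.0f} s: local change = {score:.1f}")
```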

Conclusions

Electronic music, especially the type of “sound art” electronic music that was the main focus of this project, is often perceived as being radically different from so much of the music that makes up the canon of Western classical music. Indeed, there are many surface-level differences: pitch and harmony are not assumed to be structural elements, any sense of beat or meter is typically obscured or absent altogether, and the sound objects themselves are not even those that most listeners have come to associate with concert music. Even the concert situation itself is different, as these works are usually played back over loudspeakers in the performance hall, a process that strongly breaks with the highly ritualized nature of the classical art music world. Thus, it is not unreasonable that listeners and analysts alike assume that this music is radically different, and that it must therefore require a similarly novel set of analytical tools in order to understand its structure and function.

The oft-overlooked but crucial element that electronic music has in common with more traditional forms of music making is that they are all fundamentally forms of sound. Ultimately, all music analysis is seeking structure and meaning in a series of auditory stimuli. For this reason, we began our investigation into the sounding world of electronic music by first examining the ways in which we hear and understand sounds. Pierre Schaeffer’s 1966 Traité des objets musicaux outlined one of the first and most important theories relating the ways in which we hear to our modes of understanding in the acousmatic listening situation. The four modes (écouter, ouïr, entendre, and comprendre) create an interconnected, fluid matrix that represents the cognitive processes of hearing, perceiving, listening, and understanding. Typically, as we listen, our mind and our ears move freely throughout this space, engaging different modes as necessary. Schaeffer proposed that we should devote our analytical attention to the third mode, entendre, paying special attention to the subtle sonic nuances of any given sound. It is this idea, what Schaeffer called reduced listening (écoute réduite), on which the methodologies I have proposed are based. If one accepts Schaeffer’s proposition that the appropriate way to listen to sounds in an acousmatic artistic environment is not for their meaning but rather for their spectromorphological properties, then any effective analytical methodology must take these elements into account.

The first analytical process proposed was parametric analysis, a technique that involves listening to a work for its salient sonic parameters and tracking their relative intensities over the course of the work in order to create a model of the phenomenological experience of listening to the piece. By examining these models and comparing them with one another, we were able to propose structural designs and segmentation points based upon two criteria. The first of these, the cohesion criterion, states that perceptual groupings of sound objects into larger units occur during spans of parametric stability, periods where there is little to no change in parametric intensity. In contrast, the segmentation criterion argues that moments of high parametric change are strong candidates for formal boundaries. Each of these criteria operates under the simple yet fundamental principle of grouping like with like and segregating different from different.
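Though parametric analysis was conceived as a listening practice rather than a computation, its two criteria lend themselves to a simple mechanization. The following sketch is a hypothetical reconstruction rather than the procedure used earlier in this dissertation: it derives a value-change curve from intensity ratings for several parameters and flags peaks as segmentation candidates. The parameter names, ratings, window size, and threshold are all invented for illustration.

```python
# A hypothetical mechanization of the two criteria, not the dissertation's
# own procedure: sum absolute rating changes over a sliding window for
# several parameters; flat stretches suggest cohesion, peaks suggest
# segmentation candidates. All ratings are invented.
import numpy as np

def value_change(ratings, window=3):
    """Windowed sum of absolute first differences of one parameter."""
    diffs = np.abs(np.diff(np.asarray(ratings, dtype=float)))
    return np.convolve(diffs, np.ones(window), mode="same")

# Three parameters rated 0-10 at regular intervals (invented hearings):
params = {
    "brightness":    [2, 2, 3, 3, 7, 8, 8, 8, 4, 3, 3, 3],
    "onset density": [1, 1, 1, 2, 6, 7, 7, 6, 2, 2, 1, 1],
    "dynamic level": [4, 4, 4, 5, 9, 9, 8, 8, 3, 3, 3, 3],
}
composite = sum(value_change(r) for r in params.values())

# Flag the strongest composite peaks as candidate formal boundaries.
threshold = composite.mean() + composite.std()
print("candidate boundary indices:", np.nonzero(composite > threshold)[0])
```

In this toy reading, flat stretches of the composite curve would correspond to the cohesion criterion's spans of parametric stability, while the flagged peaks correspond to the segmentation criterion's boundary candidates.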

We analyzed the large-scale structure of four different works, representing four different time periods and compositional styles, through the use of parametric analysis. Each of these analyses showed the efficacy of the parametric technique in a different way, whether by suggesting formal structures, observing similarities in sonic trajectories, or determining relative levels of perceived stability or instability within a section. By bringing the listener and his or her experience into the process of analysis, the result is a product that contains elements of both the subjective listening process and objective examination, a true hybrid analysis. I believe that these types of analyses are extremely beneficial in that they allow each individual flexibility in incorporating his or her ideas and experiences while at the same time providing a rigorous enough methodology that analyses can be compared and discussed. Ultimately, we saw that many works of electronic music may share structural designs with more familiar musical works, including two-part and three-part designs as well as some more ad hoc designs. Though the musical markers that suggest these forms are certainly different, if we understand form as a process that lies at the intersection of extramusical knowledge and phenomenological experience, this should not be surprising. We each bring our own knowledge to the analytical table when we listen to music; parametric analysis is a way to examine our own listening and apply that knowledge.

Sound objects, sound-object fields, and their properties proved to be extremely useful when examining the formal and structural functions of individual segments identified through parametric analysis. We defined a sound object as a fixed sonic gestalt arrived at through the reduced listening of an auditory perception. Once we had identified salient sound objects within the musical discourse of a work, we extracted analytical tools from Denis Smalley’s theory of spectromorphology, a set of methods for describing the spectral qualities of sound objects and the ways in which they are transformed over time. We examined formal segments in a variety of ways through source bonding, textural streaming, motion and growth processes, spatialization and spatiomorphology, and gestural surrogacy. While each of these properties is inherently present in any field of sound objects, we again made sure to choose spectromorphological processes that were salient within our own hearings of these pieces. Thus, the descriptive and analytic tools used for each segment were not arbitrary; rather, they stemmed from a careful consideration of our own listening experience. The resulting analyses revealed a number of valuable and interesting properties of their respective formal segments, including how they function rhetorically within the piece as introductions, transitions, etc., as well as the musical processes that underlie these functions. These analyses showed that there are perceptible transformative and teleological processes that aid in the understanding of rhetorical function. In other words, the spectromorphological attributes of individual sound objects and sound-object fields transcend mere description and actually imbue our listening experience with a sense of structure and function.

We also saw that the addition of a live instrument or group of instruments did not affect our methodology or its outcomes. In fact, having a live instrument alongside the electronic component allowed us to examine a number of new properties that describe the relationship between a live instrument and electronic sound. Furthermore, we observed that even though traditionally notated scores exist for the works we explored, they were not necessary to create a convincing and nuanced analysis. All that is required is careful attention to and reflection on the experience of listening to the piece. Whether the work is for instrument and fixed media or for live electronics, the analytical processes proposed throughout this project continue to work effectively.

Finally, there is no reason to suggest that the usefulness of this methodology extends only as far as music with electronics. If we adopt the approach that all music is fundamentally sound (and recall that sound is the only thing these analytical procedures require), we could apply it to an extremely wide range of music, including other genres of electronic music as well as acoustic music. How would our understanding of traditional tonal paradigms change if we examined them through a parametric or spectromorphological lens? Could we find new and innovative ways to convey theoretical concepts to our students? We often hear the admonition that all musical analysis begins with listening to the piece, but sometimes it seems that this is where listening’s role in analysis ends. If listening to a work is one of the most crucial parts of understanding it, should it not also be one of the most important parts of analyzing it? Each individual listening experience is unique and interesting unto itself, and it is worth comparing our listening experiences with one another and engaging in meaningful, critical discourse.

BIBLIOGRAPHY

Abtan, Freida. “Where Is She? Finding the Women in Electronic Music Culture.” Contemporary Music Review 35, no. 1 (January 2016): 53–60.

Adams, Norman. “Visualization of Music Signals.” In Analytical Methods of Electroacoustic Music, edited by Mary Simoni, 13–29. New York: Routledge, 2006.

Alegant, Brian. “Listen Up! Thoughts on iPods, Sonata Form, and Analysis without Score.” Journal of Music Theory Pedagogy 22 (2007): 149–76.

Arthurs, Daniel J. “Applying Aspects of Form to Atonal Music.” Journal of Music Theory Pedagogy 18 (2004): 1–21.

Atkinson, Simon. “Interpretation and Musical Signification in Acousmatic Listening.” Organised Sound 12, no. 2 (August 2007): 113–22.

Bard-Schwarz, David, and Richard Cohn, eds. David Lewin’s Morgengruß: Text, Context, Commentary. New York: Oxford University Press, 2015.

Bernstein, David W., ed. The San Francisco Tape Music Center: 1960s Counterculture and the Avant-Garde. Berkeley: University of California Press, 2008.

Berry, Wallace. Form in Music: An Examination of Traditional Techniques of Musical Form and Their Applications in Historical and Contemporary Styles. Prentice-Hall, 1986.

Bossis, Bruno. “The Analysis of Electroacoustic Music: From Sources to Invariants.” Organised Sound 11, no. 2 (August 2006): 101–12.

Brend, Mark. The Sound of Tomorrow: How Electronic Music Was Smuggled into the Mainstream. New York: Bloomsbury Pub., 2012. http://site.ebrary.com/id/10863119.

Butler, Mark. “Turning the Beat Around: Reinterpretation, Metrical Dissonance, and Asymmetry in Electronic Dance Music.” Music Theory Online 7, no. 6 (December 2001).

Chadabe, Joel. Electric Sound: The Past and Promise of Electronic Music. Upper Saddle River, N.J: Prentice Hall, 1997.

Chion, Michel. “Concerning the Use of the Term ‘Sound Material’ in Tape Music: A New Definition of Musique Concrète.” In Structure and Perception of Electroacoustic Sound and Music, edited by Sören Nielzén and Olle Olsson, 25–32. Amsterdam: Elsevier Science Publishers, 1989.

———. Guide to Sound Objects: Pierre Schaeffer and Musical Research. Translated by John Dack and Christine North. Paris: Buchet/Chastel, 2009.

———. Sound: An Acoulogical Treatise. Translated by James A. Steintrager. Durham: Duke University Press, 2016.

Clarke, Michael. “Analysing Electroacoustic Music: An Interactive Aural Approach.” Music Analysis 31, no. 3 (October 2012): 347–80.

———. “Extending Contacts: The Concept of Unity in Computer Music.” Perspectives of New Music 36, no. 1 (1998): 221–46.

Clifton, Thomas. Music as Heard: A Study in Applied Phenomenology. New Haven: Yale University Press, 1983.

Cogan, Robert. New Images of Musical Sound. Cambridge: Contact International, 1998.

Cogan, Robert D., and Pozzi Escot. Sonic Design: The Nature of Sound and Music. Prentice-Hall, 1976.

Cook, Perry R., ed. Music, Cognition, and Computerized Sound: An Introduction to Psychoacoustics. Cambridge, Mass: MIT Press, 1999.

Couprie, Pierre. “Graphical Representation: An Analytical and Publication Tool for Electroacoustic Music.” Organised Sound 9, no. 1 (April 2004): 109–13.

Danto, Arthur C. After the End of Art: Contemporary Art and the Pale of History. Princeton, N.J: Princeton University Press, 1997.

Delalande, François. “Music Analysis and Reception Behaviours: Sommeil by Pierre Henry.” Journal of New Music Research 27, no. 1–2 (June 1998): 13–66.

Demers, Joanna. Listening through the Noise: The Aesthetics of Experimental Electronic Music. New York: Oxford University Press, 2010.

Eimert, Herbert. “What Is Electronic Music?” Die Reihe 1 (1955): 1–10.

Eitan, Zohar, and Roni Y. Granot. “How Music Moves: Musical Parameters and Listeners’ Images of Motion.” Music Perception: An Interdisciplinary Journal 23, no. 3 (2006): 221–248.

Emmerson, Simon. The Language of Electroacoustic Music. London: Macmillan, 1986.

Emmerson, Simon, and Leigh Landy. Expanding the Horizon of Electroacoustic Music Analysis. Cambridge: Cambridge University Press, 2016.

Ernst, David. The Evolution of Electronic Music. New York: Schirmer Books, 1977.

Escrivan Rincón, Julio d’, and Nick Collins, eds. The Cambridge Companion to Electronic Music. Cambridge Companions to Music. Cambridge: Cambridge University Press, 2007.

Fennelly, Brian. “A Descriptive Language for the Analysis of Electronic Music.” Perspectives of New Music 6, no. 1 (1967): 79–95.

Ferrara, Lawrence. “Phenomenology as a Tool for Musical Analysis.” The Musical Quarterly 70, no. 3 (1984): 355–373.

Ferreira, Giselle Martins dos Santos. “A Perceptual Approach to the Analysis of J.C. Risset’s Sud: Sound, Structure and Symbol.” Organised Sound 2, no. 2 (1997): 97–106.

Garcia, Luis-Manuel. “On and On: Repetition as Process and Pleasure in Electronic Dance Music.” Music Theory Online 11, no. 4 (October 2005).

Gayou, Évelyne. “Analysing and Transcribing Electroacoustic Music: The Experience of the Portraits Polychromes of GRM.” Organised Sound 11, no. 2 (August 2006): 125–29.

Goeyvaerts, Karel. “The Sound Material of Electronic Music.” Die Reihe 1 (1955): 35–37.

Hasty, Christopher F. “On the Problem of Succession and Continuity in Twentieth-Century Music.” Music Theory Spectrum 8 (April 1986): 58–74.

Heifetz, Robin Julian. On the Wires of Our Nerves: The Art of Electroacoustic Music. Lewisburg: Bucknell University Press, 1989.

Helmholtz, Hermann. On the Sensations of Tone as a Physiological Basis for the Theory of Music. Translated by J. Ellis. 4th ed. London: Longmans, Green, and Co., 1912.

Hirst, David. “The Use of Rhythmograms in the Analysis of Electroacoustic Music, with Application to Normandeau’s Onomatopoeias Cycle.” In ICMC, 2014. http://www.academia.edu/download/46300107/Proceedings_ICMC-SMC-2014_Hirst.pdf.

Holmes, Thom. Electronic and Experimental Music: Technology, Music, and Culture. 3rd ed. New York: Routledge, 2008.

Hugill, Andrew. “The Origins of Electronic Music.” In The Cambridge Companion to Electronic Music, edited by Julio d’Escrivan Rincón and Nick Collins, 7–23. Cambridge Companions to Music. Cambridge: Cambridge University Press, 2007.

Huron, David. Sweet Anticipation: Music and the Psychology of Expectation. Cambridge, MA: MIT Press, 2007.

Husserl, Edmund. Ideas Pertaining to a Pure Phenomenology and to a Phenomenological Philosophy: First Book: General Introduction to a Pure Phenomenology. The Hague, Netherlands: M. Nijhoff, 1982.

Ihde, Don. Listening and Voice: A Phenomenology of Sound. Columbus: Ohio University Press, 1976.

Iverson, Jennifer. “Statistical Form Amongst the Darmstadt School.” Music Analysis 33, no. 3 (October 2014): 341–87.

Kane, Brian. “L’Objet Sonore Maintenant: Pierre Schaeffer, Sound Objects and the Phenomenological Reduction.” Organised Sound 12, no. 1 (April 2007): 15.

———. Sound Unseen: Acousmatic Sound in Theory and Practice. New York, NY: Oxford University Press, 2014.

Kelly, Caleb. Sound. Cambridge: MIT Press, 2011.

Kendall, Gary S. “Meaning in Electroacoustic Music and the Everyday Mind.” Organised Sound 15, no. 1 (April 2010): 63–74.

Koffka, Kurt. Principles of Gestalt Psychology. London: Routledge & Kegan Paul Ltd., 1962.

Köhler, Wolfgang. Introduction to Gestalt Psychology. New York: New American Library, Mentor Books, 1959.

Kramer, Jonathan D. “Moment Form in Twentieth Century Music.” The Musical Quarterly 64, no. 2 (1978): 177–94.

———. “New Temporalities in Music.” Critical Inquiry 7, no. 3 (1981): 539–556.

———. The Time of Music: New Meanings, New Temporalities, New Listening Strategies. New York: Schirmer Books, 1988.

Krumhansl, Carol L. “Why Is Musical Timbre So Hard to Understand?” In Structure and Perception of Electroacoustic Sound and Music, edited by Sören Nielzén and Olle Olsson, 43–54. Amsterdam: Elsevier Science Publishers, 1989.

LaBelle, Brandon. Background Noise, Second Edition: Perspectives on Sound Art. New York: Bloomsbury Publishing USA, 2015.

Lalitte, Philippe. “Towards a Semiotic Model of Analysis.” Organised Sound 11, no. 2 (August 2006): 93–100.

Landy, Leigh. Making Music with Sounds. New York: Routledge, 2012.

———. “The ‘Something to Hold on to Factor’ in Timbral Composition.” Contemporary Music Review 10, no. 2 (1994): 49–60.

———. Understanding the Art of Sound Organization. Cambridge: MIT Press, 2007.

LaRue, Jan. Guidelines for Style Analysis. 2nd ed. Sterling Heights, MI: Harmonie Park Press, 2011.

Lehman, Frank. “Form and Ignorability in Ambient Music.” Presented at the National Conference of the Society for Music Theory, Arlington, VA, November 2017.

Leloup, Jean-Yves. Digital Magma: From the Utopia of Rave Parties to the iPod Generation. New York: Lukas & Sternberg, 2010.

Lerdahl, Fred. “Cognitive Constraints on Compositional Systems.” Contemporary Music Review 6, no. 2 (January 1, 1992): 97–121.

Lerdahl, Fred, and Ray S. Jackendoff. A Generative Theory of Tonal Music. Cambridge, MA: MIT Press, 1996.

Lewin, David. “Music Theory, Phenomenology, and Modes of Perception.” Music Perception 3, no. 4 (July 1986): 327–92.

Licata, Thomas. Electroacoustic Music: Analytical Perspectives. Westport, CT: Greenwood Press, 2002.

Licht, Alan. Sound Art: Beyond Music, Between Categories. New York: Rizzoli International Publications, 2007.

Lochhead, Judy. “‘How Does It Work?’: Challenges to Analytic Explanation.” Music Theory Spectrum 28, no. 2 (October 2006): 233–54.

———. “Joan Tower’s Wings and Breakfast Rhythms I and II: Some Thoughts on Form and Repetition.” Perspectives of New Music 30, no. 1 (1992): 132–56.

———. Reconceiving Structure in Contemporary Music: New Tools in Music Theory and Analysis. New York: Routledge, 2016.

———. “Texture and Timbre in Barbara Kolb’s Millefoglie for Chamber Orchestra and Computer-Generated Tape.” In Engaging Music: Essays in Musical Analysis, edited by Deborah Stein, 253. New York: Oxford University Press, 2005.

Manning, Peter. Electronic and Computer Music. New York: Oxford University Press, 2013.

Mathews, Max V., and John Robinson Pierce. Current Directions in Computer Music Research. Cambridge: MIT Press, 1991.

McAdams, Stephen. “Contribution of Timbre to Musical Structure.” Computer Music Journal 23, no. 3 (Autumn 1999): 85–102.

Meyer, Leonard B. Emotion and Meaning in Music. Chicago: University of Chicago Press, 1956.

Myatt, Tony. “New Aesthetics and Practice in Experimental Electronic Music.” Organised Sound 13, no. 1 (April 2008): 1–3.

Prendergast, Mark J. The Ambient Century: From Mahler to Moby - The Evolution of Sound in the Electronic Age. New York: Bloomsbury, 2003.

Proy, Gabrielle. “Sound and Sign.” Organised Sound 7, no. 1 (April 2002): 15–19.

Roads, Curtis. “Rhythmic Processes in Electronic Music.” In ICMC, 2014. http://speech.di.uoa.gr/ICMC-SMC-2014/images/VOL_1/0027.pdf.

Rodgers, Tara. Pink Noises: Women on Electronic Music and Sound. Durham: Duke University Press, 2010.

Schaeffer, Pierre. “Acousmatics.” In Audio Culture: Readings in Modern Music, edited by Christoph Cox and Daniel Warner, 76–81. New York: Continuum, 2004.

———. La Musique Concrète. Paris: Presses universitaires de France, 1967.

———. Treatise on Musical Objects: An Essay across Disciplines. Translated by Christine North and John Dack. Oakland, California: University of California Press, 2017.

Schwartz, Elliott. Electronic Music: A Listener’s Guide. New York: Praeger Publishers, 1973.

Shapiro, Peter. Modulations: A History of Electronic Music: Throbbing Words on Sound. New York: Caipirinha Productions, 2000.

Silver, Nate. The Signal and the Noise: Why So Many Predictions Fail--but Some Don’t. New York, NY: Penguin Books, 2015.

Simoni, Mary Hope, ed. Analytical Methods of Electroacoustic Music. Studies on New Music Research. New York: Routledge, 2006.

Slawson, Wayne. Sound Color. Berkeley: University of California Press, 1985.

Smalley, Denis. “Defining Timbre — Refining Timbre.” Contemporary Music Review 10, no. 2 (January 1994): 35–48.

———. “Space-Form and the Acousmatic Image.” Organised Sound 12, no. 1 (April 2007): 35–58.

———. “Spectromorphology: Explaining Sound Shapes.” Organised Sound 2, no. 2 (1997): 107–26.

———. “The Listening Imagination: Listening in the Electroacoustic Era.” Contemporary Music Review 13, no. 2 (January 1996): 77–107.

Stockhausen, Karlheinz. “Momentform: Neue Beziehungen Zwischen Aufführungsdauer, Werkdauer Und Moment.” In Texte Zur Musik, 1:189–210. Cologne: DuMont Schauberg, 1963.

Tanzi, Dante. “Extra-Musical Meanings and Spectromorphology.” Organised Sound 16, no. 1 (April 2011): 36–41.

Taylor, Timothy Dean. Strange Sounds: Music, Technology & Culture. New York: Routledge, 2001.

Tenney, James. Meta + Hodos: A Phenomenology of 20th-Century Musical Materials and an Approach to the Study of Form; and META Meta + Hodos. 2nd ed. [1992 printing]. Hanover, NH: Frog Peak Music, 1992.

Thoresen, Lasse, and Andreas Hedman. “Spectromorphological Analysis of Sound Objects: An Adaptation of Pierre Schaeffer’s Typomorphology.” Organised Sound 12, no. 2 (August 2007): 129–41.

Tiffon, Vincent. “Jean-Claude Risset: Sud (1985).” Musurgia 8, no. 3/4 (2001): 113–134.

Truax, Barry. “The Aesthetics of Computer Music: A Questionable Concept Reconsidered.” Organised Sound 5, no. 3 (2000): 119–126.

Tsabary, Eldad. “Which Aural Skills Are Necessary for Composing, Performing and Understanding Electroacoustic Music, and to What Extent Are They Teachable by Traditional Aural Training?” Organised Sound 14, no. 3 (December 2009): 299–309.

Voegelin, Salomé. Listening to Noise and Silence: Towards a Philosophy of Sound Art. New York: Continuum, 2010.

Weale, Robert. “Discovering How Accessible Electroacoustic Music Can Be: The Intention/Reception Project.” Organised Sound 11, no. 2 (August 2006): 189–200.

———. “The Intention/Reception Project: Investigating the Relationship Between Composer Intention and Listener Response in Electroacoustic Compositions.” De Montfort University, 2005.

Windsor, W. Luke. “Using Auditory Information for Events in Electroacoustic Music.” Contemporary Music Review 10, no. 2 (January 1994): 85–93.

Wishart, Trevor. On Sonic Art. New York: Routledge, 2016.

Young, John. “Sound in Structure: Applying Spectromorphological Concepts.” Electroacoustic Music Studies Network, 2005. https://www.dora.dmu.ac.uk/handle/2086/4756.

———. “Sound Morphology and the Articulation of Structure in Electroacoustic Music.” Organised Sound 9, no. 1 (April 2004): 7–14.

BIOGRAPHICAL SKETCH

Prior to the completion of his Ph.D. in music theory at Florida State University, Andrew Selle earned bachelor’s and master’s degrees in music composition from Bowling Green State University. His compositions have been performed nationally and internationally at venues such as the International Computer Music Conference, the annual conferences of the Society for Electroacoustic Music in the United States (SEAMUS), the National Student Electronic Music Event, Electroacoustic Barndance, and the New York City Electroacoustic Music Festival, among others. As a theorist, Andrew focuses primarily on the understanding of musical structure and syntax in experimental electronic music as well as theories of music cognition and music theory pedagogy.
