43-2410 - Module 7: Introduction Page 1 of 14

43-2410 - Aesthetics of the Motion Picture Soundtrack

MODULE 07: (Film) Music, Meaning, Emotion, Communication INTRODUCTION

t o p i c s

Focusing attention - Accents

Cognitive aspects of pitch and time in music - Melody - Contours - Tonality

Memory limits - Data reduction - Categories

Complexity, Prediction, Preference, Interest

Maximal variety with minimal elements - Musical scales as psychological constructs

Focusing Attention

Experiencing time

Our experience of time often seems to proceed independently of clock time. The musical organization of sounds may offer us a means to explore this difference. The fleeting nature of non-linguistic sound events (in music, film, etc.), accentuated by the potential absence of a referent, highlights the larger problem of defining the perceptual 'present.' The present is shaped by our selective attention to events or aspects of events. This selective attention and, therefore, our experience of 'present' events is always mediated by the memory of past events and the expectation of future ones, while at the same time reshaping both our memories and our expectations. The dialectical nature of the way we experience the present, past, and future defines the human experience of time and is explored within phenomenological research (e.g. Ricoeur, 1991; "Time and Narrative").

Two competing theories of time within psychology:

I_ Storage-size theory (Ornstein, 1969). Memory storage needs influence our estimates of time: a percept containing a large amount of information will require more storage capacity in short-term memory, generating the impression of greater elapsed time.

II_ Attentional capacity theories (Hicks et al., 1976; Block, 1978; etc.). Assuming that perceptual activities compete for attentional center stage, the more we attend to stimulus aspects the less we attend to time itself, leaving our 'cognitive clock' with the impression of less elapsed time.

Most psychological theories of time are too simple to capture the complex interrelations involved in time perception. Although phenomenological theories of time can only be examined reflectively, rather than through empirical investigation, they offer more plausible and convincing models of our experience of time. We will briefly return to phenomenology at the end of the semester.

Attention theories

I_ Due to limited channel capacity, perceptual filters admit only portions of incoming stimuli, blocking the remainder. Blocked stimuli never reach long-term memory. Data related to the 'cocktail party effect' seem to support this theory, while evidence of later recall of 'unattended' information does not. Consistent with the 'limited channel capacity' approach, Dowling et al. (1987) argued that melodic expectations operate within a time-window of limited width.

II_ Attention is a process that operates on memory rather than on the stimulus, with all stimuli being submitted to memory. Observers can attend to more than one stream of information at a cost (time, strain, etc.). According to parallel-access theories, attention controls the flow of information rather than simply blocking it or letting it through.

III_ Attention is guided by schemata that allow us to anticipate 'important' features of incoming streams of stimuli. Attention can be seen as a manifestation of the claimed interaction between explicit and implicit perceptual rules, an interaction that helps us parse incoming information in a manner meaningful to us. Memory, attention, and perception are intricately linked, with implicit rules assisting attention, which in turn supports the transfer of information between short- and long-term memory.

Intra-cultural consistency and inter-cultural differences in the way listeners complete unfinished melodic (Carlsen, 1981) and harmonic (Bharucha and Stoeckig, 1986) passages support the view that musical expectations and the associated schemata are culture-dependent. Unyk and Carlsen (1987) showed that musical pieces that do not conform to individual listeners' expectations are more difficult to process.

Listening to music appears to be a perceptual scanning process in which attention shifts focus, allowing us to track more than one event through time. Implicit rules fill in the gaps caused by the perceptual 'zig-zag' that allows us to pick isolated sound events out of a complex sound environment, creating the 'illusion' of simultaneous perception of more than one series of events. This scanning process varies from person to person and within the same person at different times, since implicit rules, explicit rules, and their interaction vary according to previous experience and context.

Devices that improve the efficiency of the scanning process during listening include: • repetition, • gradual introduction of musical layers, • selective variation of one or two layers at a time while the rest remain unchanged, • selective attenuation/removal and amplification/reintroduction of layers in time, • assignment of different contours/timbres/registers per layer, etc. The complex nature of this process may explain why musical pieces that make use of such devices don't seem to lose their 'freshness' even after many hearings.

Listen to "Good Vibrations" by the Beach Boys

Accents

Focusing attention on accents

Attention focuses on accents (points with perceptual salience/importance). Accents are created through contrasts; through changes of some sort (shape, color, direction, speed, volume, size, lighting, pitch, loudness, timbre, rhythm, etc.), which help us identify boundaries and parse incoming stimuli into meaningful units. Several empirical studies suggest that perceptually salient portions of a melody (i.e. accents) are attended to more often and are recalled more easily than other portions.

Melodic units / boundaries

The units that melodies are made of are not the individual notes. During a piece of music there are too many events taking place, in too short a time, for short-term memory to handle them all. Rather, perception produces a coded version of a musical piece based on how our implicit and explicit rules interact to identify accents and 'create' local or global features. Melodic 'phrases' are in some respects analogous to language phrases, which are organized in structural units based partly on breathing restrictions (3-5 seconds per phrase). But, more importantly, melodies are perceived in terms of patterning that implies redundancies (repetitions). Redundancy fosters predictability which, in turn, gives rise to expectation.

Examining melodies as sets of distinct and isolated notes is inadequate, since it does not allow for the patterning that is at the heart of the experience/prediction/expectation/game-of-expectations cycle. Miller and Selfridge (1950) demonstrated that when we break a whole down into a Markov chain of order 0 (a series of unrelated and isolated bounded units) we destroy the level of meaning that is based on syntax (the relationships among units that create a larger, single, bounded event). [Note: A Markov chain is a sequence of random values whose probabilities at each step depend upon the value at the previous step. It outlines the probability that a system will move to a particular new state, given its current state.] In the Miller and Selfridge experiment, participants were asked to contribute 1, 2, 3, etc. words to a text after reading 0, 1, 2, 3, etc. of the already existing words respectively. (A similar experiment was conducted by Davies (1978) using notes instead of words.) The more words the participants were able to read (i.e. the more developed the syntax), the more coherent and meaningful their contributed text, since they were better able to rely on the context provided by the existing phrase. In other words, the more we are able to reflect on the past, act in the present, and project onto / anticipate the future, the more meaningful our actions. This point may illuminate what is meant by the Gestalt psychology statement "the whole is greater/different than the sum of its parts," while highlighting the importance of the experience of time to our understanding of music as well as of the experience of music to our understanding of time.
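The contrast between order-0 strings and higher-order approximations can be sketched in a few lines of code. This is an illustrative toy, not a reconstruction of the original experiment; the corpus and function names are invented for the example.

```python
import random

# Toy corpus (hypothetical) from which word strings are generated.
corpus = ("the melody rises and the melody falls and "
          "the listener expects the melody to return").split()

random.seed(0)

def order_0(n):
    # Order 0: each word drawn independently -- no syntax at all.
    return [random.choice(corpus) for _ in range(n)]

def order_1(n):
    # Order 1: each word drawn given the previous word, so local
    # word-to-word syntax is preserved.
    transitions = {}
    for prev, nxt in zip(corpus, corpus[1:]):
        transitions.setdefault(prev, []).append(nxt)
    out = [random.choice(corpus)]
    for _ in range(n - 1):
        out.append(random.choice(transitions.get(out[-1], corpus)))
    return out

print(" ".join(order_0(8)))  # tends to read as an unrelated word salad
print(" ".join(order_1(8)))  # locally coherent word-to-word flow
```

The higher the order (the more preceding words condition each choice), the more syntax, and hence recoverable meaning, survives in the generated string.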

To summarize: in both language and music, meaning is not simply lexical (it does not depend only on which words / notes are used); it is also syntactical. It depends on the way the words/notes are patterned together; on how they relate to their past (previous words/notes) and what they anticipate as their future (words/notes to come). In addition, such relationships are always mediated by what the listener/reader brings to the experience. Melodies are not random collections of notes (Markov chains of order 0) but larger units tied together by syntactical rules and separated from each other by boundary-providing accents. The bounded musical units outlined by accents incorporate redundancies, and their periodic repetition is key to predictability. Sound events in time are therefore organized in terms of contours (pitch, temporal, etc.), tracking accents and outlining flexible musical units.

Cognitive aspects of pitch and time in music

Melody - Contours

Melody can be thought of as the superimposition (layering/stratification) and interaction of pitch (melodic) and temporal (duration) contours. A more complex model would also include dynamic and timbral contours, outlined by accents based on dynamic or timbral changes respectively.

Pitch (melodic) contour describes the pattern of pitch direction changes within a melody (pitch can either go up, down, or remain the same). Points of change in pitch direction (pitch contour inflections) are perceptually salient. They correspond to contrasts, which become the accents that outline bounded melodic units. Two melodies have the same pitch contour if they have the same sequence of pitch contour inflections, even if the interval sizes differ. Pitch changes in a melody that do not alter the pitch contour are perceived as melodic variations rather than new melodies. On first hearing, it is the pitch contour that is stored in memory, with the exact pitches being subsequently 'assigned' to this contour. If a pitch contour includes large pitch leaps, the resulting melody is considered 'disjunct'. If the opposite is true, the resulting melody is considered 'conjunct'. Pitch-contour periodicity outlines the pitch-contour clock.

Temporal (duration) contour describes the pattern of sound and silence duration changes (a sound/silence event can become longer, shorter, or remain the same). Points of change in 'sound-event' duration (duration contour inflections) are perceptually salient. They correspond to contrasts, which become the accents that outline bounded rhythmic units. Two melodies have the same temporal contour if they have the same sequence of temporal contour inflections, even if the actual durations differ. Duration changes in a melody that do not alter the temporal contour are perceived as melodic variations rather than new melodies. Temporal contour periodicity outlines the temporal contour clock.
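Contour coding can be made concrete with a short sketch. This is a minimal illustration; the melodies and the MIDI-note-number encoding are assumptions of the example, not taken from the text.

```python
# Reduce a note sequence to its pattern of direction changes
# (+1 up, 0 same, -1 down). Pitches are MIDI note numbers.
def contour(pitches):
    return [(b > a) - (b < a) for a, b in zip(pitches, pitches[1:])]

tune      = [60, 62, 64, 62, 60]   # C D E D C: stepwise up-up-down-down
variation = [60, 65, 67, 62, 55]   # same shape, but with wide leaps

print(contour(tune))       # [1, 1, -1, -1]
print(contour(variation))  # [1, 1, -1, -1] -- identical contour, so the
                           # second line is heard as a variation of the
                           # first rather than as a new melody
```

The same sign-coding idea applies to temporal contours, with successive durations compared instead of successive pitches.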

Every temporal contour outlines a beat (the underlying pulse of a piece of music) that follows a specific tempo (number of beats per unit time; the average personal tempo has been estimated at ~100 beats per minute; Fraisse, 1982) and is organized into repeating accented patterns (meter), which reflect the interaction among temporal contour, clock, and beat.

Several studies (Werner, 1940; White, 1960; Monahan et al., 1987; Kendall and Carterette, 1996; Palmer, 1996) have indicated that pitch contour is not the most salient aspect of a melody. For example, a descending diatonic scale and the opening of "Joy to the World" have identical pitch contours but can easily be recognized as two different 'tunes' because of their different temporal (duration) contours. Similarly, changing the temporal contour of a melody while keeping its pitch contour intact will most likely result in what is perceived as a new melody (e.g. the "America" example discussed below). At the same time, there are limited cases where pitch contour changes alone do lead to the creation of a new melody (rather than a melodic variation), e.g. the "Star Spangled Banner" and "Happy Birthday" examples.

Rhythm is a collective property of a piece of music, emerging out of the combination of all available patterns of sonic contrast (pitch, dynamics, duration, timbre) within the piece.

The terms pitch contour and temporal contour do not refer just to notes on paper but also to sounds in time, produced by the performer(s) (live or through a recording). During performance, deviations from or micro-variations in the intended (on the score) contours, especially in terms of time and dynamics, function as a performer's means of expression. This 'expressive intent' of the performer plays an important role in the 'message' communicated to the listener(s). The importance of rhythmic, dynamic, and other micro-variations through time to musical expression has been explored by several studies, including Gabrielsson (1988), Kendall and Carterette (1990), and Palmer (1996).

Monahan et al. (1987) modeled melody in terms of temporal and pitch contour layers. They found that it is easier to remember melodies when the pitch and temporal contour clocks 'line up'.

Examples

'Frère Jacques' (a French children's song) opens with a simple and conjunct (no large pitch-leaps) pitch contour that has a regular pitch contour clock (4 notes per repetition). The temporal contour of the opening phrase is relatively 'flat', with only minor duration changes (most pitches are quarter-notes with no rests inserted) and a periodic temporal contour clock (1 note per repetition). The resulting melody involves absolutely no conflict between pitch and temporal contours. Pitch and time accent structures align, resulting in a simple and, at some level, uninteresting melody.

Similarly, 'Three Blind Mice' (another children's song) incorporates a conjunct pitch contour with a regular pitch contour clock (3 notes per repetition) that aligns perfectly with the rhythmic contour clock (also 3 notes per repetition), resulting in another example of a simple (and dull) melody.

Such pieces, along with several nursery and pop tunes, are extreme examples of simple melodies. In most other cases there is no perfect contour alignment or clock periodicity, at least not for the entire duration of a piece or theme, since this would result in such a high degree of redundancy as to make a piece not only easily codified but also uninteresting (more on redundancy and interest below). The 'Star Spangled Banner', for example, layers a pitch contour clock in 2s over a rhythmic contour clock in 3s, resulting in a relatively complex and interesting melody. As a pop music example, 'Yesterday', by the Beatles, involves a more complicated relationship between pitch and rhythmic contours, with clocks that are not perfectly periodic and melodic lines that are not always conjunct (increased complexity). Still, it is possible for listeners to trace an overall arch-like contour that becomes the piece's signature (decreased complexity). This balance in complexity helps sustain both high interest and high preference levels. *Yesterday (The Beatles)

Layering multiple non-aligned pitch and rhythmic contours results in polyrhythms. Musical pieces in jazz, Renaissance, 20th-century, and several non-Western traditions employ polyrhythms of varying degrees of complexity. Since contour repetition facilitates memory (Monahan et al., 1987), pieces with complicated contour relationships compensate for the information overload through repetition [e.g. Star Spangled Banner; Yesterday; Theseus and Minotauros (Daedalus Project); In the Mood (Garland); A Night in Tunisia (Gillespie).] *Theseus and Minotauros (Daedalus Project) *In the Mood (Garland) *A Night in Tunisia (Gillespie)

Temporal vs. pitch contours

As suggested by the "Joy to the World" example mentioned earlier, it appears that temporal contour is more important than pitch contour in melody coding. In "America," for example, temporal and pitch contours are aligned (in 3s). If we superimpose a syncopated temporal contour in 2s (tango), the piece becomes unrecognizable, even if the precise sequence of pitches remains unchanged (Kendall and Carterette, 1996).

Due to this primacy of temporal contours, "originality" in a melody often depends on unusual rhythms (e.g. Pink Floyd's Money: 7/4, compound meter; Jackson 5's I Want You Back: multiple layers of temporal contours). Theme variation often involves changes in one of the contours while keeping the other relatively unchanged. This results in a variety of melodic phrases that are still understood as parts of the same composition. If single melodies are understood in terms of pitch and temporal contour layering, musical pieces can be understood as the layering of one or more melodies, with temporal contour clocks of various degrees of periodicity.

Note: Identifying the perceptual boundaries (accents) that help define contours depends on previous learning and experience. For example, a musical listening task that appears simple and clear-cut to a native of Indonesia may appear complicated and disorganized to a Western-trained listener, and vice versa. Whether a pitch or temporal inflection will constitute an accent (a salient point) depends not only on physical contrasts but also on implicit rules of data organization that utilize schemata acquired through experience. This point notwithstanding, melodies are not coded as individual pitch chromas at first hearing. Rather, implicit rules assist us in coding accent patterns, resulting in a melody being remembered in terms of the melodic motion implied by the identified pitch and temporal contours. Motion indicates more than shape (contour); more importantly, it indicates time. Pitch or temporal contour inflections become accents only after they have occurred, pointing to the importance of the syntactical/temporal aspects of musical organization that will be addressed again at the end of the course.

OPTIONAL SUB-SECTION

Tonal harmony, atonal contexts, and non-western music

Tonal harmony provides a learned context that guides perceptual organization of melodic and harmonic passages and supports operation of the various "gestalt" principles (more on Gestalt Psychology at the end of the semester).

Tonality: An organized set of pitch relationships around a central tone (key/tonic). Tonality outlines how likely or unlikely a note is to be included in a melodic or harmonic passage, given the notes that have already been played. The rules that guide tone relationships within a given key are not fully understood. Geometric models of pitch relationships, such as Shepard's pitch spiral, Krumhansl's pitch cone, or the Pythagorean circle of fifths, represent attempts to describe tonality's organizational principles by outlining tonal hierarchies.

Consonance/Dissonance: Consonance (representing stability/release) and dissonance (representing instability/tension) are often defined in terms of interval relationships in the context of tonal hierarchy. In harmonic passages, consonance also describes the degree of a harmonic interval's/chord's pleasantness/fittingness/perceptual smoothness. For several musical traditions, one of the basic principles in melodic and harmonic development has been the contrast created by a back-and-forth motion between consonance and dissonance.

Within the context of tonality, the term harmony refers to the set of melodic and harmonic expectations outlined within a given key and to the melodic and harmonic implications (related to key-based expectations) of what has already been performed. Following the rules of tonality results in musical pieces that give the listener a sense of direction/progression. Tonality sets up expectations of what will come next, and composers play with those expectations (some more successfully than others).


When listening to unknown pieces, listeners trained in the Western musical tradition make tonal assessments swiftly and often implicitly (as shown in the studies by Butler and Brown), determining tonality based on the absence as well as the presence of certain intervals. Butler's 1982 study on the influence of melodic motion on tonality judgments provides convergent evidence that the use of consonant harmonic intervals alone does not convey a tonal center as efficiently as the combination of consonant and dissonant intervals. Tonally knowledgeable listeners are able to extrapolate tonal harmony information from incomplete tonal cues, and any tone one hears may suffice as a tonal center until additional tonal evidence prompts the listener to opt for a more plausible choice.

In atonal contexts, intervallic similarity among chords does not translate to perceptual similarity. This result could have been expected based on Butler's (1982) melodic motion study, as well as the experienced dissimilarity of major and minor triads in tonal music (both have the same intervallic content in different distributions). It may be that atonal chord similarity/dissimilarity judgments are based on a combination of musical context, previous experience, and the spectral distribution of the chord-signals in question.

Most musical traditions in the world employ hierarchical tone structures analogous to, even if quite different from, Western tonal harmony. In addition, tone hierarchies within individual musical traditions are more salient to members of the same tradition than to "outsiders," while different traditions make equally strong claims of "good" musical organization for widely differing musical structures. These observations point to the conclusion that tone hierarchies, musical scale systems, consonance/dissonance or tension/release judgments, perceptual similarities/differences, music pattern recognition, etc. may, to a large extent, be culture-dependent. It can be hypothesized that previous learning and experience support the creation of widely differing implicit rules among traditions, which give different results although they are processed through similar cognitive principles.

END OF OPTIONAL SUB-SECTION

Memory limits - Data reduction - Categories

All music making and listening is, at some level, an act of communication. The possible complexity of communicated messages is limited by the versatility of the message encoding process. Information received (input) is processed in memory, and behavior (output) is closely related to this process.

According to Information Theory, a perceptual system's efficiency is based on its ability to reduce the 'size' of a message without significantly (i.e. noticeably) altering it; on its ability to identify patterns, reduce uncertainty, and increase predictability (Shannon & Weaver, 1949), while, at the same time, allowing for maximum possible variety.

There are two contrasting definitions of Information: a) number and variance/difference among the possible events within a system and b) anything that reduces uncertainty.

The human mind has limited channel capacity, imposing a limit on the amount of information that can be processed before we have an "overflow." Theoretically, the limit to the rate of information that short-term memory can deal with is expressed by Miller's 7±2 rule: According to Miller (1956), a maximum of only 7±2 unrelated / un-patterned / random events out of a uni-dimensional continuum can be stored in short-term memory. This means that the complexity of the outside world needs to somehow be reduced in order to become perceptually meaningful. Music would be impossible without some sort of data reduction. Implicit rules have evolved to help us deal with this 'data overload' by abstracting criterial attributes (: essential aspects) from incoming input, relative to existing schemata.

Criterial attribute: An attribute whose presence is necessary and sufficient for the existence of a category.

Musical schemata: Knowledge structures, developed from prior experience through immersion in one's musical culture from early childhood. Musical schemata help an observer organize perceptions into cognitions. Examples include melodic contours, intervals, pitch chroma (i.e. the different notes within a single octave), and tonality (Dowling and Harwood, 1986; in Radocy and Boyle, 1988). Schemata are stable but not fixed and can be modified through further experience.

In hearing, the accuracy with which we can resolve sound (e.g. in terms of Just Noticeable Differences) is so high that it allows for practically infinite possibilities, outlining a system that would create a 'data overload.' The musical systems we invent involve parameters that satisfy the need for data reduction, and processes that allow for maximum variety (: maximum possible combinations) within the resulting set of reduced data.
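A rough calculation illustrates the scale of this 'data overload.' The ~0.5% JND figure below is an assumed round value for the example only; actual pitch JNDs vary with frequency and loudness.

```python
import math

# Count how many just-noticeably-different pitch steps fit in the
# nominal audible range, assuming a JND of roughly 0.5% in frequency.
low, high = 20.0, 20_000.0   # Hz, nominal limits of hearing
jnd_ratio = 1.005            # assumed ~0.5% just-noticeable difference

steps = math.log(high / low) / math.log(jnd_ratio)
print(round(steps))          # on the order of 1400 discriminable steps

# Compare with the 12 pitch categories per octave (roughly 120 across
# ten octaves) that the Western system actually retains.
```

The gap between what the ear can discriminate and what musical systems retain is exactly the data reduction discussed above.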

Data reduction: A process that reduces the infinite variability of the world of experience so that it becomes workable in short-term memory. The term actually refers not so much to the elimination of individual pieces of information as to a reduction in complexity / randomness / uncertainty and an increase in predictability, based on identifying redundancies. It refers to the organization / patterning of random information into larger units, called chunks or categories. Chunks are rather temporary and depend on immediate context and implicit rules, while categories are long-term organizations that depend on long-term previous learning and experience. Perceptual chunking can organize several existing categories into a single chunk that, should the same process be repeated sufficiently often, may become a new, higher-level category.
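A trivial sketch makes the chunking idea concrete (the digit string is an arbitrary example):

```python
# The same 12 digits exceed Miller's 7±2 limit when treated as isolated
# items, but fit easily once grouped into a few larger chunks.
digits = "192820151968"

as_items  = list(digits)                                       # 12 unrelated items
as_chunks = [digits[i:i+4] for i in range(0, len(digits), 4)]  # 3 chunks

print(len(as_items))   # 12 -- beyond the 7±2 span
print(as_chunks)       # ['1928', '2015', '1968'] -- 3 year-like chunks
```

The chunks work here precisely because they map onto an existing category (four-digit years); random groupings would reduce the item count but carry less meaning.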

Categories: Psychological constructs connected to the idea of data reduction. Large numbers of data can be enveloped in a single category, which is then understood in terms of the criterial attributes that link these data into a unit. "To perceive is to categorize, to conceptualize is to categorize, to learn is to form categories, to make decisions is to categorize." (Bruner, 1996)

The octave interval is an example of data reduction in terms of categories. With the Octave as the basic category, the relevance of Miller's rule to the way musical systems are constructed is reflected in the fact that, in practically all music cultures, the pitch systems used to construct melodies employ a maximum of 7±2 individual pitches within the octave. This is not to say that only 7±2 pitches will be available. The Western musical system has 12 pitches available within an octave and other musical traditions have even more. No musical system, however, violates Miller's 7±2 rule in terms of the number of functional pitches used in melodic units. These functional pitches usually belong to pitch subsets called scales (scale: system prescribing the frequency relationships between pitches within an octave).

The unique sonic character or 'signature sound' of different pitch intervals outlines musical categories that support prediction during listening. Combinations of these signature sounds can create cognitively recognizable patterns which outline higher level perceptual groupings in the form of temporary chunks or more permanent new categories. Categories can be multidimensional, incorporating many interrelated criterial attributes, and may therefore vary in one or more attributes simultaneously without losing their identity.

Complexity, Prediction, Preference, Interest

W. Wundt (1832-1920) postulated the following relationship among complexity, preference, and interest.


Figure: Relationship among preference, interest, and complexity (i.e. information content) (after Berlyne, 1971).

Preference decreases when there is extreme simplicity (redundancy; little information; maximal order; low entropy), as well as when there is extreme complexity (randomness; too much information; minimal order; high entropy).

Maximum preference corresponds to an optimum level of complexity, which is, in general, different for each individual and changes according to previous knowledge of the system in question. A system has maximum entropy when all possible events within the system are equally likely. If such a system involves a large number of possible events, then prediction becomes difficult, resulting in a high level of uncertainty. A musical example of maximum entropy is strict dodecaphonic, serial music.
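The entropy claim can be illustrated with Shannon's formula. The 'tonal' probability profile below is invented for the example; only the uniform case reflects the serial ideal described above.

```python
import math

# Shannon entropy: average uncertainty, in bits, of a probability
# distribution over possible events (here: the 12 pitch classes).
def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

uniform = [1 / 12] * 12            # serial ideal: all tones equiprobable
tonal   = [0.25, 0.2, 0.15, 0.1, 0.1, 0.05, 0.05,
           0.04, 0.03, 0.01, 0.01, 0.01]  # made-up tonal-hierarchy weights

print(round(entropy(uniform), 3))  # log2(12) ≈ 3.585 bits, the maximum
print(round(entropy(tonal), 3))    # lower: the hierarchy makes tones predictable
```

Entropy is maximal exactly when all events are equally likely; any tonal hierarchy skews the probabilities and lowers the listener's uncertainty.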

Interest follows a similar relationship, but with maximum interest corresponding to an optimum level of complexity that is, in general, higher than that for maximum preference. In a sense, successful / good art strikes a balance between low and high entropy. The fact that a system's degree of entropy, and its relationship to interest and preference, is relative to the observer may explain individual response differences when experiencing works of art in general and music in particular, as well as individual differences in what constitutes successful / good art.

Maximal variety with minimal elements - Musical scales as psychological constructs


Both major and minor scales contain 7 of the 12 available pitches, distributed in such a way as to include five whole-tone intervals and two semitone intervals. Balzano [(1980). The group-theoretic description of 12-fold and microtonal pitch systems. Computer Music Journal, 4 (4): 66-84] has shown mathematically that, to get maximum variety (maximum interval-combination possibilities) from only 7 pitches out of an octave, we must follow the exact configuration present in the major and minor scales. That is, we have to divide the octave into 12 log-frequency units (a frequency ratio of 2^(1/12) per unit) and select 7 with the distribution present in the equal-temperament major/minor scales. This distribution makes it possible for the major and minor scales to reproduce all the intervals present in the chromatic scale. The principle of coherence (: no distance/interval between any two successive scale notes/degrees should be larger than the sum of any two successive intervals of the scale) is closely related to maximum intervallic variety and to the possibility of developing full functional harmony within a scale.

Notation of the major scale from C4 to C5 showing the intervals in semitones. All 12 intervals are possible, indicating that this specific scale (a subset of the twelve tones in a semitone-divided octave) satisfies Miller's rule while, at the same time, allowing for maximal intervallic variety (in Kendall & Carterette, 1996).
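The interval-coverage claim can be verified directly. This is a small sketch; encoding pitch classes as semitone numbers 0-11 is an assumption of the example.

```python
# The 7-note major scale, as a subset of the 12 equal-tempered
# semitones, yields every possible interval within the octave.
major = [0, 2, 4, 5, 7, 9, 11]   # semitone positions of the major scale

# Collect all directed intervals (in semitones, mod 12) between
# distinct scale degrees.
intervals = {(b - a) % 12 for a in major for b in major if a != b}
print(sorted(intervals))  # every non-zero interval 1..11 (plus the octave itself)
```

So this particular 7-of-12 selection keeps the pitch count within Miller's 7±2 limit while losing none of the chromatic scale's intervallic variety, which is the heart of Balzano's result as summarized above.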

Musical Scales - General Definition

Musical Scale: system prescribing the frequency relationships between pitches within an octave.

The selection of pitches can be arbitrary. It reflects a compositional / aesthetic choice, often guided by the properties of the available musical instruments and by cultural standards, rather than a 'correct' set of tones that is guided by nature (e.g. the harmonic series, the construction of the ear, etc.).

Musical Scales as Psychological Constructs

The entire Western tuning system (equal temperament: the octave divided into 12 equal log units) and Western scale system (major / minor) seem to be based on an apparently optimal combination of a) cognitive limits (short-term memory / Miller's rule), b) data reduction / pitch circularity (octave), and c) maximum variety (interval distribution within major/minor scales).
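The equal-tempered tuning just summarized can be written out as a formula (a standard calculation; the A4 = 440 Hz reference is the modern convention):

```python
# Equal temperament: the octave divided into 12 equal log-frequency
# units, i.e. a frequency ratio of 2^(1/12) per semitone.
SEMITONE = 2 ** (1 / 12)

def freq(semitones_from_a4):
    return 440.0 * SEMITONE ** semitones_from_a4

print(round(freq(0), 2))    # 440.0  (A4, the reference)
print(round(freq(12), 2))   # 880.0  (A5: twelve units reassemble the 2:1 octave)
print(round(freq(3), 2))    # 523.25 (C5)
```

Because every semitone has the same ratio, any interval sounds the same in any key, which is what lets the major/minor scale pattern be transposed freely.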

The Octave and Miller's rule probably represent the only two musical universals.

Module 07: Introduction - Part I - Part II

Columbia College Chicago


43-2410 - Aesthetics of the Motion Picture Soundtrack

MODULE 07: (Film) Music, Meaning, Emotion, Communication Part I

t o p i c s

Music, Meaning, Emotion, Communication - Overview

Behavioral approaches to music and productivity, consumption, intellect, & emotion

Cognitive approach to the question of meaning and emotion

Approaches to musical meaning within music scholarship

Music, Meaning, Emotion, Communication - Overview

Introduction

THESIS: Meaning is neither a property of a stimulus (as a physical entity), nor the result of a physiological response to such stimulus, nor a contextual side-effect, nor a product of 'the mind' (whether understood as a "black box" or a collection of firing neurons and brain areas, activated in response to a physical/imagined stimulus or even spontaneously). Rather, meaning is a manifestation of the intersection and feedback among the stimulus as a physical entity, context, our physiological response to the stimulus, and our cognitive processing of the physiological response. It is partially received from and partially attached to the stimulus.

Addressing the question of music’s contribution to film in terms of meaning and emotion requires addressing the more general question of music’s ability to communicate meaning and emotion. In addition, any questions on music’s meaningful and feelingful potential cannot be addressed in isolation from general questions on meaning and emotion.

Central questions

Assuming that music performance and listening are meaningful and feelingful activities (i.e. activities that have specific meanings and elicit specific emotional responses), where do the meanings we identify in music and the emotions we experience when confronted with a musical work come from? More specifically, how do the specific meanings and emotions associated with a specific piece of music relate to this piece’s musical qualities? (Remember, we defined music as temporally organized sound and silence, a-referentially communicative within a context.)

To ensure that our answer addresses musical as opposed to linguistic meanings and emotions, we will focus mainly on instrumental music. There are two reasons, beyond the fact that this is a film sound aesthetics class, that make it appropriate to illustrate the points raised using examples from film:

 Film provides the most popular forum for wide dissemination of the work of contemporary art music composers, a forum that both relies on and contributes to the context surrounding any musical experience, and is therefore an integral part of examining music as a form of communication; and

 Issues of musical meaning and emotion come to the forefront in film, where musical meanings interact with meanings within the visual and linguistic domains, where meaning is more denotative, more easily identifiable, and less fluid than in music.

Outline of our approach to the question of music's meaningful and feelingful potential

We will approach the topic in a systematic and controlled manner. On their own, answers of the type “This is my gut feeling about this piece of music” or “I know so”, or answers that invoke long-winded lyrical descriptions of musical passages (akin to programmatic music of the Romantic period) will not suffice. The key terms of the approach are “systematic” and “controlled” and are what separates research from common sense.

Both research and common sense rely on the relationship among a series of concepts and conceptual schemes, outlined to explain and make sense of observations and experience.

The differences between research and common sense are that research will

1. explore systematically and in a controlled manner all possible implications of a suggestion/belief. For example, while examining a hypothesis of how we believe music may contribute to the meaning/emotion of one specific scene, we should also i) be prepared to see if/how our conclusions may help us interpret a different scene and ii) ensure that we can, in a confident manner, attribute our conclusions to a specific musical variable, by controlling all others.

2. actively explore avenues that contradict our beliefs. Rather than turning a blind eye to exceptions or dismissing them as “deviant,” we will systematically address the implications of such “exceptions” to our hypothesis, and be open to modifying the theory underlying our hypothesis so that it can address them.

Depending on the methods followed to accomplish the above goals we can distinguish between scientific (involving the process of empirical investigation) or philosophical (involving reflective thought) research. All research involves a struggle between validity and reliability. As validity increases (i.e. as experimental contexts, in practical or thought experiments, approach music-experience contexts), the complexity of the model/thought increases and the reliability (i.e. degree of predictiveness / replicability of results, and therefore their claim to demonstrate facts/truths) decreases. Conversely, as the complexity of the model decreases, reliability increases but validity decreases.

Definition of Music (reminder from Module 01)

Before we can examine how music works in films, we need to agree on an initial definition that captures what is essential (i.e. necessary and sufficient) to what we experience as music. The definition proposed in the beginning of the class is: “Music is temporally organized sound and silence, a-referentially communicative within a context.” (in Kendall, 2005a).

Emphasis is placed on the temporal and communicative aspects of music. Music is an essentially temporal art, a fact that may point to its significance [music as virtual time (Langer); music articulating our emotional dimension (Reimer)]. We are interested in music as it relates to listeners and in how manipulating music affects “knowing” and “feeling.”

Among other things, music is therefore seen as a link between performer and listener. Listening to music always involves some sort of response, some kind of behavioral change, indicating that we “received” and/or participated in the creation of something. What is received is an intention communicated in terms of patterning/configuring/organizing (akin to the prosodic aspect of language).

The importance of context is a direct consequence of music as communication. In order for any communication to take place some sort of shared knowing is required and this is what we are going to broadly refer to as context.

The potentially a-referential/self-referential nature of music is what distinguishes it from other forms of communication (e.g. language). As an analogy, music may be seen as a “noun-less” language, made just from verbs (: potential, motion, action, narrative --> time; within this approach, musical associations may be considered equivalent to adjectives).

Observations / Beliefs about musical meaning and emotion

Every systematic examination of a phenomenon is based on initial observations drawn from our practical field of experience.

Our experience indicates that all music, including instrumental music, has both meaningful and emotional qualities. Different musical pieces both mean and feel different things to us. So, the instinctive reaction is to agree that: Musical meaning and emotion must be related to the specific musical piece in question.

This belief seems to be challenged by the following observation.

Meaning and emotion are much harder to pinpoint and agree upon in the case of music than in the case of language. In fact, the same piece of music may give rise to widely different meanings and elicit widely different emotions in different people, or even in the same person at different times. This may be the case with language and text (especially poetry) as well, but the level of agreement seems, in general, to be much higher in language than it is in music. Could this mean that: Musical meanings and emotions are qualities that we arbitrarily attach to individual pieces based solely on extramusical parameters (e.g. intellectual/emotional state of mind, context of musical performance/listening, general cultural background, etc.)?

Such a conclusion does not seem to make much sense either. Our experience of music (true for both musicians and non-musicians) and our intellectual and emotional responses to it appear to be so closely related to specific pieces (at least for individual listeners) that it is hard to accept that specific musical qualities (i.e. the way sounds are organized in time) have little or no relevance to what music means and feels to us. Even within the same context/listener/moment in time, different musical pieces do mean/feel different things, indicating that context/listener/time alone cannot account for music’s meaning. Actually, since in such an example everything other than the musical piece remains largely unchanged, the observed difference in meaning/emotion must be related to differences in the musical pieces.

The challenge is to find an answer to the question of musical meaning and emotion that a) links meaning and emotion to musical qualities of specific musical pieces, while b) can account for the observed variability in meanings and emotions associated with specific musical pieces.

Meaning, Emotion, and Communication

The concepts of meaning and emotion imply communication between us and some input from the external (to us) world, including other people. Such "input" may be current, remembered, or even imagined, blurring the boundaries of what is internal or external.

"Behavioral" approach to communication

Behaviorism (pioneered by Pavlov, Skinner, Hull, and, in music, Seashore) is an approach based on the idea that human behavior obeys laws (e.g. Pavlov's study of conditioned reflexes (1927)). These laws outline the ideal of how our responses to a stimulus are expected to relate to this stimulus. In such a system, we are just an intermediate stage in the translation of a stimulus into a response, able to only 'contaminate' or 'distort' the output (meaning) which is already contained in its entirety in the stimulus (e.g. in a musical piece).

Behaviorism has had (and still has) many faces in musical scholarship, as exemplified in the various beliefs regarding what music is.

Those who believe that the meaning and importance of a piece of music is found in its (first column) expect to understand everything about a musical piece by exclusively studying and analyzing the (second column):

score -> music notation
sound -> acoustical aspects of sound
sound signal -> mathematical aspects of signals
history -> historical background
cultural context -> cultural background and circumstances
performance practice -> performance action and context
physiological effect -> physiological response
emotional effect -> psychological/psychoanalytic response
popularity -> commercial success

Most currently-offered music programs do address at least three of the above-mentioned musical 'identities' (notational, historical, and contextual) through relevant courses (theory, history, and world music), but in ways that, most of the time, do not succeed in communicating the benefit of one approach to the understanding of the other. They are therefore not only essentially behavioral but also succumb to the pitfalls of atomism and universalism (more on this later).

Behaviorism models the relationship between message (input) and meaning (output) in terms of an isomorphic mapping (below), with an input (e.g. a musical score or sound waves with specific physical qualities) entering a translator (e.g. our ear or brain), where the "...wondrous transformation of matter to mind ..." occurs (Seashore, 1938), giving us a single and predictable output [e.g. a musical piece with specific meaningful and emotional qualities, isomorphically related to the input’s graphic (score) or physical (wave) qualities]. Whenever such an isomorphic relationship is not observed, the blame falls on the ‘translator’ (listener).

As described in the following section, this approach is more common than it may seem. It is behind all attempts to explain musical meaning and emotion in terms of cause-and-effect relationships between music's notational, sonic, or sonic organization qualities and our responses to them.

Behavioral approaches to music and productivity, consumption, intellect, and emotion (Davies, 1978 - Gorbman, 1987 - Radocy and Boyle, 1988)

The resources listed in the title review and either support or critique the claims of a cause-effect relationship between music and productivity, consumption, or emotions. Music's effect on our general intellectual capacity is a larger issue and will only be addressed here in passing.

Productivity: "introducing music in the workplace has a direct and predictable effect on the workers' output."

Several studies were carried out during the 1930s and 1940s to try to prove the above statement. A typical study (e.g. Kerr, 1954; in Davies, 1978: 20) would include mainly three independent variables (i.e. variables manipulated by the experimenters): i) type of music (e.g. popular / semi-classical), ii) time of presentation (e.g. day / night), iii) presentation format (continuous or intermittent) and duration (total time of presentation). According to several such studies, it appeared that productivity in repetitive tasks, such as assembly-line production, increases by ~10% (on average) when popular music is introduced in the workplace for ~15% of the time during the day and for ~50% of the time during the night.

Such results have been largely dismissed because of the great variability of the data. Results were spread over such a large range that the 'average' could not give an adequate representation of the observations. As Davies (1978) points out, two very important factors are usually ignored in the interpretation of such studies:

 the relationship of the listener to a given piece of music (essentially what you know), and

 general contextual issues (general state of the worker/listener, type of task, etc.)

He therefore concludes that: "...Studies like these [introduce models that] have little scientific value, [i.e. they do not have enough predictive power to base firm decisions on] but appear in large part to be responsible for the 'music in industry' myth." (Davies, 1978: 21).
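Davies's point about averages masking variability can be made concrete with a small sketch. The numbers below are invented for illustration (they are not data from Kerr 1954 or any actual study): both samples of productivity change have the same ~10% mean, but only one of them supports any firm conclusion.

```python
import statistics

# Two hypothetical samples of productivity change (%) across workplaces.
# Invented numbers, for illustration only.
consistent = [9, 10, 10, 11, 10, 10, 11, 9]
scattered = [-25, 40, 3, 55, -18, 32, -7, 0]

for label, data in [("consistent", consistent), ("scattered", scattered)]:
    mean = statistics.mean(data)  # both come out to +10.0%
    sd = statistics.stdev(data)   # but the spreads differ enormously
    print(f"{label}: mean = {mean:+.1f}%, std dev = {sd:.1f}%")
```

Reporting only the +10% mean for the second sample would be exactly the kind of claim Davies dismisses: the spread is so large that the average does not represent the observations.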

Consumption: "spending attitudes of consumers can be directly manipulated through music."

Muzak: N.Y. company that sells music programming, claiming to have a scientific psychological understanding of the ways music can affect people in the workplace and in commercial environments. The company creates its own musical pieces based on the idea that music you do not know does not attract your attention (allowing you to concentrate on other tasks/activities) but can still act on you subliminally (influencing your performance on those other tasks/activities).

Traditionally, Muzak pieces were always instrumental, familiar-sounding, but not truly recognizable. They were arranged in such a way as to have gradual increases and sudden drops in stimulus intensity (: speed/tempo, density of orchestration, loudness levels, etc.), repeating in ~15-minute-long intervals. The term Muzak is often used generically to describe background music for the workplace and elsewhere (e.g. in films), supposed to have a causal/predictable/isomorphic relationship to the resulting behavior of the listeners, at a subconscious/subliminal level.

Muzak lost business in the mid-1980s and was forced to rethink/adjust its initial formula. The company experienced a revival in the early 2000s, only to file for bankruptcy in 02/2009. Currently, Muzak focuses on general branding and corporate communication solutions (http://www.muzak.com).

For some popular information on Muzak see the following links:

http://en.wikipedia.org/wiki/Muzak
http://www.bbc.co.uk/dna/h2g2/A497856 (overview and brief history)
http://www.nonoise.org/library/muzak/muzak.htm (attack by the National Institute for the Deaf)
http://serendip.brynmawr.edu/bb/neuro/neuro04/web2/dyi.html (student paper)

The idea that music or any other "stimulus" can act as a psychological or physiological 'pill' and help us achieve results that usually require hard work (e.g. improving our intellectual capacity, overcoming addictions, becoming better social beings, reaching emotional, mental, and physical health, etc.) is clearly very attractive. It will keep tempting us enough to support solutions such as those offered by Muzak in the 1950s, in spite of practical experience repeatedly reminding us that the simple input-output model is largely inadequate in explaining, predicting, and supporting human behavior and potential.

Intellect: "listening to Mozart makes you smarter."

Recent claims that listening to Mozart makes you smarter have received a great deal of attention. More specifically, it has been claimed that students listening to Mozart prior to taking a specific set of spatio-temporal tests perform better than those who do not. The generalization of the results of such studies into "music makes you smarter" statements has been criticized extensively by researchers as (at least) irresponsible, and does not reflect the opinions of the authors of the study that initiated the discussion. Moreover, most related commentary fails to address questions such as: a) what specifically in Mozart's music is responsible for the observed change in behavior? b) would listening to some other type of music, or engaging in another type of activity altogether, be followed by similar changes in behavior? A well-written student paper by Gorman reviews several research studies addressing the issue.

Emotions: "music is the language of emotions."

Question: How can we define emotion in a way that does justice to its variability and multidimensionality? See the "Mood Cube" discussed in class (from Kendall, 2005b: 9.4)

Related questions:

 are all emotions simply magnitude variations of a single emotion?

 if not, what differentiates emotions?

 do "emotions," "feelings," and "moods" describe the same thing?

 does the experience of music give rise to the same emotions as the experience of other aspects of everyday life?

 if not, can we construct a meaningful link between music and 'mood'?

Since the 19th century there have been numerous studies trying to demonstrate music's effect on us through examinations of how it may change various physiological responses, such as

 heart rate,

 respiration rate,

 GSR (galvanic skin response - measuring the electric conductance of the skin),

 EEG (electroencephalography - measuring brain signals),

 electromyography (measuring muscle response),

 blood pressure,

 etc.

In 1978, Dainow reanalyzed all available physiological data and showed that they do not reveal any consistent pattern/relationship among changes in specific sonic parameters, physiological states, and emotional states. As one of the few exceptions we can mention the electromyography studies by W. Sears in the 1950s. Sears showed that for major/fast background music there was a relative increase in muscle tension, with subjects subconsciously assuming an erect posture, while for minor/slow background music there was a relative decrease in muscle tension, with subjects subconsciously assuming a more slumping posture. Still, as was the case with the 'music and productivity' studies, what this study indicates is that some form of knowing is involved during these changes, mediating the external stimulus, making it meaningful, and supporting the observed physiological response.

Actually, as Schachter and Singer (1962) demonstrated, the behavior of an individual at any given time depends on his/her mental set as much as (if not more than) on his/her physiological state. (Mental set: the set of our mental predispositions at a given time.) Study participants were given a drug or a placebo and placed in a room with an actor who had supposedly taken the same substance and who was instructed to act either drugged or normal. Participants behaved drugged or normal based more on the attitude of the actor than on the administered substance.

Cognitive approach to the question of meaning and emotion (musical or otherwise)

The cause-effect approach of the behavioral, machine-like view of understanding is inadequate at many levels, calling for a different model. The graph below is a simplified outline of such a model, replacing the singular translator with a system of two interacting and interdependent modes of knowing: explicit and implicit. The two hypothesized modes of knowing support new understandings, while being supported by existing understandings.

The main point of the diagram is that there is no simple one-to-one relationship between input (external world) and output (understanding). There are two interacting levels of knowledge involved, with the additional complication that the implicit level deals with knowledge units that we do not have direct access to. At the explicit level, we have rules that are verbalizable, such as rules of grammar, rules outlining what is and is not allowed in a game of chess, rules of music theory, etc. This level deals with symbols. At the implicit level, we have rules that are not verbalizable, such as the motor rules guiding the performance of an instrument, the communication of musical expression, etc. This level deals with knowledge units (meta-symbols / schemata) that we may not have direct access to, but which are crucial to our awareness and actions.

Hypothesized model of the relationship between meaning and implicit processes

1. An external input is first partitioned into events. This partitioning (or 'chunking') involves the identification of boundaries, a process assisted by contrasts (using the term 'contrast' points to the need to address issues of similarity versus dissimilarity, identity, etc.).

2. Event partitioning is followed by event categorization into meaningfully grouped units. Categorization is based on existing schemata, which may be innate or acquired through experience. If for example we suppose that events proceed in a continuum (e.g. frequency: from low to high), categorization effectuates a data reduction by breaking (or 'quantizing') this continuum into bounded units, within which all events represent (and are represented by) a single category. (e.g. melodic motion is not a motion on a continuum from low to high but a motion along discrete note categories.)

3. Chunking and categorization allow for the synthesis of patterns that make the input data meaningful and help us determine a 'most likely outcome'. The outlined steps occur implicitly, with little or no conscious control.
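Step 2's categorization-as-data-reduction can be sketched in code: the frequency continuum is collapsed onto discrete note categories by mapping any frequency to its nearest equal-tempered pitch. The helper name `nearest_note` and the use of MIDI note numbering (A4 = 69) are conventions assumed for this sketch, not part of the module's text.

```python
import math

# Categorization as data reduction: collapse a frequency continuum
# onto discrete note categories (nearest 12-tone equal-tempered pitch,
# with A4 = 440 Hz as reference).
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F",
              "F#", "G", "G#", "A", "A#", "B"]

def nearest_note(freq_hz):
    """Return the nearest equal-tempered note name with its octave."""
    # Distance from A4 in (fractional) semitones, then round: the
    # rounding is the 'quantizing' of the continuum into categories.
    semitones_from_a4 = 12 * math.log2(freq_hz / 440.0)
    midi = round(69 + semitones_from_a4)  # MIDI numbering, A4 = 69
    return NOTE_NAMES[midi % 12] + str(midi // 12 - 1)

# Slightly different frequencies land in the same category:
print(nearest_note(438.0), nearest_note(442.0))  # A4 A4
```

In this sense melodic motion is heard not as motion along a continuum but as motion between the discrete categories the function returns.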

Consciousness of the external world (environment) is aware not of the external world but only of what our implicit knowledge makes available to us from the external world. In other words, our explicit understanding of and communication in and about the external world is always mediated by implicit rules we do not have access to. Awareness of the world is the result of synthesis. Input from the external world is mediated at the implicit level before it reaches our explicit awareness and is processed again at both the explicit and implicit levels before it can be expressed (output) as a response, reaction, or understanding. It is processes at the implicit level that allow us to predict and that give rise to expectations, based on the general assumption that, in organized systems, not all events are equally likely.

Situations where there is a conflict between explicit and implicit rules help examine their interaction and interrelationship, and support the transformation of implicit rules into explicit.

The 'perspective' experiment (a circle diminishing in size while the intensity of the accompanying sound increases, decreases, or stays the same) provides an example of conflict between explicit rules (constant intensity corresponds to constant loudness) and implicit rules (in a context that calls upon our experience of perspective, constant intensity corresponds to increasing, constant, or decreasing perceived loudness, depending on whether the source of the sound appears to move towards us, stand still, or move away from us, respectively). Such conflicts are usually resolved through the transformation of implicit knowledge into explicit statements that permit communication and manipulation of what eventually becomes a set of explicit rules.

Explicit and implicit rules may be shared or unshared during communication. Introducing the idea of unshared explicit and implicit rules to understanding may help explain how we can have different valid interpretations of the same event, without resorting to whim or personal preference.

Distinctions such as explicit/implicit are not rigid and do not imply clear-cut boundaries. There is a constant interaction between explicit and implicit rules. Which ones will operate at any given time depends on previous knowledge, experience, and mental set, while whether an event will be processed implicitly or explicitly depends on motivation, purpose, context, and therefore attention. It is this interaction between implicit/explicit knowledge and the external world that may explain how and why the whole is usually different from the sum of its parts, a statement that holds true not only for music and art, but for all experience.

To the right you first see a dot bouncing from left to right, followed by a dot moving in the same direction along a straight line. When both moving dots are presented together, the motion will most likely not be described as a combination of the individual motions, due to context.

Research that addresses such issues often makes use of ideas from the areas of Cognitivism and Gestalt psychology. "Gestalt" is German for "form" or "pattern." In the context of Gestalt psychology, the term means "unified whole" or "configuration." We return to this in Part II of this module.

Gestalt theory hypothesizes a series of "laws" or "rules" of perception. These rules identify and model basic implicit processes involved in the data reduction that allows us to collapse the complexity of the world into categories/schemata. It is thanks to such rules that we are able to comprehend our environment and make predictions. Gestalt laws are therefore best understood as rules of prediction. In addition, Gestalt rules are deduced from rather than prescribe behavior. They are based on previous learning and experience and allow us to construct valid models of reality.

Rephrasing the thesis stated at the top of the page, meaning is not a 'property' of an object or event (whether musical or otherwise) that is transmitted to a receiver (e.g. a listener). Rather, it is a quality that we, as observers, configure during our encounter with any given object or event, and is therefore a function of our relationship with it.

Approaches to musical meaning within music scholarship

Two main philosophical approaches to musical meaning (from Meyer, 1956)

REFERENTIALIST: "musical meanings refer to the extramusical world of concepts, actions, emotional states, and character." ABSOLUTIST: "musical meaning lies exclusively within the context of the work itself."

"Referential meanings” are meanings related to the context surrounding a musical piece or performance, such as a) the life of a work's composer, b) the general socio-cultural situation contemporary to the work's production/reception, c) the function of a piece (wedding/dance/mourning songs, etc.), d) its libretto/lyrics if any, e) the location of a performance (church, festival), etc. Referential meanings already attached to a piece within a given context can also carry over to other contexts.

While what we mean by "referential meanings" is relatively clear, the absolutist approach is not as easy to grasp.

The Absolutist approach is represented by two main positions: FORMALIST: "the meaning of music lies in the perception and understanding of the musical relationships set forth in the work of art. Musical meaning is primarily intellectual and self-referential." EXPRESSIONIST: "the relationships outlined in the Formalist position are in some sense also capable of exciting feelings and emotions in the listener."

But how can musical relationships that are supposed to refer to nothing but themselves give rise to meanings and emotions that are relevant to us?

Problems with the approaches to date

Hedonism: the confusion of aesthetic experience with sensuously pleasing experience -- that is, the idea that musical "beauty" or "goodness" simply boils down to pleasure. "[A] Beethoven symphony," says Meyer, "is not a kind of musical banana split, a matter of purely sensuous enjoyment."

Atomism: the tendency to explain music by reducing it to "a succession of separable, discrete sounds and sound complexes."

Universalism: the error of regarding a given musical organization as “good for all times and all places.” “Western music is not universal, natural, or God-given but a product of learning and experience.” (Meyer, 1956: 6).

The question "how can musical relationships give rise to meaning and emotion?" points to the earlier statement that meanings and emotions associated with objects and events are not based on the isolated objects/events but, rather, on our relationship (as observers) to objects/events. What do we mean by that?

As Meyer notes (1956: 20), "The sensation of falling through space, unconditioned by any belief or knowledge as to the ultimate outcome, will, for instance, arouse highly unpleasant emotions. Yet a similar fall experienced as a parachute jump in an amusement park may, because of our belief in the presence of control and in the nature of the resolution, prove most pleasing." “…there are no pleasant or unpleasant emotions. There are only pleasant or unpleasant emotional experiences."

At the same time we need to differentiate between emotion felt and designated emotion. The latter is largely based on cultural norms and has communication as its ultimate goal. It cannot therefore be understood as isomorphic to emotion felt.

REFERENCES (in addition to the Module's readings)

Davies, J. B. (1978). The Psychology of Music. Stanford, CA: Stanford University Press.

Gorbman C. (1987). Unheard Melodies: Narrative Film Music. London: BFI Publishing.

Gorman, A. (1999). The "Mozart effect": hard science or hype? [http://l3d.cs.colorado.edu/~agorman/pdf/mozart-effect-survey.pdf]

Radocy, R. E. and Boyle, J. D. (1997). Psychological Foundations of Musical Behavior (3rd edition). Springfield, IL: Charles C. Thomas Publishing.

Schachter, S. and Singer, J. (1962). “Cognitive, social, and physiological determinants of emotion,” Psychological Review, 69: 379–99.

Seashore, C. E. (1938). Psychology of Music. New York: Dover Publications, Inc.


43-2410 - Aesthetics of the Motion Picture Soundtrack

MODULE 07: (Film) Music, Meaning, Emotion, Communication Part II

t o p i c s

'Conflict' Theory of Emotion - Musical Expectations

Gestalt Principles of Perception (implicit rules of prediction) [opens in a new window]

Musical Expectations - Musical Meanings

Communication of Musical Meaning and Emotion in Film

Conflict Theory of Emotion

Introduction

(Reminder:) Meanings and emotions associated with objects and events are not based on the isolated objects/events but, rather, on our relationship (as observers) to objects/events. What do we mean by that?

For an answer we will turn to Dewey (1894) and MacCurdy (early 20th century, in Meyer, 1956) and their claim that "[e]motion or affect is aroused when a tendency to respond is arrested or inhibited." This kind of theory, also termed the ‘conflict theory of emotion,’ addresses whether and how expectations are formed, delayed, inhibited, and fulfilled.

For many, a controlled/'safe' game of expectations is seen as a defining aspect of art, separating it from all other experience, where expectations and resolutions, or lack thereof, are haphazard (at least as far as we can tell) and are aspects of life we have little control over.

'Conflict' Theory of Emotion - Expectancy Theory and Music

Meyer (1956) applied the above theories to music, suggesting that musical meaning and affect (emotion felt) arise when an expectation (a tendency to respond) activated by musical or contextual organization is temporarily inhibited or permanently blocked. Different levels of expectation and inhibition are experienced as ambiguity, suspense, or surprise, leading eventually to some kind of resolution that gives rise to meaning and emotion.

Music is seen as a game of expectations, a playful manipulation of the inherent human tendency to identify patterns, exercise control, and achieve certainty, with its meaning and value depending on the success of this very game. In music, the term 'conflict' refers to patterns of 'tension' and 'release;' to alternations between uncertainty (high cognitive load) and redundancy (low cognitive load) that build expectations and make 'expectation games' possible.

The term 'a-referential' in our definition of music highlights one of the main aspects that distinguish music from language. Music can be, and often is, a-referential (i.e. without a referent outside itself). While the meaning of language is its referent (i.e. an idea, object, or event that the sounds of language point to - semantic meaning), the meaning of music has the potential to be solely related to its temporal organization; to the way its sonic patterning unfolds in time (syntactical meaning). Musical patterns of tension and release constitute music's "embodied meaning", a meaning that arises from within and is directly related to the temporal organization of sound and silence that music essentially is. This is reflected in the fact that, while we describe a sentence by outlining its subject, we often describe a musical phrase by outlining its organization. Therefore, when listening to music, meaning arises out of our efforts to organize a series of sound events into patterns that effect data reduction, reduce uncertainty, and "make sense."

Our discussion does not imply that there are no other types of meaning in music. Referential musical meanings (discussed throughout the course) are both common and important, and can be traced with relative ease to sources other than sonic organization. Our main concern, however, is to a) explain the potential of music to be meaningful through its organization, without necessarily having a specific referent outside itself and b) understand how/why some referential meanings in music are more successful and 'intuitive' than others.

Expectations activated by musical organization

a) Expectations based on cultural or other associations that are often arbitrary in nature (in terms of their relationship to sonic structures).

b) Expectations based on explicit historical and formal musical knowledge: temporal and pitch contours as general stylistic features (e.g. typical rhythmic patterns or melodic lines) or as specific features established during the unfolding of a musical piece in time.

Global features: features that evolve throughout a piece of music and characterize/schematize it as a whole (e.g. major/minor tonalities, overall rhythmic features, overall mood, etc.).

Local features: germinal, small-scale musical events that serve as building blocks / points of departure for the construction/emergence of 43-2410 - Module 7: Part II Page 3 of 6

global features (e.g. signature intervals, rhythmic gestures, contours, etc.).

c) Expectations based on implicit knowledge of the structural organization of music in time (e.g. gestalt principles of organization, outlined below). Implicit rules allow for prediction based on the fact that, in an organized system, not all events are equally likely. The Gestalt principles of perception outline cognitive strategies that help us organize the continuous stimuli bombarding our senses (whether musical or otherwise) into groups and patterns that set up expectations of how observations will progress and, consequently, what these observations mean.

Gestalt principles of perception (implicit rules used in perceptual prediction) - [opens on a new page]

Musical Expectations - Musical Meanings (based on Meyer, 1956)

As mentioned above, the Gestalt principles of perception outline cognitive strategies that help us organize the continuous stimuli bombarding our senses (whether musical or otherwise) into groups and patterns that set up expectations of how observations will progress and, consequently, what these observations mean.

Dowling & Harwood (1986) refer to these expectations and to expectations related to explicit knowledge and rules of organization when they say that: "Perceptual learning throughout a listener's life, as well as during listening to a specific piece of music, leads to the formation of structural schemata (i.e. a series of implicit and explicit rules) that embody expectancies which novel music violates. Those violations trigger autonomic nervous system arousal that activates further cognitive activity in a search for meaning. The meaning, when found, merges with the arousal in an experienced emotion."

Musical meanings and emotions related to expectations activated by musical organization

a) Referential: Meanings/emotions that are arbitrarily associated with musical features, based on temporal contiguity and/or reinforcement. Behavioral approaches are suited to examining only this type of musical meaning and emotion. [e.g. any national anthem and the relevant nationality; The Beatles' song "Hello, Goodbye" and the Target stores; Davies's notion of DTPOT (acronym for "darling, they're playing our tune"); major/minor tonalities and happy/sad connotations, etc.]

b) Formalist: Meanings arising through explicit knowledge of musical structure (music theory/analysis) and context (music history). Many music theorists would argue that it is here that music's 'true' meaning lies.

c) Expressionist: "Embodied" musical meanings that are based on music's temporal organization of sound and silence and arise from patterns of tension and release within the music's unfolding in time.

Meyer's taxonomy of musical meaning owes to a semiotic model by Charles S. Peirce (whose writings were collected and published in the 1940s) that defined 'meaning' in terms of signs. The demarcation principles underlying the above taxonomy are not consistent, which reduces its applicability and usefulness to research. Kendall (2005a) modified this taxonomy, as outlined in the following section.

Communication of Musical Meaning and Emotion in Film

Roger Kendall (2005a) borrows the concepts of 'referentialism' and 'expressionism' from Meyer and links them to Peirce's concepts of 'index' and 'icon'. These two concepts, along with what Kendall calls 'syntax', are understood not as discrete categories but as flexible 'stages' on a continuum of referentiality that has referentialism at one end and expressionism at the other. Music, in its game with expectations, moves fluidly along this continuum.

The three main 'stages' in this continuum are:

a) Index: Related to Meyer's 'referential' meaning. It denotes an arbitrary association between signifier and signified (i.e. between a symbol and its referent). For example, lexical units are largely indexes. There is no relationship between the word 'cat' (signifier) and the animal it points to (signified) other than arbitrary association / convention. Index therefore denotes an arbitrary association between the musical message and an extramusical meaning.

Examples include national anthems, the idea of leitmotif or idée fixe in the manner of Berlioz or Wagner, etc. Many examples of musical indexes can be found in film music. The key aspect of indexes is that the connection between the music and the external visual concept or idea is entirely arbitrary (i.e. not related to music's internal construction) and simply a matter of learned association.

Film clips (discussed in class): Casablanca: all five scenes; Pink Panther; Kill Bill 1: scenes 1, 3, 4, & 7; Kill Bill 2: scene 3; The Wizard of Oz: scenes 1 & 2; 2001: scenes 2 & 4.

Even musical meanings that seem natural to us may actually be dependent on previous learning and experience and may have been initiated either arbitrarily or for extramusical reasons (as, for example, is the case with musical meanings related to the organization of scales, tonal implications and patterns of tension and release, expectations for resolution, etc.).

b) Icon: Partly referential. It denotes a connection across frames of reference suggested by common patterns or forms (e.g. as in naming a tree 'weeping willow'). In the case of music, sound patterns/forms (in terms of pitch, dynamics, tempo, timbre, etc.) can suggest connections to iconic features in other frames of reference. Unlike index, icons outline connections that are not arbitrary but are based on some sort of resemblance in terms of form, shape, pattern, or motion, transferred from one domain to another.

Recognition of iconic features can be based on both explicit and implicit rules, with the following three 'iconic archetypes/prototypes' being part of implicit knowledge (at least in Western European culture): a) rising pattern, b) falling pattern, c) arch.

Film clips (discussed in class): The Wizard of Oz: scenes 3 & 4; Shrek; E.T.; Finding Nemo: scene 3; 2001: scene 3.

c) Syntax: Purely a-referential, related to Meyer's 'expressionist' meaning. It denotes meaning that arises out of our efforts to temporally organize a series of events into a coherent whole by identifying boundaries, categories, and structures. Syntactical meanings precede indexical and iconic ones. Before any arbitrary or formal association becomes possible, we must perceptually identify boundaries and uncover/impose some sort of organization to any incoming information.

In audio/visual composites, the interaction between musical and visual accent structures (as found in some animation, dance sequences, etc., where we essentially deal with patterns of contrast in the visual and auditory domains) is the clearest expression of syntactical meaning. The accents do not suggest forms, shapes, or motion as in icon, nor denote arbitrary associations as in index. Syntactical meanings arise from pure synchronizations among time-ordered elements.

Film clips (discussed in class): West Side Story: scenes 2 & 3; Monsters Inc.; Gladiator: scene 2; Kill Bill 2: scenes 1 & 2; Matrix: scenes 1 & 4.

Iconic and syntactical features of music are related to what S. Langer refers to as 'physiognomic' features. According to Langer's theory of emotion, music has

physiognomic emotional qualities. Its contours (in the case of icons) or its patterns of tension and release (in the case of syntax) act as signifiers, suggesting visual contours or emotional patterns of tension and release, respectively (the signified). Most icons also have syntactical features. Before a contour resemblance is determined between the visual and aural domains (i.e. icon), there has to be some sort of accent-structure matching that supports it (i.e. syntax). The key difference is that icons are linked to general contours rather than precise accent-pattern matching and are therefore more likely to occur in larger-scale sonic/visual pairings (e.g. at the level of the musical phrase rather than the individual note).

In most cases, multiple types of meaning construction are operating in single events, resulting in hybrid stages along the model's referentiality continuum.

Film clips (discussed in class): Finding Nemo: scene 1; Matrix: scenes 2, 3, & 5; Gladiator: scenes 1 & 3; Kill Bill 1: scenes 2 & 5.

[Optional] Read through an introduction to ideas on the topic of music's a-referential potential from within Philosophy and Phenomenology.

REFERENCES

Dowling, W. J., & Harwood, D. L. (1986). Music Cognition. San Diego: Academic Press.

Kendall, R. A. (2005a). Music and video iconicity: theory and experimental design. J. Physiol. Anthropol. Appl. Human Sci., 24(1):143-149. http://www.jstage.jst.go.jp/article/jpa/24/1/143/_pdf [free online access]

Meyer, L. B. (1956). Emotion and Meaning in Music. Chicago: University of Chicago Press.



Module 7 - Part II Supplement Page 1 of 8

43-2410 - Aesthetics of the Motion Picture Soundtrack

MODULE 07: (Film) Music, Meaning, Emotion, Communication Part II Supplement

Gestalt principles / rules

Proximity

Items placed in close proximity tend to be grouped together as a unit.

Grouping in rows versus grouping in columns

In music, this principle usually operates in terms of frequency and temporal proximity. Miller & Heise (early 1950s) showed that rapid alternation between sequential pitches (a trill) can result in the sequence being split apart (melodic fission / stream segregation / streaming) if the musical interval is larger than approximately a minor third. When the interval is smaller or the alternation slower, the sequence is perceived as a single unit (coherence / fusion). Van Noorden (1975, PhD thesis, Eindhoven University of Technology) conducted extended studies examining temporal coherence and fission. In general, the slower the trill, the larger the frequency difference necessary for fission to occur.

Fission example I (Sonata for Alto Flute by Telemann, 18th century): example of fission (occasional break of the flute's melodic line into two separate melodies) resulting from rapid alternation between pitches separated by intervals larger than a minor 3rd.
Fission example II (Houtsma et al., 1987).
Interleaved Melodies I (Dowling, 1973): can you tell which two melodies have been interleaved?
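If you would like to hear the fission/fusion contrast for yourself, trill stimuli of this kind are easy to synthesize. The sketch below is illustrative only: the function name, interval sizes, and trill rates are chosen for demonstration and are not Miller & Heise's or Van Noorden's exact stimuli. It alternates two sine tones at a given interval and rate; a fast trill across a wide interval tends to split into two streams, while a narrow interval at the same rate fuses into one line.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def trill(base_freq, semitones, tone_dur, n_tones):
    """Alternate between two sine tones separated by `semitones`.

    Rapid alternation across an interval wider than roughly a minor
    third (3 semitones) tends to be heard as two streams (fission);
    a narrower interval, or a slower rate, fuses into one line.
    """
    upper = base_freq * 2 ** (semitones / 12)   # equal-tempered interval
    n = int(SR * tone_dur)
    t = np.arange(n) / SR
    low = np.sin(2 * np.pi * base_freq * t)
    high = np.sin(2 * np.pi * upper * t)
    # Alternate low/high tones n_tones times.
    return np.concatenate([low if i % 2 == 0 else high
                           for i in range(n_tones)])

# Fast trill (20 notes/s) across a tritone: likely heard as two streams.
fission_demo = trill(440.0, 6, 0.05, 40)
# Same rate, a major second apart: likely heard as one coherent trill.
fusion_demo = trill(440.0, 2, 0.05, 40)
```

The arrays can be written to disk (e.g. with `scipy.io.wavfile.write`) and auditioned; lowering the rate or shrinking the interval lets you find your own coherence boundary, as in Van Noorden's experiments.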


Interleaved Melodies II (Dowling, 1973) How about now?

Dowling's experiments with interleaved melodies also indicate that perceptual salience (due to previous knowledge) can direct attention in a way that may override the proximity principle. Listeners are able to recover two known interleaved melodies (fission), even when the melodies are presented at a frequency proximity that supports perceptual merging into a single melody (fusion). This is a good example of our ability to switch between explicit and implicit rules (explicit: previous knowledge - implicit: proximity principle) and reassemble the two melodies out of a single pitch sequence. Attention can therefore be seen as a manifestation of the claimed interaction between explicit and implicit rules, an interaction that helps us parse incoming information.

Similarity

Similar items tend to be grouped together as a unit.

Grouping by shape.

Operationally defining similarity is non-trivial. This is reflected in the fact that similarity-groupings that might appear simple to us may present an almost impossible task to a computer. In music, this principle is most often connected to timbre, but may also operate in terms of dynamics (perceptual grouping of notes into melodic lines based on loudness) or tonality (perceptual grouping of notes into melodic lines based on key). Sequences of pitches performed by different instruments will most likely be grouped according to instrument.

"Loops" (by R. Erickson - UCSD) is an example of competition between the proximity and similarity principles. Although the pitch sequence incorporates large frequency leaps, it does not involve rapid alternation between widely separated pitches (satisfying the proximity principle). At the same time, each pitch is assigned to a different timbre (violating the similarity principle). Perceptual organization according to proximity results in different pitch groupings than organization according to similarity.


Good continuation (common direction)

We tend to continue contours whenever the elements of a pattern establish an implied direction. The related principle of "common direction" makes a similar claim in terms of motion: Elements sharing motion attributes (direction, speed, or both) are usually grouped together.

A shape or pattern will (other things being equal) tend to continue in its initial mode of operation. Among other things, this principle may help account for our ability to perceive discrete visual or aural stimuli as continuous.

The principle of good continuation is behind the success of digital sound-editing processes that correct/fill in/recreate missing audio data based on implications of existing data. In music, this principle is reflected in our tendency to follow separate melodies even when they cross each other. At the same time, however, if both melodies are performed by the same instrument and in the same register, the similarity (timbre) and proximity (pitch) principles might override the good continuation principle (melodic motion).


The two crossing melodic lines in (A) are perceived as staying within a perfect-fourth range (see B). Studies (Deutsch, 1975) indicate that the grouping in (B) will persist even if panning (i.e. left vs. right channel in stereo presentation) alternates for each note in (A) (see below) (in Butler, 1992: 108; Figure 7-5).

Listen to the Deutsch example


Even when the panning alternates for each note, listeners perceptually reorganize the stimulus and assign each melodic line in (B) to a single ear (left/right). This perception persists even when both panning and timbre alternate for each scale note in (A), with each melodic line in (B) being assigned to a single ear and a single timbre. This example illustrates the use of implicit rules to process incoming information and send a synthesized version of reality to conscious awareness (explicit knowledge).
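A minimal sketch of this class of stimulus can clarify what "panning alternates for each note" means in practice. The note frequencies below are hypothetical placeholders, not Deutsch's exact scale-illusion material: two interleaved lines (one descending, one ascending) are sent note-by-note to alternating channels, which is the physical signal listeners then perceptually regroup by pitch proximity.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def alternating_pan(freqs, tone_dur=0.25):
    """Build a stereo signal in which successive notes alternate
    between the left and right channels (after Deutsch, 1975).

    Listeners typically regroup the notes by pitch proximity,
    hearing one smooth line per ear rather than the actual
    note-by-note channel hopping.
    """
    n = int(SR * tone_dur)
    t = np.arange(n) / SR
    left, right = [], []
    for i, f in enumerate(freqs):
        tone = np.sin(2 * np.pi * f * t)
        silence = np.zeros(n)
        if i % 2 == 0:          # even-index notes go to the left ear
            left.append(tone); right.append(silence)
        else:                   # odd-index notes go to the right ear
            left.append(silence); right.append(tone)
    return np.stack([np.concatenate(left), np.concatenate(right)], axis=1)

# Interleaved lines (illustrative): even indices descend, odd ascend.
notes = [523.3, 293.7, 440.0, 349.2, 392.0, 392.0, 349.2, 440.0]
stereo = alternating_pan(notes)  # shape: (samples, 2 channels)
```

Auditioned over headphones, the physically hopping signal tends to be heard as two conjunct melodies, one per ear; the code makes it easy to verify that the actual waveform really does alternate channels every note.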

Listen to another example (Houtsma et al., 1987), involving a high and a low tone (you must use headphones). What do you hear? Two tones bouncing from left to right? One tone steady while the other bounces? Two intermittent tones, one in each ear?

If the timbre difference between the left and right channels becomes extreme, the illusion breaks down and listeners cannot synthesize the conjunct melodies of (B). Instead, they hear what is actually happening: constant leaps of pitch (C) and timbre in each ear. In other words, after a dissimilarity threshold between the two lines in (C) has been crossed, the similarity principle overrides the proximity principle, and we shift from one implicit rule to another and from synthetic to analytic listening. Whether an actual shift from implicit to explicit rules will occur depends strongly on context and existing knowledge. For example, although non-musicians and Western-trained musicians usually hear the conjunct melodies in (B), musicians skilled in the 12-tone system often hear the actual pitch leaps, even without the help of timbral dissimilarity. Having developed, through training and experience, explicit rules that can deal with the disjunct melodies presented, they can attend to them by listening analytically.

In 1974, Dannenbring demonstrated the good continuation principle using stimuli that were interrupted at regular time intervals, with noise filling the gaps. Subjects resolved the ambiguity in the stimuli by reporting that the fragmented stimuli had no gaps, perceiving them as continuing under the noise. Listen to two audio examples of noise bursts masking gaps in a steady or frequency modulated tone (after Dannenbring, 1974).
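A continuity stimulus of the Dannenbring type can also be sketched directly. The parameter values below (tone frequency, segment and gap durations, noise level) are illustrative choices, not Dannenbring's published settings: a tone is chopped into fragments, and the gaps are either left silent or filled with louder noise.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def continuity_stimulus(freq=440.0, seg=0.3, gap=0.1, cycles=4, fill=True):
    """Tone fragments separated by gaps; gaps are either silent or
    filled with louder white noise (after Dannenbring, 1974).

    With noise in the gaps, listeners typically report an unbroken
    tone continuing 'under' the noise (good continuation).
    """
    n_seg, n_gap = int(SR * seg), int(SR * gap)
    t = np.arange(n_seg) / SR
    tone = 0.5 * np.sin(2 * np.pi * freq * t)   # quieter than the noise
    parts = []
    for _ in range(cycles):
        parts.append(tone)
        if fill:
            # Fresh noise burst per gap, louder than the tone.
            parts.append(0.8 * np.random.uniform(-1, 1, n_gap))
        else:
            parts.append(np.zeros(n_gap))       # audible silent gap
    return np.concatenate(parts)

masked = continuity_stimulus(fill=True)     # tone tends to sound continuous
unmasked = continuity_stimulus(fill=False)  # gaps are clearly audible
```

Comparing the two versions by ear (again, e.g. via `scipy.io.wavfile.write`) reproduces the basic effect: identical tone fragments, but only the noise-filled version is heard as one continuous tone.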

Closure

We tend to enclose a space by completing a contour and ignoring gaps in a figure.


In music this can be connected to tonality: the idea that certain pitches within a system of pitches (scales) serve as focal points for a piece (or section of a piece) of music. The concept of 'closure' is closely connected to that of 'expectation.' A small number of events may imply a much larger structure, providing we have enough previous knowledge of the larger structure in order to synthesize it. Closure is, therefore, largely culturally defined. As Meyer notes (1956: 131), "standard material within a given musical tradition establish a norm of melodic completeness. When the practiced or cultivated listener becomes aware that one melodic step has been bypassed, s/he expects (even if subconsciously) that the missing tone will appear and the structural gaps created will eventually be filled in."

Prägnanz (good figure)

We tend to organize stimuli into figures that are as 'good' as possible. In this context, 'good' can mean symmetrical, simple, regular and, most importantly, familiar. Both the closure and good continuation principles are related to the prägnanz principle. According to Koffka (1935: 138), of the possible ways in which a percept can be organized, the one that actually occurs will possess "the best, simplest and most stable shape." According to Meyer (1956), the law of Prägnanz states that "psychological organization will always be as 'good' as the prevailing conditions allow" (p. 86), and "... the mind, governed by the law of Prägnanz, is continually striving for completeness, stability, and rest" (p. 128).

Rather than appearing as a set of several complicated or distorted shapes, the figures appear simple based on experience (e.g. a square and a triangle overlapping, and the same image in different 3D orientations).

In music this is illustrated by the fact that, no matter how complex or un-patterned a piece may be, some sort of pattern will be imposed by the listener based on his/her experience. Whenever this is not possible, the listener may perceive the piece of music as meaningless.

Often, several principles operate simultaneously, with decisions based on multiple cues and/or selective attention. In the graph to the right, subjects may see a white vase or two black faces, depending on directed attention (figure-ground alternation). In the first example below, proximity may be overridden by context, while in the second it most likely will not.

An example of figure/ground selective designation is the 'cocktail party effect'. The term was coined by E. C. Cherry (1953) to describe our ability to selectively pick out a single sound stream (foreground) from a very active/complex sound environment (background). In music, this is illustrated by our ability to distinguish individual parts within a complicated orchestral piece, sometimes even despite frequency proximity or timbral similarity, an ability closely related to selectively focusing attention.

Implicit Rule Violations

Serial, dodecaphonic compositions provide examples of musical pieces that violate the majority of the principles discussed, as well as exceed hypothesized cognitive limits. Such pieces present listeners with aesthetic and intellectual challenges that are often hard to respond to, making it difficult to impose any kind of organization and therefore meaningfulness. Listen, for example, to the opening of the Symphony, op. 21, by Webern. Through extensive exposure, we can increase our ability to cognitively impose organization on such pieces, as well as focus our attention on musical aspects that are often downplayed in pre-20th-century music (e.g. timbre variations explored in Spectral music, musical stasis and timbre explored in Minimalism, etc.).

Implicit Rules and Expectation Games

Most musical pieces will have aspects that go against one or more of the principles discussed. Like all implicit rules, gestalt principles outline listener expectations, while art can be defined as a game of expectations. In this game, art delays, inhibits, satisfies, or potentially changes expectations and implicit rules.

The applet below (by J. Levin, UIUC) illustrates several of the gestalt principles discussed. Can you identify them? Can you link this illusion to any aspects of musical experience?


43-2410 - Phenomenology primer Page 1 of 4

Phenomenology primer

According to our definition of music, one of the main aspects that separate it from other forms of communication or art is its potential to be a-referential or, better, self-referential. In other words, music does not have to (although it can and often does) point to anything outside its own structures or patterning; it can have 'embodied meaning' and therefore 'be meaningful in itself.' Based on this statement one can raise several questions:

(a) How can something self-referential ever communicate anything?

If self-referentiality is understood in terms of a closed system (i.e. a system that is meaningful in itself) then, by definition, there is no link between the inside and the outside, and 'inside' and 'outside' cannot have any impact on one another. In other words self-referential music cannot have an impact on anything outside itself, including us. How can self-referential music have any efficacy or significance? Why should/do we care?

(b) Is music's self-referential potential something 'extra' or a criterial attribute?

It clearly cannot be just an 'extra' since, according to our definition, without it music would not offer anything different from several other forms of communication/art. It would be 'redundant' and therefore insignificant and expendable. If self-referentiality is of major significance in music and music has (from experience) a major impact on us, then question (a) becomes even more pressing.

(c) What exactly do we mean when we say that the listener creates the music no less than the composer and the performer do? Could a careful examination of this question point towards a possible answer to the first question?

(d) Although music does not 'cause' emotions in a deterministic fashion, most of us would agree that music's emotional dimension goes far beyond Davies's DTPOT. In this class we were introduced to the very important idea of music suggesting/gesturing emotions. What do we mean by that?

If this 'suggestion' or 'implication' is possible simply through the 'iconic archetypes' discussed in class, then one can easily reduce the emotional significance of music to that of a cat-and-mouse game. Existing emotional states are 'recycled' in various archetypal combinations and with various degrees of clarity (or lack thereof). Listeners join the game, remaining in a state of uncertainty (inhibition of expectation) and therefore excitement, until they are in a position either to pinpoint the 'emotional suggestion' or to pick out an existing experience that best matches it. This essentially 'recycling' approach, although reliable (and far more valid than sole referentialism), cannot account for the 'profound' musical experiences that music lovers are familiar with. It cannot account for cases where we experience an unfamiliar emotion, one we were unaware we were capable of experiencing until we were confronted by a specific work of art.

What is the basis for 'profound,' 'enlarged,' 'new' emotions in music, if not 'iconic archetypes', and what is the role of music's self-referentiality or, better, non-representational character in this emotional or iconic augmentation?

(e) Music is created by composers/performers (praxis), by the inner workings of the piece itself (formalism), AND by the listener. In the midst of all this, good musical performance draws attention to the music and not the performer, context, or listener. In other words everything (context) and everyone (composer / performer) is working towards making the formal understanding/appreciation of music possible. If this is true why is it so and what does it tell us about music?

What follows is an extensive collection of excerpts from an interview with Paul Ricoeur (1998: "Aesthetic Experience," in Critique and Conviction. New York: Columbia University Press). In these excerpts, one may find potential answers to the above issues, from within philosophy and phenomenology, that complement the approach of empirical investigation. Note: Although the interview discusses at times literary rather than musical works, a simple word substitution can help apply the arguments to music (e.g. author → composer; reader → listener; text → piece). [Text in brackets is editorial.]

".... [What the author of a work configures, the reader/listener refigures.] .. refiguration expresses the capacity of the work to reconstruct the world of the reader in unsettling, challenging [ways], remodeling the reader's expectations. ...it does not consist in reproducing reality but in reconstructing the world of the reader, in confronting him or her with the world of the work; and it is in this that the creativity of art consists, penetrating the world of everyday experience in order to rework it from the inside.

...[the function of art] is not to help us recognize objects but to discover dimensions of experience that did not exist prior to the work.... If one can speak of truth in relation to the work of art, it is to the extent that this designates the capacity of the work of art to break a path in the real by renewing the real in accordance with the work itself, so to speak... Music permits us to go even further in this direction than painting, even nonfigurative [i.e. abstract] painting......

....it is when music is not in the service of the text having its own verbal meanings [or in the service of any other referent], when it is no longer anything but this tone, this mood, this color of the soul, when all external intentionality has disappeared and when it no longer has a signified [i.e. when it does not point to anything outside itself] that it possesses its full power of regenerating or recomposing our personal experience..... For this [retreat from the real] to be possible, it [is] necessary that the signs [the term refers to anything that points to something else] are emptied of any external designation; only then [can] they enter into all sorts of imaginable relations with other signs; between them there is now a sort of infinite availability for incongruous associations. [And the listener creates music by configuring meaning; by 'creating' new plausible associations from this infinite menu of incongruous associations.]

...Let us not forget the twofold nature of the sign: retreat from and transfer back into

the world. If art did not have, despite its retreat [i.e. self-referential potential of music] the capacity to come bursting into our midst, into our world, it would be completely innocuous; it would be struck with insignificance and reduced to sheer entertainment, it would be confined to a parenthesis in our concerns....[In other words music's embodied meaning must be and is somehow transformed by the listener into a meaning/message that we can 'care' for.] ...the capacity to make a return into the world [through the listener] is carried to its greater intensity by the work of art, precisely because the retreat made here is infinitely more radical than in ordinary language.... As the representational function is lessened -..this is the case with music when it is non-descriptive [i.e. self-referential]- as the gap with reality grows wider, the biting power of the work on the world of our experience is reinforced. The greater the retreat, the more intense the return back upon the real, as coming from a greater distance, as if our experience were visited from infinitely further away than itself.

....Music...extends our emotional space; it opens in us a region where absolutely new feelings can be shaped. [And these are related to the intentions of the composer non-isomorphically but by] analogy in the sense of resonance, not of proportionality.....I would say that a work, in what is singular in it, frees in one who tastes it an emotion analogous to that which produced it, an emotion of which that individual was capable, but without knowing it, and which enlarges his affective field once he experiences it. In other words, so long as the work has not cut a path through to the analogous emotion [i.e. if there is no shared implicit knowledge], it remains uncomprehended, and we know that this frequently happens. [The term 'singular' refers here to the process of creating a work, understood as a unique, singular 'response' to a singular 'question'.] What constitutes the success of a work of art is the fact that an artist has grasped the singularity [uniqueness] of a conjuncture, a problematic, knotted for her in a unique point, and that she responds to this by a unique gesture. [Here we see the important concept of art as gesture rather than reference.]

..The subject of aesthetic experience is placed in a relation comparable to the relation of adequation that exists between the emotion of the creator and the work that conveys it. What he [the creator] experiences is the singular feeling of this singular suitability. ..The referential function is exercised in the singularity of the relation of a work to that to which it renders justice in the living experience of the artist. The work refers to itself in an emotion that has disappeared as emotion but which has been preserved in the work [as mood].

..[If we go beyond the referential aspect of music] it becomes obvious that the work expresses the world in a manner other than by representing it; it expresses it by iconizing [gesturing/suggesting/referring iconically to] the singular emotional relation of the artist to the world, which I have called the mood.

..[This 'singular' experience of a creator is in itself incommunicable]; but, as soon as it can be problematized in the form of a singular question which is adequately answered in the form of a response that is singular as well, then it acquires communicability, it becomes universalizable. The work iconically augments the lived experience [of the creator, which is] inexplicable, incommunicable, closed upon itself. It is this iconic augmentation, as augmentation, that is communicable.... In Kantian terms, one would say that it is the "play" between imagination and understanding, as it is incarnated in [a specific] work, that is communicable....."

Columbia College Chicago