
Subtle Semblances of Sorrow: Exploring Music, Emotional Theory, and Methodology

Dissertation

Presented in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy in the Graduate School of The Ohio State University

By

Lindsay Alison Warrenburg, M.A.

Graduate Program in Music

The Ohio State University

2019

Dissertation Committee:

Daniel Shanahan, Advisor

David Huron

Anna Gawboy

Dónal O’Mathúna

Copyright by

Lindsay Alison Warrenburg

2019


Abstract

Music, perhaps more than any other art form, is able to influence moods and emotions. There are limitless accounts of music eliciting feelings of awe, transcendence, and other seemingly ineffable states. In the scientific study of music and emotion, however, only five music-induced emotions have been studied in depth: happiness, sadness, fear, anger, and tenderness (Juslin, 2013). Although these emotions are certainly important and can be expressed and elicited through music listening, a pertinent question becomes the following: do these five words accurately capture all affective states related to music? Throughout my dissertation, I argue that in order to better understand emotional responses to musical stimuli, we need to change the way we use emotional terminology and examine emotional granularity.

In the first part of the dissertation (Chapters 1-4), I review how emotional music has been theoretically characterized and which excerpts have been utilized in research. I show that the field of music and emotion is fraught with conceptual difficulties and that passages of music expressing a single emotion (e.g., sadness) span an unmanageably large area of emotional space. The second part of the dissertation (Chapters 5-8) provides an in-depth analysis of music that has been classified by other researchers as sad. I will show that previous research has conflated at least two separable emotional states under the umbrella term sadness: melancholy and grief. Through a series of behavioral experiments, I argue that melancholic and grief-like music utilize different kinds of music-theoretic structures, are perceived as separate emotional states, and result in different induced emotions. In the last part of the dissertation (Chapters 9-11), I offer two possible interpretations of the research findings, drawing first from the field of ethology to show that melancholy and grief could be separable emotion states that have different biological functions and vocal characterizations (e.g., Huron, 2015). Then, I advocate for the adoption of a psychological phenomenon called emotional granularity (e.g., Barrett, 2004). Emotional granularity refers to the specificity with which a person labels their emotional states, and is both an individual characteristic and a learnable skill. The dissertation concludes with ideas for future research, including the investigation of how musical structure may result in subtle shades of emotion previously unrecognized in the music literature.


Acknowledgements

Those who know me best are familiar with the story of how I became interested in music research. When I was in college, I attended a performance of Bach’s St. Matthew Passion by the Philadelphia Orchestra. Although I had never heard the work before, when the bass soloist began to sing “Mache dich, mein Herze, rein” in the second act, tears immediately came to my eyes and I felt a sense of total tranquility. After the concert concluded, I conversed with fellow concert-goers and we all shared the sentiment that everyone in the audience became friends simply because we had all experienced the masterwork together. Intrigued, I began to question my family and friends about whether they had experienced a similar phenomenon from listening to music. Most—if not all—said yes. Between these conversations and my research on Mahler, I felt like I had to understand why (and how) music can lead to strong emotional experiences.

The truly amazing part of the story involves the reactions from those close to me when I relayed the fact that after eight years of intensive physical training and medical school preparation, I was going to enter a graduate program centered on music cognition. In response, I only received positive feedback, hugs, and supportive smiles.

Anyone who has attended graduate school can attest to the fact that it is a journey. Committing yourself to an academic program for a large portion of your twenties means that you will not only learn about your chosen area of study, but you will also discover who you are as a person. Accompanying me on this journey were the most wonderful people, whom it would be impossible to thank adequately in words. To these people, please know how grateful I am to have you in my life and how fortunate I feel to have you to keep me going through all the highs and lows of this period.

My first thanks, of course, goes to those who have mentored me throughout the years. Particular shout-outs go to Diane Dollak, for teaching me performance, aural skills, and music theory over many years, Andrew Hayes, for instilling in me a love of statistics and behavioral research methods, Amelia Aldao, for introducing me to the science of emotion, David Clampitt, for his mentorship in all aspects related to music theory, and Baldwin Way, for his willingness to take me on as a graduate student, even though I was based in a different department. Thank you also to Anna Gawboy and Johanna Devaney, who have been incredible sources of academic and personal support, and who motivate me every day with their strength.

Special thanks goes to Lawrence Bernstein, who is one of the most brilliant and eloquent scholars I know, and is certainly the one with the most gracious spirit. You were the person that encouraged me to foster my love of music and to turn it into a professional career. Your classes still remain the most pivotal experiences in my academic life and certainly changed the way I view the world. To Daniel Shanahan, you have continually made me feel academically inspired and proud of my work and of myself. Your patience, generosity, and vision have made this last year of graduate school the best it could possibly be. I owe you so much and am excited to see how you continue to lead students to their maximum potential. Finally, to David Huron, thank you for believing in me and for guiding the way I think about music and about the world. I have never met anyone with a more curious spirit or with a more jovial disposition. It has been one of the greatest joys of my life to see how you take an intuition and develop it into a fully-formed and realizable theory (the pleasurable-compassion theory) over a period of five years. Like many others before me, I am forever indebted to you for your expertise as a mentor.

Thank you to my Allied friends, Penn friends, Monell family, and my colleagues in the Cognitive and Systematic Musicology Lab. It has been a pleasure to learn alongside you and each person has contributed to my development as a scholar. Among many others, I would like to express gratitude for Nataliya Nedzhvetskaya, Amanda Howson, Nitasha Khanna, Chandni Bardolia, Sindhoori Nalla, John Thiel, Rachel Mathisen Fink, Megan Coffin, Elaine Fitz Gibbon, Yunica Jiang, Baker Beers, Davis Butner, Zhenyu Zhao, Joel Mainland, Casey Trimmer, Wendy Yu, Deborah Lee, Nick Shea, David Orvek, Andrew Brinkman, and Hubert Léveillé Gauvin.

Special thanks to Amalya Lehmann for keeping me grounded and for being an incredible friend through thick and thin. To my twin-of-mind, Lindsey Snyder, you have opened your heart and home to me more times than I can count. Each time, you have made me feel at peace and like I am loved. Lissa Reed, you have given me the strength and courage to be authentically myself, for which I can never thank you enough. Lindsey Reymore, you have proven to be one of the most important people in my life. In addition to your incredible dedication and intellect, you encourage me to be better, in every sense of the word. Finally, to Caitlyn Trevor, with whom I started this journey half a decade ago. Your light, fire, and ability to make me smile no matter what have meant the world to me. I couldn’t imagine going through grad school without you by my side to keep me motivated, inspired, and happy.

One of my favorite quotes is by Charles Dickens, who says in Doctor Marigold, “No one is useless in this world who lightens the burden of another.” Amalya, both Lindseys, Lissa, and Caitlyn, you lighten my burden every day and make life filled with joy. Thank you.

I am blessed to have the most wonderful family imaginable. Throughout my entire life, each one of you has been an endless source of support and reassurance. To Terrill, my best friend, I never feel complete until you are with me. I know I am never alone, because you will fight for me no matter what—a sentiment that I reciprocate. It has been the greatest joy of my life to see you grow into such a beautiful woman, inside and out. Mom, you have the most caring and loving nature of anyone in the world. Your heart of gold has inspired me to always try to do the right thing, to show compassion for others, and to treat myself with love and respect. Dad, I couldn’t imagine being able to complete this degree without you. In addition to all of the academic and moral support you’ve provided over the years, you have shown me by example what it means to lift others up and how to protect and advocate for those you love.

To Honey and Johnny, who have always been champions of my passion for music. It is an honor to be your granddaughter and I try to make you proud every day. And finally, to my precious little Evie, thank you for bringing love into my life. I would do anything to keep you happy, including as many early-morning play sessions as you want.

Truly, I am very lucky. Onto the next chapter!


Vita

2009………………………………………...Academy of Allied Health and Science

2013……………………………………...…B.A. Music, University of Pennsylvania

2014………………………………………...Research Technician, Monell Chemical Senses Center

2016………………………………………....M.A. Music, The Ohio State University

2015 to 2018………………………………...Graduate Teaching Associate, School of Music, The Ohio State University

Publications

Warrenburg, L. A. (in press). Comparing musical and psychological emotion theories. Psychomusicology: Music, Mind, and Brain.

Warrenburg, L. A., & Huron, D. (2019). Tests of contrasting expressive content between first and second musical themes. Journal of New Music Research, 48(1), 21-35.

Warrenburg, L. A., & Huron, D. (2019). Fitness and musical taste: Do physically fit listeners prefer more stressful music? Empirical Musicology Review, 13(1-2), 21-38.


Warrenburg, L. A., & Way, B. (2018). Acetaminophen blunts emotional responses to music. In Proceedings of the 15th International Conference for Music Perception and Cognition (pp. 483-488). Montréal, Canada.

Warrenburg, L. A., & Huron, D. (2016). Perception of structural features in first and second musical themes. In Proceedings of the 14th International Conference for Music Perception and Cognition (pp. 132-137). San Francisco, CA.

Warrenburg, L. A. (2016). Examining contrasting expressive content in first and second musical themes. Master’s thesis, The Ohio State University.

Fields of Study

Major Field: Music

Specialization: Music Theory, Cognition, and Perception


Table of Contents

Abstract ...... ii

Acknowledgments ...... iv

Vita ...... viii

List of Tables ...... xii

List of Figures ...... xiv

Chapter 1: Introduction ...... 1

Chapter 2: Comparing Musical and Psychological Emotion Theories ...... 10

Chapter 3: The PUMS Database: A Corpus of Previously-Used Musical Stimuli in 306 Studies of Music and Emotion ...... 73

Chapter 4: Choosing the Right Tune: A Review of Music Stimuli Used in Emotion Research ...... 82

Chapter 5: Assembling Melancholic and Grieving Musical Passages ...... 119

Chapter 6: Redefining Sad Music: Music’s Structure Suggests at Least Two Sad States ...... 132

Chapter 7: Melancholic and Grieving Music Express Different Affective States ...... 147

Chapter 8: People Experience Different Emotions from Melancholic and Grieving Music ...... 168


Chapter 9: Downstream Pro-Social Tendencies from Melancholic and Grieving Music: An Ethological Perspective ...... 191

Chapter 10: The Importance of Utilizing Emotional Granularity in Music and Emotion Research ...... 214

Chapter 11: Conclusions ...... 239

References ...... 260

Appendix A: Additional Information about Chapter 4 ...... 305

Appendix B: Additional Information about Chapter 5 ...... 320

Appendix C: Additional Information about Chapter 6 ...... 333

Appendix D: Additional Information about Chapter 7 ...... 340

Appendix E: Additional Information about Chapter 8 ...... 344


List of Tables

Table 2.1. Summary of Proposed Characteristics of the Four Major Emotional Theories ...... 72

Table 5.1. Hypothesized Grief and Melancholy Acoustic and Musical Features According to David Huron (2015) ...... 130

Table 6.1. List of Stimuli for the Study Regarding Structural Features ...... 138

Table 6.2. Ratings on the 20 Dimensions for the Pilot Study ...... 142

Table 6.3. Logistic Regression Results Predicting Melancholy and Grief-like Music from 18 Musical Features ...... 145

Table 7.1. List of Musical Stimuli Used in Studies 1 and 2 ...... 150

Table 8.1. List of Musical Stimuli Used in Study 1 ...... 172

Table 8.2. Categories of Responses to Melancholic and Grieving Stimuli from the Content Analysis in Study 2 ...... 186

Table A.1. Complete List of Emotion Terms in the PUMS Database ...... 306

Table A.2. List of “Past Studies” where Experimenters Gathered Their Stimuli ...... 308

Table A.3. List of Musical Styles or Genres in the PUMS Database ...... 310

Table A.4. Examples of Stimuli Used in Musical Mood Induction Procedure (MMIP) Studies ...... 311


Table A.5. Comparison of Induced-Perceived Stimuli with Their Duration in the PUMS Database ...... 317

Table A.6. Top 25 Emotions by Condition (Induced, Perceived, Both) in the PUMS Database ...... 318

Table B.1. Crying and Sad Works Selected by Untrained Participants ...... 321

Table B.2. Trained Student-Selected Works for Four Emotion Categories ...... 327

Table B.3. Melancholic and Grieving Passages Rated by Three Experts ...... 331

Table C.1. Additional Features that Could Contribute to the Emotional Character of Melancholic and Grieving Passages ...... 333

Table D.1. Component Loadings for the Two Components of Perceived Emotion in Study 2 ...... 341

Table E.1. Component Loadings for the Four Components of Induced Emotion in Study 1 ...... 345

Table E.2. Categories for the 793 Words, Presented by Each Assessor, and the Final Reconciled Categories ...... 347

Table E.3. Final Categories of Terms for the Melancholic and Grieving Stimuli ...... 349


List of Figures

Figure 2.1. Dimensional Graph of Where Music Scholars Lie on a Continuum ...56

Figure 2.2. Proposed Preliminary Emotion Model ...... 64

Figure 7.1. Number of Emotional Responses for Each Stimulus in Study 1 ...... 153

Figure 7.2. Distribution of Terms Across the Four Stimuli Types in Study 2 .....162

Figure 8.1. Induced Emotions from the Four Stimuli Types ...... 180

Figure 9.1. Inclusion of Other in the Self (IOS) Scale from Aron, Aron, and Smollan (1992) ...... 199

Figure 10.1. High and Low Emotional Granularity Reproduced from Barrett (2004) ...... 218

Figure 10.2. The Differentiation Model from Widen & Russell (2008) ...... 222

Figure A.1. Biplot of the Correspondence Analysis ...... 319

Figure D.1. First Two Components of the PCA of Perceived Emotion from Study 2 ...... 342

Figure D.2. Analysis of the 16 Stimuli Across the PCA Space in Study 2 ...... 343

Figure E.1. Analysis of the 16 Stimuli Across the First Two PCs for Induced Emotions in Study 1 ...... 346


Chapter 1: Introduction

In a recent study, a team of researchers in Canada and Spain discovered that a small group of people did not report feeling any emotion when listening to music (Mas-Herrero, Zatorre, Rodriguez-Fornells, & Marco-Pallarés, 2014). When they questioned these individuals about their music-listening experiences, it appeared that these music anhedonics could correctly identify which emotions the music was portraying, but that this cognitive understanding of emotional representation did not translate into music-induced feeling. The idea that not everyone experiences emotions from music listening was surprising enough to catch the attention of a writer at NPR, who wrote an article summarizing this finding, aptly entitled “Strange But True: Music Doesn’t Make Some People Happy” (Shute, 2014). The title alone is some indication of the personal connection that many people feel when listening to music; the fact that some people do not experience these emotions is “strange.” Writings that discuss the interaction between music and emotion are not a recent phenomenon—such philosophical musings date back to the fourth century B.C.E., and possibly even further back in time (Levinson, 2014). The ability of music to express and induce emotion in listeners has remained a popular topic within public and scholarly discourse into the twenty-first century, with a proliferation of scientific research in this area in the last three decades.


The kinds of emotion that can arise from music listening are contested. Some cognitivist researchers adamantly deny the idea that listeners can experience any emotion in response to music; rather, they believe that any experienced emotions that occur while music is playing must be due to extramusical causes (e.g., Kivy, 1990; Konečni, 2008). Others believe that music can, in fact, cause emotional feelings. Among those who believe that music can cause emotions, there is a wide variety of beliefs regarding how many (and what kinds of) emotions can be perceived or induced from music listening. While some researchers believe that emotions arising from music are the same as emotions arising from non-aesthetic circumstances, others believe that music causes different emotions than do “everyday” events (e.g., Juslin, 2013b; Scherer, 2004). These music-specific emotions have also been called “refined” emotions or “aesthetic” emotions—emotions for which there may not be overt physical manifestations or behavioral tendencies (e.g., Frijda & Sundararajan, 2007).

Additionally, some researchers believe that listeners may only perceive (or feel) a few emotions from music listening, such as happiness, sadness, fear, anger, and tenderness (e.g., Juslin, 2013a). Others believe that there are scores of emotions that music can represent (or elicit). In a review of musical passages explicitly used by researchers to represent or induce emotions, I found that music has been related to over one hundred emotional states. These seemingly music-related emotions are as disparate as irritation, grief, and elation. Finally, there are some people who believe that whatever emotions music portrays, these feelings cannot be accurately described using words. Victor Hugo’s famous sentiment that “music expresses that which cannot be put into words and that which cannot remain silent” seems to suggest that music-related emotions may be ineffable.

The aim of this dissertation is to investigate what types of emotions may be perceived and experienced from music listening. The dissertation begins with a wide, theoretical focus, which narrows in later chapters. Approaches from the fields of music theory, empirical musicology, and psychology are integrated in order to examine music-related emotions using diverse methodologies. Theoretical perspectives, models of emotion, corpus studies, and behavioral studies are all presented. The goal is to contribute to the centuries-long discussion regarding music and emotion and to suggest productive avenues for future research.

Chapter 2 begins by asking the age-old philosophical question, “What is an emotion?” This question has remained unanswered since William James proposed it in 1884. We will discover that, although there have been many posed theories and models of emotion, there is no consensus about the definition of an emotion. In a 2006 survey, 39 international emotion experts were asked to provide a definition of the term “emotion.” Of the 33 scholars who responded to the question, there was no consensus (Izard, 2007). This chapter explores how various theories of music and emotion compare to major psychological emotion theories. The psychological literature is governed by four theories of emotion: basic emotions theory, appraisal theory, psychological construction theory, and social construction theory. In this chapter, I will first summarize the main tenets of these psychological theories of emotion. I will then highlight how music scholars conform to and deviate from these psychological theories. I further create a dimensional graph of where music scholars lie on a continuum from basic emotions and appraisal theory to psychological construction and social construction. Finally, I conclude with a description of my own preliminary emotion model.

While Chapter 2 focuses on speculative emotion theory, Chapter 3 presents a somewhat more objective way to examine the music and emotion literature. A new corpus of Previously-Used Musical Stimuli (PUMS) is presented. The PUMS database is an online, publicly-available database where researchers can find a list of 22,417 musical stimuli that have been previously used in the literature on how music can convey or evoke emotions in listeners. A total of 306 studies on music and emotion are included in the database. Each musical stimulus used in these studies was coded according to various criteria: its designated emotion and how it was operationalized, its length, whether it is an excerpt from a longer work, and its style or genre. The PUMS also contains information regarding the familiarity of the original participants with each musical sample, as well as information regarding whether each passage was used in a study about perceived or induced emotion. The name of the passage, composer, track number, and specific measure numbers or track location were noted when they were identified in the original paper. The database offers insight into how music has been used in psychological studies over a period of 90 years and provides a resource for scholars wishing to use music in future behavioral or psychophysical research.
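To make the coding scheme concrete, the sketch below models a single PUMS-style record in Python. The field names (study_id, emotion_term, and so on) are illustrative assumptions for exposition, not the actual column names of the PUMS database.

```python
# A minimal sketch of how one PUMS-style record might be modeled.
# All field names are illustrative assumptions, not the actual PUMS schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class StimulusRecord:
    study_id: str                 # citation key for the source study
    piece_name: str               # name of the musical passage
    emotion_term: str             # designated emotion label, e.g., "sadness"
    operationalization: str       # how the emotion was operationalized
    duration_s: Optional[float]   # stimulus length in seconds, if reported
    is_excerpt: bool              # excerpt of a longer work?
    style_or_genre: str           # e.g., "Western classical"
    familiarity: Optional[str]    # participants' familiarity, if reported
    design: str                   # "perceived" or "induced" emotion study
    location: Optional[str]       # measure numbers or track location, if given

record = StimulusRecord(
    study_id="Krumhansl1997", piece_name="Albinoni, Adagio in G minor",
    emotion_term="sadness", operationalization="perceived-emotion ratings",
    duration_s=180.0, is_excerpt=True, style_or_genre="Western classical",
    familiarity=None, design="perceived", location=None)
```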

Chapter 4 uses the corpus described in Chapter 3 in order to illuminate how musical stimuli have been used to express or elicit emotions in the academic literature from 1928 through 2018. The chapter investigates how emotional music stimuli vary across disparate research designs (e.g., induced versus perceived emotion). The results are consistent with the idea that the literature relies on approximately nine emotional terms, focuses more on perceived emotion than induced emotion, and contains mostly short musical passages. I suggest that some of the inconclusive results from previous reviews may be due to the inconsistent use of emotion terms throughout the music community.
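The kind of tally this review involves can be illustrated with a short script. It assumes, purely for illustration, that the PUMS data have been exported to a CSV with columns named emotion_term and design; neither the file nor the column names are from the dissertation.

```python
# Sketch of a corpus tally like the one described above, assuming the PUMS
# data sit in a CSV with hypothetical columns "emotion_term" and "design".
import pandas as pd

pums = pd.read_csv("pums.csv")  # hypothetical export of the PUMS database

# Share of stimuli per designated emotion term, to find the dominant terms.
term_share = pums["emotion_term"].value_counts(normalize=True)
common_terms = term_share[term_share > 0.01]  # terms above the 1% threshold
print(common_terms)

# Split of perceived- versus induced-emotion research designs.
print(pums["design"].value_counts(normalize=True))
```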

The next four chapters investigate only music-related sadness. Specifically, these chapters pose the question of whether nominally-sad music can be better understood by instead using (at least) two affective labels: melancholy and grief. Research related to human crying suggests the existence of these two different, yet complementary, states (melancholy and grief) (Vingerhoets & Cornelius, 2012). These emotions are thought to have separate behavioral and physiological characteristics. Melancholy is a low-arousal emotion associated with relaxed posture and rumination, whereas grief is a high-arousal emotion associated with crying. Although psychological research has acknowledged this distinction between melancholic and grieving emotional states, the difference has not been recognized in the music cognition community. When characterizing nominally sad music, in fact, listeners appear to offer a wide range of descriptions. These four chapters investigate whether the large variance in responses to sad music is a consequence of the failure to distinguish melancholy from grief.

Chapter 5 describes the collection of melancholic and grieving musical passages. In the first study, suggestions of sad (melancholy) and crying (grief) musical excerpts were solicited from musicians. In this study, participants were allowed to define these emotions themselves. The second study also asked participants to provide suggestions of melancholic and grieving music, but in this study, the experimenters defined these two terms for the musicians. In order to minimize potential bias, the emotions were defined through pictures of facial expressions. The resulting list of participant-selected works was supplemented by another source of stimuli, the database of Previously-Used Musical Stimuli (PUMS) described in Chapters 3 and 4. Three experts (including the author) independently labeled some of these passages as grieving or melancholic, according to the criteria identified by Huron (2015). The collection of melancholic and grieving music from these three studies was used in Chapters 6-8 in order to investigate music-related sadness.

Using some of the melancholic and grieving passages collected in Chapter 5, Chapter 6 examines trained listeners’ perceptions of the structure of sad music. Participants with superior aural skills were asked to rate 18 structural parameters of melancholic and grieving passages on 7-point unipolar scales in order to examine the musical differences between these sad states. The results suggest that different musical parameters are consistent with the distinction between melancholic and grieving states. The findings are consistent with the idea that what has been previously defined as sad music may, in fact, be conflating more than one emotional state.
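Table 6.3 reports this analysis as a logistic regression predicting the melancholy/grief classification from the 18 rated features. The sketch below shows one plausible form such an analysis could take; the three feature names and all numbers are invented placeholders, not the dissertation’s data.

```python
# Sketch of a logistic regression predicting melancholy vs. grief from
# averaged feature ratings, in the spirit of Table 6.3. The feature names
# and values here are invented placeholders, not the dissertation's data.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["tempo", "loudness", "pitch_height"]  # 3 of 18, for brevity
# Each row: mean 7-point ratings for one passage; y: 0 = melancholy, 1 = grief.
X = np.array([[2.1, 2.5, 3.0],
              [1.8, 2.2, 2.4],
              [5.6, 6.1, 5.2],
              [6.0, 5.8, 5.9]])
y = np.array([0, 0, 1, 1])

model = LogisticRegression().fit(X, y)
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")  # positive coefficients point toward grief
```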

The focus of Chapter 7 is how listeners perceive sadness in music. Specifically, the chapter asks whether listeners can differentiate melancholic music from grief-like music. Two correlational studies are described, with the aim of presenting converging evidence. The first study utilizes a five-alternative forced-choice task, where listeners are asked to choose which emotion—melancholy, grief, happy, tender, or none—is best represented in a particular musical passage. The second study asks participants to identify all of the emotion(s) a musical sample conveys from a list of fourteen emotions. The results of both studies are consistent with the idea that listeners can distinguish grief-like expressions from melancholic expressions and similarly can distinguish happy musical passages from tender musical passages.
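One conventional way to summarize a forced-choice task of this kind is a confusion table of intended versus chosen labels. The sketch below illustrates the idea with made-up responses; it is not the analysis reported in Chapter 7.

```python
# Sketch of summarizing five-alternative forced-choice data as a confusion
# table of intended emotion vs. listeners' chosen label. Counts are made up.
import pandas as pd

responses = pd.DataFrame({
    "intended": ["melancholy"] * 3 + ["grief"] * 3,
    "chosen":   ["melancholy", "melancholy", "tender",
                 "grief", "grief", "none"],
})

# Rows: intended emotion of the passage; columns: proportion of each choice.
confusion = pd.crosstab(responses["intended"], responses["chosen"],
                        normalize="index")
print(confusion)  # high diagonal values indicate listeners separate the labels
```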

If listeners can perceive differences between melancholic and grieving music, it is appropriate to ask whether listeners experience different emotions in response to these passages. Chapter 8 describes two behavioral studies, each of which examines induced emotion. Listeners responded to three questions in the first study: the first and second questions asked listeners to rate the extent to which the music made them feel positive and negative, while the last question asked participants to identify which emotion(s) they felt from a list of twenty-four emotions. The results are consistent with the hypothesis that listeners experience different emotions when listening to melancholic and grieving music (all ps < 0.05). The second study asked listeners to spontaneously describe their emotional states while listening to music. Two coders conducted a content analysis on the participant responses in order to find any underlying dimensions of the identified responses. The analysis replicated the finding that melancholic and grieving music lead to different feeling states, with melancholic music leading to feelings of Sad/Melancholy/Depressed, Reflective/Nostalgic, Relaxed/Calm, and Rain/Dreary Weather, and grieving music leading to feelings of Anxious/Uneasy, Tension/Intensity, Crying/Distraught/Turmoil, Death/Loss, and Epic/Dramatic/Cinematic.
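The dissertation does not specify which agreement statistic the two coders used. Cohen’s kappa is a standard choice for a two-coder categorization of this kind, sketched here with invented category labels.

```python
# Sketch of checking two-coder agreement on free-response categories with
# Cohen's kappa, a standard statistic for this design. (The dissertation
# does not specify its reliability measure; labels below are invented.)
from sklearn.metrics import cohen_kappa_score

coder_a = ["Death/Loss", "Relaxed/Calm", "Tension/Intensity", "Death/Loss"]
coder_b = ["Death/Loss", "Relaxed/Calm", "Crying/Turmoil",    "Death/Loss"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa = {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance
```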

The next two chapters offer two possible interpretations of the research findings presented in Chapters 5-8. The first interpretation, presented in Chapter 9, is drawn from the field of ethology. The ethological interpretation is that melancholy and grief should be considered to be two separable emotional states because they have different biological functions and vocal characterizations. In this view, grief may be interpreted as an ethological signal, meaning that people may overtly express feelings of grief through multimodal channels and that observers are biologically hardwired to respond to these displays of grief (Huron, 2015). On the other hand, melancholy may be understood to be an ethological cue (Huron, 2015). Ethological cues may not utilize overt vocal or visual physiological features and are therefore harder for others to detect. Melancholy, in other words, may be confused with other low-arousal emotions, such as sleepiness or boredom. One conclusion of the ethological theory is that listeners should respond to grieving music, but not melancholic music, with feelings of compassion and pro-social behaviors. In order to test this hypothesis, a correlational study was conducted. On two separate occasions, participants were asked to listen to melancholic and grieving passages. Participants were deceived about the intention of the study and were told that the researchers were investigating musical preferences. The true measures of interest, however, were three operationalizations of pro-social behavior. The results were not consistent with the hypothesis that melancholic and grieving music result in different amounts of downstream pro-social behaviors. The idea that listeners would feel more compassion in response to grieving music, compared to melancholic music, was tested in two studies of induced emotion (first presented in Chapter 8). Once again, the results were not consistent with the ethological hypothesis.

Chapter 10 advocates for the adoption of emotional granularity in music research. Emotional granularity is a well-known psychological phenomenon that describes the specificity with which a person labels their emotional states (Barrett, 2004). Although emotional granularity can refer to either an individual characteristic or a learnable skill, the focus of this chapter is how researchers and participants can learn to utilize more specificity when describing their emotional states. Within this perspective, a person might begin labeling emotional states with general terms or phrases, such as “I feel bad.” Through training, the person can learn to differentiate sadness from fear from anger. As the person continues noticing details about their affective states, they might learn to differentiate melancholy from grief. By training researchers and participants to notice these different emotions, the chapter suggests that there are (potentially) many more emotions that can be expressed and elicited by music than the standard five to nine emotions discussed in Chapters 1-3 of this dissertation. By using more granular terms, it is possible to minimize the problem of semantic under-determination in music and emotion research. I further suggest that some of the inconclusive results from previous emotion reviews may be due to the inconsistent use of emotion terms throughout the music community; the adoption of emotional granularity could alleviate some of these problems. The chapter concludes by providing a methodology for future researchers to use when choosing emotional terms for their study.

The final chapter summarizes the previous ten chapters and presents potential future studies for the area of music and emotion. By showing how emotional responses to music can be better understood through the use of emotional granularity, this dissertation will contribute to the fields of music psychology and music theory.


Chapter 2: Comparing Musical and Psychological Emotion Theories

Music and emotion has been an area of sustained interest in the music psychology literature in the last few decades (e.g., Eerola & Vuoskoski, 2013; Gabrielsson & Lindström, 2010; Garrido, 2014; Juslin & Laukka, 2003; Juslin & Sloboda, 2011; Schubert, 2013; Västfjäll, 2002). Elegant theories and well-executed experiments have provided insight into how listeners perceive emotion in music and how emotions are induced in music listeners (e.g., Huron, 2015; Juslin, 2013; Juslin & Laukka, 2003; Zentner et al., 2008). Research has come from the fields of psychology, music theory, and music information retrieval (e.g., Juslin & Sloboda, 2011). The proliferation of research on music and emotion has encouraged researchers to ask specific questions, yielding conjectures that have become nuanced and detailed. While this maturing of the field is ultimately desirable, the rapid growth of the field also brings challenges that need to be faced. Research studies are designed using different models of what actually constitutes an emotion (e.g., Vuoskoski & Eerola, 2011). These distinct models of emotion lead to different methodologies and interpretations about music’s role in emotional experiences. Consequently, the proliferation of research on music and emotion has not led to a consensus about how emotions are perceived or produced in music listeners.


The intention of this chapter is to take a step back and examine where various theories of music and emotion lie in the realm of psychological emotion models. There is no single theory of emotion that represents a consensus within psychology, let alone music cognition. Rather, the literature is dominated by four models of emotion: basic emotions theory, appraisal theory, psychological construction theory, and social construction theory (Gross & Feldman Barrett, 2011). As I will discuss below, these four models can be arranged along a continuum and overlap in some regards. Some of the pertinent characteristics of these four psychological emotion models can be found in Table 2.1 on page 72, which summarizes how the four models differ with regard to methodology, evolutionary perspectives, brain implementation of emotion, development in young children, differences that arise due to language, and how cultural context affects emotional interpretations. The aim of this chapter is to compare the emotion theories of psychological researchers and music researchers in order to provide context for others investigating music and emotion.

Basic Emotions

Psychological Research

The traditional basic emotions perspective posits that emotions are innate and universal “motivational amplifiers” (Tomkins, 1980), meaning that emotions cause us to react appropriately and effectively to fundamental life tasks that require change in some way. Historically, emotions have been thought to be fundamental “units” that arise from dedicated cortical or subcortical mechanism(s), distinct from other mental processes (e.g., Ekman, 1992). These neural networks are thought to be domain-specific, namely, that they exist separately for different emotions and features. Furthermore, basic emotions scholars often suggest that emotions are innate and are present at birth or shortly after birth (e.g., Ekman, 1972; Izard, 1971; Tomkins, 1962). Under this view, emotional states are considered to be universal, so that when a person from any culture is (say) sad, their brain is activated in the same manner.

To a basic emotion theorist, emotions are functional states that cause certain feelings, behavioral expressions, and physiological changes. Ekman (1992) states that we have these emotional behavioral tendencies because those actions proved to be adaptive in our ancestral past. Emotional behavior can therefore be causally explained through an understanding of emotion states because emotions evolved in order to deal with innate fundamental life tasks, such as achievements and losses. Each subjective emotional experience can be explained with regard to these life tasks. Happiness, for example, arises when a goal is achieved, while sadness is a result of a failure to attain a goal. Fear emerges when there is an expectation that a goal will not be achieved (Ekman, 1992).

In some cases, it is advantageous for a person to convey their emotional state to others. When a person is experiencing an emotion like grief or anger, displays of emotion can be used to communicate to observers that they should respond to the experiencer in a certain way. In the case of grief, displays of crying should evoke an altruistic response from an observer. Evolution is thought to have crafted these overt emotions to contain distinctive physiological states and behaviors. Over time, similarities in physiology and behaviors developed within each emotion class (like anger). Displays of emotions therefore provide information to others about what is happening in the environment and how others should behave. When an overt emotion occurs, there are changes that (usually) occur in physiology and expressions. Some of these changes will be common across people and some will differ, depending on social learning.

In other circumstances, it is not advantageous for an experiencer to display their emotional state to observers. A person may not want others to know that they are experiencing a particular emotion, for example. These private, or clandestine, emotions may not result in observable behaviors or physiological states. For more on this idea, see the discussion in Chapter 9 on ethological signals and cues, as well as Huron (2015).

As discussed above, to a basic emotion theorist, emotions are innate states of mind—the emotion itself is a functional state that is inherited from our ancestors. By this definition, emotions are not defined as subjective feelings or conscious experience. Rather, emotions are defined as cognitive states arising from the activation of subcortical neural structures (e.g., Panksepp, 1998). The study of “emotions” is also differentiated from studies of feelings and from studies of emotion-related behaviors and expressions. Adolphs (2017) uses an analogy to explain the functional view of emotional states. Specifically, he compares the study of emotional brain states to the study of planets. The analogy is as follows: If you want to study planets (emotion states), you use a telescope (neuroscientific techniques). If you want to study what people think of planets (conscious experiences of emotions), you use psychological methods. A consequence of Adolphs’ viewpoint is that you may think you are in a state of fear, but you could be wrong. Your neural circuits are either in a state of fear or they are not—your subjective feelings do not matter. Ekman (1992) similarly ignores the subjective experience of emotion because, he claims, there is not enough known about how subjective feelings relate to the other aspects of an emotional experience.

Many of these ideas—universal emotions, inclusion of unique neural substrates for each emotion, presence of emotions at or near birth, and a strong evolutionary basis for emotions—remain central to basic emotions theorists today. In the roughly 50 years since basic emotion theories gained traction, however, they have taken many different directions. Two examples will clarify recent modifications to basic emotions theories. The two articles presented represent different ways basic emotions theories have changed over time. The first article, by Ralph Adolphs, shows how advances in neuroscience since the 1960s and 1970s have affected basic emotion theories. The second article, by Joseph LeDoux and Stefan Hofmann, represents how recent philosophical principles affect the theory of basic emotions. Despite their modifications to the traditional basic emotions tenets, the authors still self-identify as basic emotions theorists.

First, recent theories do not always suggest that brain activity during an emotional state is “modular” in the sense that fear is “located” in the amygdala (Adolphs, 2017b). Adolphs claims that it is now accepted that fear states (and other states) involve distributed networks across the brain. In his view, the amygdala would be considered a necessary component of an emotional state, but it is not the only feature of interest in fearful states. He further explains that some current basic emotions models involve neural reuse, predictive coding, and dynamic routing.

Second, LeDoux and Hofmann (2018) disagree with the traditional idea that subjective emotional experiences are irrelevant to the study of basic emotions. They state, in fact, that the subjective emotional experience is the “essence of an emotion and that objective manifestations in behavior and in body or brain physiology are indirect indicators of these inner experiences.” In short, they claim that behavior and physiology moderate emotions, but are not the emotions themselves. For them, if you are not aware that you are afraid, you are not afraid. You can also never be wrong about your emotional experience. These authors take a neuro-cognitive approach to emotions, in which they draw on current theories of consciousness. LeDoux and Hofmann claim that the only difference between cognitive and emotional conscious states is the type of inputs (like emotion schemas and arousal) that are processed by higher-order structures (see the HOTEC theory, LeDoux & Brown, 2017). They further claim, contrary to many basic emotions theories that focus on subcortical neural circuits, that emotion arises in cortical circuits (“general networks of cognition”).

Much of the evidence for basic emotions theories comes from the field of neuroscience. Areas of the brain are examined for neural footprints using methodologies like fMRI, EEG, and PET (e.g., Moll et al., 2002; Saarimäki et al., 2015). In addition, basic emotions theories gather support from psychophysiological indexes, like coding of facial behavior (e.g., Baltrušaitis, Robinson, & Morency, 2016). These physiological studies are often compared to a person’s self-report of emotional experiences, in order to reach converging evidence (e.g., Harmon-Jones, Bastian, & Harmon-Jones, 2016). Cross-species designs (e.g., Panksepp, 2016) and developmental studies of emotion in humans (e.g., Rodger, Vizioli, Ouyang, & Caldara, 2015) are also common methodologies. In addition, some studies (e.g., lesion studies) examine dissociations in neurological and psychiatric patients (e.g., Farinelli et al., 2015). A critical experiment cited in the basic emotions literature comes from Ekman (1971), who conducted a study showing that emotional facial expressions appear to be universal.


In the original studies, actors were photographed making facial expressions of emotions such as surprise, happiness, sadness, disgust, anger, and fear. These photographs have been shown to people in various cultures around the world. Researchers ask people to choose the appropriate facial expression to match a specific emotion word, like sadness. The fact that non-Western people have performed well on this task has given rise to the claim that such emotional expressions are “universal.”

More evidence from this line of research comes from work on vocal prosody. Infants are known to discriminate between happy, sad, and angry voices around the age of 7 months (e.g., Vaish et al., 2008). Such research indicates that children are born with (or are able to acquire) this vocal discrimination ability easily, without the influence of language or social norms. Finally, comparative researchers dating back to Darwin (1872) have studied the use of emotions in non-human primates and have speculated that similar brain networks are responsible for the production and generation of emotions across these species and our own. Emotional states are thought to have arisen in response to the environment long ago and have evolved and been refined through the process of evolution—emotions are adaptive behaviors that contribute to the fitness of the experiencer (e.g., Tomkins, 1980). Therefore, we are biologically predisposed to respond to the environment and the challenges it produces. Whenever a stimulus “triggers” a basic emotion, we will exhibit certain features of emotional states, like facial expressions, action tendencies, and subjective feelings.

Psychologists who subscribe to the basic emotions model include Ralph Adolphs (2017), Jaak Panksepp (1998), Joseph LeDoux (2018), Silvan Tomkins (1980), Paul Ekman (1992), Carroll Izard (1993), Robert Levenson (1994), and Antonio Damasio (1999).

Music Research

In the music cognition literature, the basic emotion theory is dominant, with many studies operating within this theoretical perspective. Instead of calling themselves basic emotions theorists, though, many researchers say that they subscribe to a discrete emotion model, which states that there tend to be categorically distinct emotions that music expresses or evokes in listeners. By saying they operate within a discrete emotion model context, authors are claiming that they believe in common, separable emotion states (like the basic emotions model), but they do not necessarily make any strong claims regarding their feelings about tenets of basic emotions theory, such as an evolutionary past and the use of common neural substrates. Some commonly listed basic emotions in the music literature include anger, fear, surprise, happiness, sadness, and—perhaps surprisingly—tenderness (which is considered by some to be an “aesthetic emotion,” rather than a basic emotion) (e.g., Baumgartner, Esslen, & Jäncke, 2006; Etzel, Johnsen, Dickerson, Tranel, & Adolphs, 2006; Juslin, 1997; Kallinen, 2005; Krumhansl, 1997). A strength of this perspective is that music corresponding to these nominal emotional categories is more easily recognized and has higher interrater agreement than music that corresponds to more “complex emotions” such as nostalgia, pride, or solemnity (e.g., Juslin, 1997; 2013; Laukka & Gabrielsson, 2000). Researchers note that the better-identified emotions correspond with traditional basic emotions and believe that we are hardwired to identify these types of emotions in stimuli.


Additional evidence for the discrete model of emotion comes from comparing the speech literature with the music literature. In emotional speech, for example, it is known that continuous variation in vocal expressions is nevertheless processed as belonging to distinct categories of emotion (e.g., de Gelder & Vroomen, 1996; Laukka, 2005). Therefore, it is unsurprising that certain emotions are more reliably perceived in music than are others.

Some support for the discrete emotion model has been collected in my corpus of Previously-Used Musical Stimuli (PUMS) (see Chapters 3 and 4). I found that, in a collection of 306 studies on music and emotion, around fifty percent of these studies concentrated on only two emotions: sadness and happiness. Additionally, for emotions that were represented more than one percent of the time, only nine common terms are represented—sad, happy, anger, relaxed, pleasurable chills, fear, chills (general), peace, and groove. Of particular interest is the fact that the term tender appeared about 1% of the time; this is surprising, given the theoretical prominence of tenderness (e.g., Juslin, 2013).

It should be noted that several prominent researchers who are sympathetic with the basic emotions model have nevertheless recognized some weaknesses in how it has traditionally been understood and implemented. Instead of simply adopting the standard basic emotions paradigm, these scholars present expanded theories of emotions, which draw on additional theories and perspectives. For example, David Huron (in preparation) suggests that rather than having six or seven basic emotions, evolution has crafted scores of basic emotions that can each be differentiated in various ways. For example, although most people believe that anger is one emotion, he cites cold anger and hot anger as separate basic emotions, each with distinct neural, somatic, and behavioral fingerprints. In this perspective, being angry and wanting to yell, scream, and slam doors would be a classic example of hot anger. An example of cold anger would be when you are furious with a person in a position of power, but instead of yelling at them or throwing objects, you seethe internally and wait to complain to your friends later in the day. Huron suggests that all affective states, including itchiness and hunger, can be described as basic emotions, as they all have distinct feeling states and motivate us to act in ways that benefit the person having the emotion (Huron, 2006).

Patrik Juslin also classifies himself as a basic emotion theorist (Juslin, 2013). Like Huron, he presents an expanded version of the basic emotion model. Juslin predicts that there are different configurations of musical features for different emotions. Namely, features such as fast tempos, wide interval leaps, and the major mode can help reliably communicate happiness to listeners. He further posits that the features of the music that signal a certain emotion overlap to some extent with signals of speech prosody. Similarly, there are universal psychophysical emotional cues, such as critical bands, relating to perceptual dissonance and an unpleasant feel. Juslin also suggests, interestingly, that features intrinsic to the music’s structure, like harmony, rhythm, and melodic progression, are correlated with a listener’s interpretation of emotion in music. These intrinsic features are likely to differ from culture to culture and can be learned through repetition over the course of one’s lifetime. The way music can convey more “complex” (non-basic) emotions, he reasons, is through features like motion, leitmotifs, and culture-specific idioms. The combination of these musical nuances adds layers of information to the underlying basic emotion expression, leading to a wide array of possible expressed emotions in music.

In order to predict the emotion a listener perceives in music, Juslin has proposed a musical version of Brunswik’s Lens model (Brunswik, 1956; Juslin, 2000). The Lens model examines the relationships between the performers’ intentions, the notated score, and listeners’ affective responses to the music. The listener—a decoder—combines a large number of partially redundant cues (e.g., speed, intensity, timbre) in order to figure out what the encoder is attempting to communicate. For example, sadness can be both encoded and decoded by using cues like slow tempo, low sound level, little sound level variability, little high-frequency energy, low pitch level, little pitch variability, falling pitch contour, slow tone attacks, and microstructural irregularity (Juslin & Laukka, 2003). This modified Lens model, based on a multiple regression framework of various performer cues, intrinsic structural cues, and psychoacoustical cues, has led to an explained variance of around 70% (Juslin, 2000).
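The multiple-regression framing can be made concrete: regress listeners’ emotion ratings on a set of measured cues and read off the explained variance. The cue names and values below are invented for illustration and are not Juslin’s data.

```python
# Sketch of a Lens-model-style cue analysis: regress listeners' sadness
# ratings on measured performance cues and report explained variance (R^2).
# Cue names and values are invented for illustration, not Juslin's data.
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: tempo (BPM), sound level (dB), pitch level (mean MIDI note).
cues = np.array([[ 60.0, 55.0, 52.0],
                 [ 72.0, 60.0, 55.0],
                 [120.0, 75.0, 67.0],
                 [132.0, 78.0, 70.0],
                 [ 90.0, 65.0, 60.0]])
sadness_ratings = np.array([6.1, 5.4, 2.0, 1.5, 3.8])  # mean listener ratings

model = LinearRegression().fit(cues, sadness_ratings)
r_squared = model.score(cues, sadness_ratings)
print(f"Explained variance (R^2): {r_squared:.2f}")
```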

Stefan Koelsch and colleagues have proposed an emotion model called the Quartet Theory of Affect, where they claim that different kinds of emotion originate from different brain structures (Koelsch et al., 2015). It is called the Quartet Theory because affective states are thought to arise from four different brain areas: feelings of relaxation and energy originate in the brainstem, feelings of pleasure and pain arise from the diencephalon, feelings of attachment come from the hippocampus, and feelings of moral affects (e.g., gratitude, jealousy, nostalgia) originate in the orbitofrontal cortex. The authors claim that music can elicit responses from all four systems: brainstem activity results in feelings of arousal and relaxation, diencephalon activation results in feelings of pleasure, anterior hippocampus involvement results in emotions like being moved, and orbitofrontal cortex activation is a result of violations of expectations like surprising harmony. According to Koelsch and colleagues (2015), these four neurobiological systems are thought to interact with each other, with “effector systems” (e.g., motor systems, memory systems, attention systems, physiological arousal systems), and with other sociocultural factors. In terms of music, this model suggests that when a person listens to a passage, four distinct brain systems are activated. However, a person’s stylistic understanding and familiarity with the music are likely to alter the activity in these neural systems. They claim, furthermore, that music-related emotions act prior to the reconfiguration of emotional feelings into language, which is part of what gives music its ineffable quality.

Similar research by William Forde Thompson and colleagues has suggested that there are certain psychophysical cues in music and in speech prosody that do not depend on enculturation (Balkwill & Thompson, 1999; Ilie & Thompson, 2006; Thompson & Balkwill, 2006). They have suggested that when environmental sounds (e.g., nature sounds, animal calls, machine noise) mimic the acoustic features that signal and evoke specific emotions in speech and music (i.e., changes in frequency spectrum, intensity, and rate), the environmental sounds are also able to induce similar emotions in listeners (Ma & Thompson, 2015). The results give credence to Darwin’s musical protolanguage hypothesis that a common emotional system could have arisen based on imitation of environmental sounds.

Although Thompson and colleagues’ research falls in line with the idea that there are universal basic emotions, their research also shows how cultural traditions and learning impact the emotions communicated and expressed in different communities. Their Fractionating Emotional Systems model acknowledges culture-specific conventions that are important in musical experiences. In this model, they emphasize that each psychophysical cue is utilized to a different extent in different cultures. The amount of variation of a certain cue (e.g., pitch) in a culture leads to different ways of communicating emotion in both speech prosody and in music. For example, there are considerable differences in how scales are formed across cultures and how intervals are, in turn, formed by scale notes. These various scale constructions are experienced differently by those in the culture and by those from other cultures (Thompson & Balkwill, 2008). In other words, these researchers posit that while there may be certain universally-expressed emotions in music, there are also culture-specific cues that interact with or modify these basic emotions to produce different emotional expressions.

Additional research using a basic emotion perspective comes from physiological research. In order to study music and emotions from a basic emotion perspective, researchers have used experimental or physiological studies (e.g., Krumhansl, 1997), brain imaging and EEG studies (e.g., Blood & Zatorre, 2001), and studies of peak experiences (e.g., Gabrielsson, 2001; Panksepp, 1995). There is no doubt that people experience physiological changes when listening to music. Well-executed studies have shown emotion-related physiological changes in EEG activity (e.g., Schmidt & Trainor, 2001), fMRI and cerebral blood flow activity (e.g., Alfredson et al., 2004; Blood & Zatorre, 2001), and somatic responses in cardiac, vascular, electrodermal, and respiratory functions (e.g., Krumhansl, 1997). However, these changes are not consistent from study to study, and taken together, they do not suggest that each emotion has a physiological “fingerprint” (Barrett, 2017).

Basic emotions theories will likely retain a robust presence in the music literature. One reason is that basic emotion theories are difficult to disprove, a difficulty that can be summarized with the precept “absence of evidence is not evidence of absence.” Namely, even if data from studies are inconsistent with suggested basic emotion neural pathways, there is always the possibility that there is an untested neural circuit or hormone that is consistent across all people experiencing the same basic emotion. For example, it is possible that “Hormone X” becomes active in all cases of human sadness, whether experienced in non-aesthetic contexts or through music listening. However, until such a “Hormone X” is identified, the basic emotions theory may face scrutiny from other, competing emotion theories.

Appraisal Theories

Psychological Research

Appraisal theorists claim that humans use cognitive processes (“appraisals”) to make sense of and to understand their external surroundings and internal states (see Table 2.1 on page 72). Appraisals are considered to be cognitive actions, which in turn elicit biologically-based emotional responses (e.g., Ellsworth & Scherer, 2003; Frijda, Kuipers, & Ter Schure, 1989). Theorists differ in whether they believe cognitive appraisals are characteristics of emotions or whether they are separate processes from emotions that trigger emotion-elicitation (for a discussion on this, see Ellsworth, 2013).


In addition, there are two kinds of appraisals that are suggested to exist: fast appraisals, which are automatic and implicit, and slow appraisals, which are deliberate and conscious (e.g., Clore & Ortony, 2008; Moors, Ellsworth, Scherer, & Frijda, 2013).

Appraisal theorists, like basic emotion theorists, share the assumption that emotions are adaptive and motivate responses to a particular environmental challenge (e.g., Ellsworth & Scherer, 2003). Appraisal processes are thought to inform the individual about life-station issues that need to be addressed (e.g., Lazarus, 1966). An individual socially learns which environmental information is diagnostic and relevant to their goals (e.g., Ellsworth & Scherer, 2003). Similarly, both emotion camps believe that an eliciting event will activate particular brain structures, which will be the same for every experience of a particular emotion, such as sadness (e.g., Moors et al., 2013). Unlike basic emotion theorists, however, many appraisal theorists conceptualize emotions as a continuous spectrum, with infinitely many possible emotional states (e.g., Moors et al., 2013). Moreover, appraisal theorists believe that an emotional episode can be colored by culture, a person's dispositional features and experiences, and current goals (e.g., Ellsworth, 2013). This important distinction means that separate people may respond differently to the same stimulus event—a point with which basic emotions theorists would disagree. However, if two people's appraisals are the same, they will experience the same emotion.

Appraisal theorists believe that the social context of an event could lead to different appraisals. Individual differences between people, such as personality traits (coping potential, sensation-seeking), innate characteristics (CNS/ANS, cognitive styles), and process characteristics (speed, degree of cognitive effort, attentional deployment), all affect how one subjectively perceives an event (e.g., Ellsworth & Scherer, 2003). These different appraisals of an event will lead to cultural variability and individual variability within a given emotion. For example, there are likely cultural differences in what makes something pleasant (valence appraisal). The criteria for meeting certain appraisals differ across situations, prior experience, and expectations. However, across cultures, if an event is appraised in the same way, it will lead to the same emotion. That is to say, "emotions and appraisals are likely to be culturally variable, but the relationship between appraisals and emotions is culturally general, perhaps even universal" (Ellsworth & Scherer, 2003).

In appraisal theory, instead of talking about "emotions," theorists tend to discuss "emotional episodes" (Moors et al., 2013). An emotional episode indicates that emotions are continuous experiences that unfold over a time course beginning with a stimulus. The detection of a stimulus is the first appraisal—the first emotional change. Once novelty has been appraised, further appraisal sequences occur (whether in sequence or in parallel depends upon the theory), such as valence and goal-relevance. Once this appraisal process starts, the experiencer is changed, physiologically and psychologically. This perceived change is the emotional experience. The state of the experiencer changes continually throughout the emotional episode.

Different appraisal processes are thought to serve different evolutionary functions. Fast and automatic appraisal processes (called associative processes) help humans be prepared for the environment, while slow and conscious appraisal processes (called rule-based processes) help attain flexibility (Clore & Ortony, 2008). A critical study testing the idea that appraisals aid evolutionary adaptation comes from Lerner and Keltner (2000). These researchers compared the emotional responses of a group of people with high dispositional anger and another group with high dispositional fear. According to these researchers, anger and fear are both high in arousal and negative in valence (although see Harmon-Jones, Harmon-Jones, Abramson, & Peterson, 2009; Harmon-Jones, Schmeichel, Mennitt, & Harmon-Jones, 2011; and Hess, 2014 for discussions of how anger—"righteous indignation"—may be considered to be a positive emotion). The appraisal structures of anger and fear, according to Lerner and Keltner, are thought to be different. In their study, participants were asked to guess the average number of deaths resulting from a variety of situations. Anger is supposed to be negatively related to pessimistic risk assessment (i.e., angry participants would guess lower death rates), whereas fear is thought to be positively related to pessimistic risk assessment (i.e., fearful participants would guess higher death rates). The results are consistent with the hypothesis that fear and anger involve different appraisals, signifying that it may be evolutionarily adaptive to experience emotions with different consequences.
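The logic of this between-groups design can be sketched in a few lines of code. The numbers below are entirely fabricated placeholders chosen only to illustrate the predicted direction of the effect; they do not reproduce Lerner and Keltner's data or analysis.

```python
# Toy sketch of the Lerner & Keltner (2000) between-groups logic.
# All values are hypothetical; only the predicted direction matters.
from statistics import mean

# Estimated annual deaths for some hazard, one estimate per participant.
fear_group = [520, 610, 480, 700, 655]    # high dispositional fear
anger_group = [310, 280, 405, 350, 295]   # high dispositional anger

# Appraisal theory predicts that fearful participants (appraising low
# certainty and control) give more pessimistic estimates than angry ones.
print(mean(fear_group), mean(anger_group))
print(mean(fear_group) > mean(anger_group))  # True, by construction here
```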

Evidence for appraisal theory has come primarily from studies using self-report. In these studies, participants are asked to recall an emotion and then consciously assess, using questionnaires, whether they had experienced certain appraisals or action tendencies (e.g., Frijda et al., 1989). Other studies have examined facial, vocal, and physiological responses to an emotion (e.g., Banse & Scherer, 1996; Wehrle, Kaiser, Schmidt, & Scherer, 2000), although these kinds of studies are rare, as many appraisal theorists believe that there is not a one-to-one mapping of CNS or PNS activity onto subjective experiences. As in many other fields that utilize self-report as their primary methodology, appraisal theorists have been criticized for the use of self-report methods (see Chan, 2009, for a discussion of the validity of self-report methods). For example, it is not clear whether people are able to reconstruct an emotional episode in the way that it was originally experienced, or whether people are able to consciously access processes that are inherently fast and automatic in nature. In addition, the fact that people are able to experience emotions in response to music has been cited as a problem for appraisal theories (Ellsworth & Scherer, 2003). Although there is an external stimulus to which a person can respond emotionally, most appraisals are irrelevant to musical stimuli. In response to this challenge, some appraisal theorists believe that "perhaps musical [sounds] and phrases create physiological responses that mimic the physiological and non-cognitive aspects of appraisals and emotions so that, by association, the emotion itself is elicited" (Ellsworth & Scherer, 2003).

Important appraisal-based perspectives come from psychologists like Magda Arnold (1960), Richard Lazarus (1991), Nico Frijda (1986), Howard Leventhal (1984), Klaus Scherer (1984), and Phoebe Ellsworth (2013).

Music Research

In the music cognition literature, Klaus Scherer is one of the most prominent appraisal theorists (e.g., Scherer, 2004; Scherer & Zentner, 2001). Scherer's appraisal theory—the Component Process Model of Emotion (Scherer, 1984)—discusses how emotions include cognitive changes, physiological arousal, motor expressive behavior, and subjective feelings. His work on music states that the emotions produced by music can be affected by the musical structure (e.g., mode, tempo), performance features (e.g., timbre or dynamics, ability of the performer, stage presence), listener features (e.g., the individual and socio-cultural identity of the listener, the symbolic coding of the listener's culture), and contextual features (e.g., location of performance, type of event, listening together or alone) (Scherer & Coutinho, 2013).

Scherer refers to two mechanisms for emotion induction in music, both of which are standard in the appraisal literature (Scherer & Zentner, 2001; Scherer & Coutinho, 2013). First, he cites a "central route production" of emotion: a (musical) event occurs, which is subjected to cognitive appraisal by a person. The central nervous system is activated and the person experiences an emotion. This process can be mediated by mechanisms such as memory and entrainment. Second, he cites a "peripheral route production" of emotion: proprioceptive feedback from the somatic and autonomic nervous systems (e.g., changes in respiration) can help facilitate physiological reactions (e.g., crying). As an example, Scherer mentions how entrainment to music might influence a person's breathing patterns. These respiratory changes may then spread to other neurophysiological components and ultimately stimulate (or alter) the subjective feeling of an emotion.

Although self-classified as a basic emotion theorist, David Huron also presents an emotional model that could be considered an appraisal model (Huron, 2002). In this model, Huron presents a sequential account of emotion elicitation in response to an auditory stimulus. In response to a sound, a person will (usually) first experience some sort of reflexive response, like an orienting response or a startle reflex. Next, a person will utilize two fast and automatic appraisals to try to identify the stimulus: an appraisal based on past associations (denotative responses) and an appraisal based on physical properties like energy and timing (connotative responses). Then, a person will appraise the stimulus through learned associations and memory (associative responses) and through animacy cues and signals (empathetic responses). Finally, a person will consciously appraise the intentions of the agent making the sound (critical responses). Through this six-stage process, Huron identifies both low- and high-level processes that are in line with other appraisal theories, such as Scherer's Component Process Model (1984).

In appraisal theory, some scholars also believe that there are discrete emotions that fall into two categories: "coarse" and "refined" (e.g., Frijda & Sundararajan, 2007). Coarse emotions refer to traditional basic emotions, whereas refined emotions are not thought to have overt physical manifestations or behavioral tendencies. Musical emotions would fall into the refined category. Namely, in music, "one moves into a mental space that is detached from pragmatic, self-related concerns, [so] emotions lose their urgency but retain their inner structure and action tendencies" (Zentner, Grandjean, & Scherer, 2008). Other scholars claim that musical emotions are not distinct from non-musical emotions, as many listeners react to music in a way that is not "refined," per se (Huron, in preparation).

Psychological Construction

Psychological Research

In contrast to basic emotion and appraisal theorists, psychological construction theorists do not regard emotions as special mental states (e.g., Barrett, 2013; Russell, 2003). Instead, they suggest that all mental states—including emotions—emerge from an ongoing cognitive and physiological process called "construction" (see Table 2.1 on page 72), meaning that our brain creates a unique emotion, either perceived or experienced, during a particular situation. The theory of constructed emotion states that we evaluate our surroundings and stimuli using information from different domains, including previous experience, learned associations, and innate behaviors (e.g., Barrett, 2017). The brain forms predictions from these sources and updates them based on the current situation, giving rise to new emotional experiences. Each emotion, therefore, is a result of how the cognitive brain processes and weights different pieces of information. The result of this cognitive construction is a unique emotional experience. Under this view, every instance of sadness, for example, is unique. A person can experience multiple forms of sadness, depending on the current situation. Even if two similar events (e.g., failing a test and failing to get a job) give rise to a state of sadness, the subjective feeling, neural activation, and physiological responses of the two emotions will be unique.

The approach of psychological constructionists relies on (1) constructive analysis (instead of reductionism) and (2) population thinking (instead of typologies) (e.g., Barrett, 2013). Constructive analysis means that emotions are perceptions that arise due to normal cognitive processes: an emotion is uniquely constructed during each emotional instance. One of the ingredients of an emotional state is called "core affect"—a baseline state of valence and arousal that is always present, whether or not a person is in an emotional state (e.g., Russell, 2003). As Russell (2003) explains, core affect is a product of evolution, as it functions to inform the experiencer about their internal bodily sensations. Core affect is thought to be involved in cognitive functions like attention, perception, judgment, mental simulation, and memory.


Population thinking means that broad categories of emotion can be likened to a species (e.g., Barrett, 2013). Each instance of an emotion, in turn, can be likened to an individual. In these population-thinking theories, each individual instance of an emotion is allowed to vary. In other words, each instance of an emotion is thought to vary in its physical manifestations and in its subjective feeling-state (e.g., Barrett, 2017). Each separate occurrence of an emotion (within a person and across people) is optimized for the particular eliciting condition (e.g., Russell, 2003). As Barrett (2013; 2017) describes, the fact that each instance of an emotion varies within and across individuals may be caused by stochastic processes (e.g., mutation) and by environmental/eliciting differences (e.g., natural selection). Each instance varies in order to have "maximal utility" in particular contexts (Barrett, 2013). Unlike in basic emotions theories (and some appraisal theories), an emotion is not at all modular. Instead, it is distributed, usually cortical, acquired (not innate), and consists of domain-general processes (like memory and categorization) (see Figure 1 of Adolphs, 2017, for more details on how these theories differ). Emotions, to the psychological constructionist, are cognition dependent and depend on physical properties of the body (e.g., Barrett, 2017; Russell, 2003). There is thought to be a biological basis to emotional states, but the correspondence between the biology and emotion is not one-to-one. Instead, the theory focuses on degeneracy: many combinations of neural systems can result in the same emotional state (e.g., Barrett, 2017). What basic emotions theorists think of as a "fear" state is instead suggested to be a mathematical average of a variety of fear brain states.
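This "average" claim can be given a minimal formal gloss (the notation here is mine, not Barrett's):

\[
\bar{x}_{\text{fear}} = \frac{1}{N}\sum_{i=1}^{N} x_i,
\]

where each \(x_i\) denotes the measured neural or physiological state during one observed instance of fear. Under degeneracy, the individual \(x_i\) may differ substantially from one another, so the summary \(\bar{x}_{\text{fear}}\) need not resemble any fear state that actually occurs.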

One of the most prominent proponents of psychological construction is Lisa Feldman Barrett. Barrett (2017) claims that our current emotional states are constructed based on the role of social values in society, our current social role, and our subjective feelings. It seems, therefore, that rather than representing a discrete entity, a nominal emotion (like sadness) represents a continuum of emotions that can vary in arousal, valence, and a number of other dimensions. Barrett has supplemented her theory with empirical evidence that the microwiring of neural pathways, facial expressions, and subjective components of emotions (like sadness) vary across particular emotional events. In the psychological construction literature, the idea of neurally-specific ("fingerprint") basic emotions has lost its credibility. As one example, in a meta-analysis of over 200 physiology studies, Barrett concluded that "even after a century of effort, scientific research has not revealed a consistent, physical fingerprint for even a single emotion" (Barrett, 2017). In the world of psychological construction, then, there is no such thing as emotional accuracy. With no objective fingerprints to pinpoint an emotion, it is impossible to know the emotional state of another person. All that is possible is to see whether people agree on an emotional state. If people do agree, it means they must share the same mental concept for that emotional state. Emotions, therefore, are a social reality: an emotion like fear exists only because people have agreed that it is real. As humans, we impose meaning on emotions that would otherwise not exist.

Unlike basic emotions theorists, who claim that subjective reports of emotions are unreliable measures of an emotion (e.g., Adolphs, 2017; Izard, 1992), psychological constructionists believe that subjective reports are the only way to measure an emotional state (e.g., Russell, 2003). Emotions are thought to be products of language and experience. Mental concepts help form the boundaries of emotions (e.g., Barrett, 2017). In order to form a concept when there are few statistical regularities (as with emotions), you need words. Language, therefore, is a central component of emotional states.

As with proponents of the other major models of emotion, there are variations of opinion among psychological constructionists. Although these scholars follow the same general theoretical framework about emotional construction, certain psychologists suggest modifications to the general psychological construction account of emotion. As one example of how psychological constructionists theoretically differ, we can turn to the disagreement between Lisa Feldman Barrett (2013; 2017) and James Russell (2003) about the use of language to describe an emotional state. Barrett claims that you can never be wrong about the emotional state you believe you are experiencing. She argues that differences in the experience of emotions drive different word use. Her theory of emotional granularity suggests that some people may experience more nuanced affective states than others (Barrett, 2004; also see Chapter 10). If a person has a more granular experience of emotion, they will also use more granular terms to describe the states they are in. She uses the idea of anger to highlight this point. Barrett points out that, while English only has one word for anger, German has three words for anger and Mandarin has five (Barrett, 2017). She claims that, in order to have these different anger experiences, a person would need to acquire the relevant language in order to construct these new emotion concepts. Effective communication, in her view, depends on synchronized concepts between two people. If one person's anger is different from another person's anger, there cannot be successful communication between these people.

One potential criticism of Barrett's idea is that, although it may be true that there are different anger-related concepts in different cultures, the communication of these different angry states is mainly dependent upon vocabulary. For example, although English words like rage, fury, hot anger, and wrath may represent one (negatively-valenced and high-arousal) concept of "anger," English also contains words like righteous indignation—which represents a different (positively-valenced) concept of "anger"—and seething or cold anger, which represent another (low-arousal) concept of "anger."

Unlike Barrett, Russell posits that one can be wrong about one's meta-experience of emotion. To him, a person's emotional state is thought to exist in some objective, non-linguistic state. A person's emotional state, according to Russell, is fully determined by their core affect, associated bodily changes, and action tendencies. Despite the relative objectivity of an emotional state, the verbal label a person uses to describe this emotional state is more subjective. This verbal emotional label is based on a different cognitive process than the ones that determine a person's emotional state. Russell calls the cognitive process that helps a person choose an emotional word to describe what they are feeling the person's "emotional schema" (Russell, 2003).

Some antecedents to the psychological construction theories include the James-Lange theory of emotions (James, 1884; Lange, 1885), the Cannon-Bard theory of emotions (Bard, 1928; Cannon, 1929), and the Schachter-Singer theory of emotions (Schachter & Singer, 1962). According to Gross & Feldman Barrett (2011), these antecedents also include psychologists such as Wilhelm Wundt (1907) and William James (1884). Current psychological constructionists include Lisa Feldman Barrett (2017) and James Russell (2003).


Music Research

As discussed above, the results from the PUMS database show that studies of perceived and induced emotion tend to rely on approximately nine emotion terms (see Chapters 3 and 4). This may initially seem to be in line with discrete models of emotion, which tend to rely on basic emotion terms. However, the results can also be interpreted through the lens of the psychological construction of emotion. One of the ways we can use the theory of constructed emotions to understand music and emotion is by considering the idea of emotional granularity (see Chapter 10 for more elaboration on this idea). Broadly speaking, emotional granularity is the ability to specify one's own and others' particular emotional states, as opposed to more general valence-dominated terminology. Emotional granularity requires the ability to identify distinct emotional terms, physiological responses, and possible eliciting conditions that cause a subjective feeling state. In other words, it requires the ability to interpret nuanced feeling states and discriminate among them. As one example, emotional granularity results in the ability to distinguish and interpret different sad emotional states, and gives rise to nuanced feelings of loss, melancholy, and grief. I argue that the high agreement for certain "basic" emotional categories could be due to a lack of emotional granularity in the typical participant base. It could be that musically-conveyed emotions do not correspond to basic emotions, but rather that those words (e.g., sad, happy) are more easily understood by participants than more "complex" emotion terms (e.g., sorrow, elation).

As preliminary support for the idea of a nuanced spectrum of emotions in music, results from Krumhansl (1997) suggest that music may cause changes in cardiac, vascular, electrodermal, and respiratory functions. Her analysis resulted in the following statement: "The most important finding of this analysis is that the musical excerpts span the entire range of emotions found in non-musical studies. Thus, the emotional responses to music do not appear to occupy a restricted range of the [affective] space" (Krumhansl, 1997, p. 342). Additionally, some studies have shown that non-basic emotions can be communicated successfully in performance, but that performers and listeners use different vocabulary to describe these affective states (Canazza et al., 1997; Orio & Canazza, 1998).

Another example of a music study that uses a psychological construction framework is Dibben (2004). In this article, Dibben focuses on how physiological arousal can affect the intensity and experience of emotion. Her hypothesis was that peripheral physiological feedback from one source (e.g., exercise) can be misattributed to another source (e.g., music listening). Namely, people construct their emotional experience from multiple internal and external sources. In the study, participants' pulse rate, heart rate, and self-reports of emotional experiences were measured while listening to music after being assigned to an exercise task or a relaxation task. The results were consistent with the hypothesis that a personal factor, namely increased physiological arousal, can help account for part of a person's emotional reaction to music. Although she cites appraisal theories of emotion in her paper, the results and methodology are more in line with psychological construction models of emotion. The idea that emotions are partially caused by bodily feedback is a major tenet of psychological construction models. Of course, the results of any study are dependent upon the musical materials and specific methodologies used and cannot be said to generalize to music and emotion research writ large.


Recent research by Julian Céspedes Guevara and Tuomas Eerola (2018) suggests that there are several problems with the basic emotions research paradigm and calls for constructionist models to be used to study emotions instead. They cite Lisa Feldman Barrett's conceptual act theory, which states that when physical bodily sensations can be meaningfully attributed to a cognitive and perceptual situation, an emotion emerges (Barrett, 2006). One of Céspedes Guevara and Eerola's main arguments is that although we may perceive continuous phenomena (like emotion and color) in a categorical fashion (Barrett, 2006), we will experience emotion ("core affect") in a continuous manner. Core affect is a combination of valence and arousal (Russell, 2003). Céspedes Guevara and Eerola suggest that, when removed from cultural conventions, music only conveys arousal and valence. All other distinctions are made in the mind of the listener.

Much like how many music researchers cite discrete emotion models instead of basic emotions models, many music researchers utilize dimensional models instead of psychological construction models. A dimensional model of affect is a model that allows emotion to vary continuously along one or more dimensions (e.g., arousal, valence). Researchers who utilize theoretical dimensional models do not necessarily subscribe to the details of psychological construction theory, although they do adopt the idea that emotions exist on continuous spectra. For example, Russell's Circumplex Model of emotion (1980) and Russell and colleagues' Affect Grid (1989) can be linked to psychological construction theory (Russell, 2003), even though researchers who use these tools do not necessarily believe that emotions are constructed. The Affect Grid structural model characterizes emotions along two dimensions: activation (arousal) and pleasantness (valence). The Circumplex Model is a schematic map of core affects, where emotions such as alertness, happiness, sadness, and tension are placed in a circle around the arousal/valence continuum. A large number of music cognition scholars have utilized Russell's Circumplex Model of emotion or his Affect Grid (e.g., Ali & Peynircioglu, 2006; Bigand, Vieillard, Madurell, Marozeau, & Dacquet, 2005; Dibben, 2004; Eerola & Peltola, 2016; Gorn, Pham, & Sin, 2001; Taruffi & Koelsch, 2014; Witvliet & Vrana, 2006; Zentner, Grandjean, & Scherer, 2008). Again, by using a dimensional model of emotion, such as the circumplex model, a researcher is assuming that emotions exist on continua, rather than as a set of discrete, basic emotions.
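To illustrate what such a continuous affect space looks like in practice, here is a minimal sketch in Python. The coordinates are hypothetical values chosen for illustration only; they are not taken from Russell's publications or from any dataset.

```python
import math

# Hypothetical (valence, arousal) coordinates for a few emotion terms,
# in the spirit of Russell's (1980) circumplex; values are illustrative only.
circumplex = {
    "happiness": (0.8, 0.4),    # pleasant, moderately activated
    "alertness": (0.3, 0.8),    # activated, mildly pleasant
    "tension":   (-0.4, 0.7),   # unpleasant, activated
    "sadness":   (-0.7, -0.4),  # unpleasant, deactivated
}

def affect_distance(term_a, term_b):
    """Euclidean distance between two emotion terms in valence-arousal space."""
    (v1, a1), (v2, a2) = circumplex[term_a], circumplex[term_b]
    return math.hypot(v1 - v2, a1 - a2)

# Under a dimensional model, emotion terms differ by degree along continuous
# axes rather than belonging to separate, discrete kinds.
print(round(affect_distance("sadness", "tension"), 2))
```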

The appraisal and psychological construction camps overlap in many aspects. It is unsurprising, therefore, that many theorists self-characterize as belonging between the appraisal and psychological construction camps. Examples of scholars who may fall between these two camps are Tuomas Eerola, Sandra Garrido, and Jonna Vuoskoski (personal communication). These researchers focus on dimensional models of emotion, rather than on discrete feeling states. Along with Russell's (1980) circumplex model (valence, arousal), dimensional papers employ Thayer's (1989) model (tension-calmness, energy-tiredness), Wundt's (1896) model (pleasure-displeasure, arousal-calmness, tension-relaxation), and Schimmack & Grob's (2000) model (valence, energy arousal, tension arousal). By framing hypotheses in terms of dimensional models, researchers are able to employ methodologies that could correspond to either the appraisal or the psychological construction camp.

The dimensional approaches to music, including appraisal and psychological construction approaches, can also be seen in some instruments created to measure a person's responses to music. While a basic emotions perspective holds that emotional states are binary—a person is experiencing an emotion or is not—appraisal and psychological construction approaches posit that a person can experience an emotion to different degrees. In other words, a person can feel a little bit sad, moderately sad, or extremely sad. The amount of an emotion (like sadness) that a person experiences lies on a continuum, ranging from "not at all" to "extremely." By positing that a person's emotional reactions vary on one or more continuous scales, one is assuming that the emotions themselves vary continuously.
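The contrast can be reduced to a toy sketch (the representation below is mine, purely illustrative, and not drawn from any of the instruments discussed in this section):

```python
# Basic emotions view: an emotional state is present or absent.
is_sad: bool = True

# Dimensional view: the same state varies continuously in intensity,
# e.g., from 0.0 ("not at all") to 1.0 ("extremely").
sadness_intensity: float = 0.35
```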

One example of an instrument that measures a person's emotional reactions to music in a continuous manner is Bartel's (1992) Cognitive-Affective Response Test—Music, a semantic differential instrument used to measure people's responses to music on two dimensions, cognitive and affective. Another example of such a measurement scale comes from Asmus (1985), who posited that a person's responses to music can be classified along Nine Affective Dimensions (potency, activity, humor, evil, depression, longing, pastoral, sedative, and sensual). Giomo (1993) created children's semantic differential scales for musical mood, using cartoon faces to establish a continuum of responses along softness/intensity, pleasantness/unpleasantness, and solemnity/triviality dimensions. Hevner's (1936) adjective clock roughly follows the valence-arousal dimensions of the circumplex model. Her circle has eight clusters with several music-related emotions in each cluster (most recently updated by Schubert, 2003). Another model, proposed by Wedin (1972), posits that musical emotions can be explained by three dimensions: gaiety/gloom, tension/relaxation, and solemnity/triviality. Finally, the Geneva Emotional Music Scale (GEMS) falls in line with psychological construction models (Zentner, Grandjean, & Scherer, 2008). The GEMS is a music-specific emotion model that accounts for a person's experienced emotions when listening to music. The researchers found that nine factors (wonder, tension, sadness, joyful activation, power, transcendence, tenderness, nostalgia, and peacefulness) can account for a person's emotional feeling states. Importantly, the researchers found that emotions evoked by music are often experienced in a blended manner—people experience more than one emotion at once.

Although the creators of these scales may believe that there is a continuum of emotional states, the use of these scales in research studies does not prove that there are infinite numbers of emotions. It is currently impossible to determine whether a person’s emotional states are truly continuous or whether that person simply experiences many discrete emotions. For example, a psychological constructionist would posit that there is a continuum of sad emotional states. A basic emotions theorist would suggest that there are many discrete types of sad emotions, like sadness, melancholy, and grief. Using continuous scales in a between-subjects design does not provide enough evidence to support the psychological construction theory over the basic emotions theory. Among other methodologies, future research will need within-subjects designs in order to test whether an emotional response to a stimulus can be better described as continuous or discrete.


Social Construction

Psychological Research

The last major camp of emotion theories is the social construction model. In this case, researchers claim that emotions are social artifacts and products of culture (see Table 2.1 on page 72). Therefore, whether or not a person constructs an emotional experience depends on its social consequences (e.g., Averill, 2012). In these theories, researchers claim that people tend to experience emotions that are associated with a typical person in his or her culture (e.g., Mesquita, Boiger, & De Leersnyder, 2016). In every culture, these theorists posit, there are specific emotions that help a person thrive: the goal of an emotion is to help maintain and regulate social relationships for the benefit of the experiencer. For example, in Western cultures, people tend to be lauded for being autonomous and independent, whereas in Eastern cultures, people tend to be prized for being harmonious (e.g., Mesquita et al., 2016). These different cultural contexts are thought to lead to different frequencies of the corresponding emotions in Western vs. Eastern cultures. Namely, Western citizens are more likely to feel independent and Eastern citizens are more likely to feel harmonious (e.g., Mesquita et al., 2016).

Like basic emotions theories, social construction theories hold that emotions can be automatic and involuntary (e.g., Keltner & Haidt, 1999). Within a particular culture, a person acquires emotion concepts based on the goals of the culture (e.g., Keltner & Haidt, 1999). These concepts become enmeshed in a person's perceptions and understandings about the world and are learned through processes like statistical learning. Once these concepts have been learned, a person is able to experience the corresponding emotions for the first time (e.g., Barrett, 2017). Emotions are a "social reality" and depend on the intentionality of a group of people (e.g., Barrett, 2017; Mesquita et al., 2016). Considered to be dynamic processes that can last from seconds to years, emotions "help perform culturally central tasks [that] are afforded and promoted" above other emotions (Mesquita et al., 2016).

Social constructionists suggest that emotions are instigated by social problems and change as those problems develop or are met. Averill (2012) claims that emotions are elicited through "social norms (shared beliefs and rules) [which] provide the prototypes according to which emotions are constructed." In the social construction theories, biological functional claims do not hold. There is no "overarching selection mechanism culling out inefficient or poorly adapted cultures" (Keltner & Haidt, 1999). Therefore, there cannot be evidence that every emotional exchange serves a "vital" function; there is no equivalent of natural selection in these theories. Emotions are still thought to be functional, however, and the consequences of emotional states affect the self and others. For example, love can inform someone about the level of commitment to a relationship (e.g., Frank, 1988), emotions such as fear or disgust can help sharpen group boundaries (e.g., Frijda & Mesquita, 1994), and emotions such as embarrassment motivate conformity to certain cultural values (e.g., Goffman, 1967).

Instead of being defined by evolution, then, cultural differences in emotion are thought to depend on what values are important to the society. As mentioned above, Western society tends to value the individual, whereas Eastern society tends to value interpersonal relationships (e.g., Mesquita et al., 2016). Participants from the U.S. are thought to understand emotions as belonging to an individual, whereas participants in Japan are thought to understand emotions as existing within a relationship (between individuals) (e.g., Mesquita et al., 2016). According to the researchers, this finding is in line with conceptions of agency in the two cultures: Western societies place emphasis on the individual as an agent, whereas Eastern societies place emphasis on the group as an agent.

Emotions, therefore, help people perform culturally-specific tasks (e.g., Mesquita, Boiger, & De Leersnyder, 2016). The social constructionist assumes that people are social by nature and that relationships aid in survival (e.g., Lutz & White, 1986; Keltner & Haidt, 1999). Emotions are central to interpersonal relationships. They help define and maintain relationships between parents and children, romantic partners, and members of one's social groups. Emotions such as anger, embarrassment, and love affect the actions and physiology of others (e.g., Averill, 1980). Emotions can coordinate social interactions and help establish a dominant/submissive relationship, shift others' attention, and help others learn (e.g., Keltner & Haidt, 1999).

People are thought to experience emotions more often when they need to be a good member of society (e.g., Mesquita, Boiger, & De Leersnyder, 2016). Humans need to evaluate their behavior with reference to the norms of a salient group, as well as with reference to their social identity and self-concept. Emotions, therefore, have a moral dimension and value relevance, which can be compared to external (social) and internal (self) standards. Certain emotions are thought to fit group and cultural goals, and people's emotions better fit the pattern of their own culture than the pattern of other cultures (e.g., Mesquita, Boiger, & De Leersnyder, 2016). As one piece of evidence for this theory, people who experience socially "normative" emotions are thought to meet the expectations of the society and tend to be more well-adjusted than others (e.g., Mesquita, Boiger, & De Leersnyder, 2016). As people move to different cultures, moreover, they begin to perceive and experience new emotional states, a process called emotional acculturation (e.g., Barrett, 2017).

Evidence for social construction models comes from research across cultures. Like psychological construction and appraisal theories, social construction theories use verbal self-reports to provide evidence for their models (e.g., Keltner & Haidt, 1999). Instead of only asking people how they feel, however, social constructionists also study ethnographies of groups of people or animals, examine emotional lexicons, evaluate historical documents, and survey cultural myths and legends (e.g., Abu-Lughod, 1986; Elias, 1978; Miller, 1997). They also observe interactions of humans (and other species) in laboratory and naturalistic settings (e.g., Dimberg & Öhman, 1996). Researchers additionally examine emotional communication through facial, vocal, and postural channels, as well as through synchrony, greeting rituals, and attachment and caregiving behaviors (e.g., Bowlby, 1969; Eibl-Eibesfeldt, 1989). Furthermore, it is not uncommon for social constructionists to manipulate emotional behavior in order to see how observers respond (e.g., Dimberg & Öhman, 1996). For example, researchers can prime participants with the "self" or a "family member" and then measure emotions in response to an emotionally-laden stimulus. In one study using this technique, Westerners felt more intense emotions after the self prime, whereas Easterners felt more intense emotions after the family prime (e.g., Mesquita, Boiger, & De Leersnyder, 2016). Finally, neural research is sometimes used: an fMRI study suggested that Eastern and Western individuals had different neural implementations of emotion while viewing emotional film clips (Mesquita, Boiger, & De Leersnyder, 2016).


One interesting notion from the idea of social construction is that anything can be an emotion as long as more than one person agrees that it is an emotion. This theory would suggest that different cultures have different numbers of emotions. Evidence supporting this idea comes from the fact that different cultures have different numbers of words for the experience of anger: as mentioned earlier, there is one word for anger in English, two in Russian, three in German, and five in Mandarin (Barrett, 2017). Similarly, there is no known universal emotion concept: there is not a single emotion that is known to exist in all identified cultures (Barrett, 2017). Another line of evidence for social construction theory is that cultures disagree on what constitutes an emotion. Although Westerners perceive the emotions of an event as experienced internally (e.g., I am angry, I am sad), other cultures (e.g., the Ifaluk people of Micronesia) understand emotions as events that occur between people, rather than residing within a single person (Barrett, 2017). Even the concept of happiness differs among cultures. In Western cultures, happiness is an individual and personal experience, whereas in Eastern cultures happiness signals a social and relationship-oriented experience (Mesquita et al., 2016). Finally, some cultures (e.g., the Himba people of Namibia) do not even have a word or concept that refers to a collection of emotional experiences (like the word "emotion") (Barrett, 2017).

Social construction psychologists include Randolph Cornelius (2000), Robert Solomon (2003), Batja Mesquita (2010), James Averill (2012), and Rom Harré (1986).


Music Research

Many music cognition scholars believe that cultural context is involved in emotional responses to music. In this vein, post-modern music theorists and Adorno scholars could be classified as social constructivists (e.g., Paddison, 1996). Henna-Riikka Peltola also offers a social constructionist perspective, as she investigates the intersubjective dimensions of musical emotion, including how people express experienced emotions from music-listening through words and non-verbal cues. In one study, Peltola (2017) examines emotions as acts of (shared) conceptualization, a process that had previously been used in accounts of both psychological and social construction (e.g., Barrett, 2006). The methodology for this study included group interviews regarding self-selected and unfamiliar sad music. The results were consistent with the idea that the definition of "sadness" was variable and depended upon many contextual factors (e.g., whether one is listening to the music in public or in private). Also consistent with social construction theory, Peltola found that social norms and cultural conventions affected the listeners' experiences of sadness. Future research should follow in Peltola's footsteps and continue to investigate how emotions can function as interactional exchanges among music listeners.

Other Approaches

In general, it appears that music-related research aligns closely with the four general emotion models first popularized in the psychological literature. Although music researchers do not usually specify that they belong to a particular psychological emotion camp, the research methods and explanations given in music studies allow one to place scholars in particular camps (see Figure 2.1 on page 56). There are two major exceptions to these generalizations: recent work has been highlighting (1) an ethological approach to emotion and (2) 4E approaches to emotion. Although these two models are beyond the scope of this chapter, they will be mentioned in brief, mainly with regard to how they overlap with the four models already presented.

Ethological Theories of Emotion

The first exception involves the recent research of David Huron. Huron adopts an ethological perspective on emotion, which, to my knowledge, has been overlooked by many psychologists. Although this ethological theory is in line with many tenets of basic emotions theory, it also adopts additional features that separate it from the basic emotions perspective. The ethological model of emotion hinges on the difference between an ethological signal and an ethological cue (Huron, 2015; also see Chapter 9). Huron singles out certain physiological manifestations of emotion, such as weeping, as display signals. In general, signal displays are aimed at changing the behavior of an observer. In the case of weeping, the display aims to solicit help from others. Such behaviors (e.g., weeping in grief) are distinguished from ethological cues (e.g., mumbling in sadness). Ethological cues have no overt communicative function. When a person speaks in a low, soft, mumbling voice, they are not necessarily trying to communicate that they are sad. In fact, research has shown that a "sad voice" is an artifact of low physiological arousal (Huron, 2015).

According to this theory, we are thought to be biologically hardwired to respond to certain emotional signals, but not to emotional cues. In order to elicit a response from an observer, a signal must be conspicuous. One way of achieving this clarity is through the use of multi-modal features. Namely, a signal will make use of more than one sensory system. Crying, for example, has a clear visual component (tears), but also a clear auditory component (ingressive phonation). Similarly, when trying to signal aggression, one will lower the pitch of the voice (auditory component) and lower the eyebrows and the head (visual component) (Huron & Shanahan, 2013; Huron, Dahl, & Johnson, 2009). Huron uses ethological signals and cues as a way to distinguish between two kinds of music: musical melancholy, which may function as an ethological cue, and musical grief, which may function as an ethological signal. Chapter 9 explores this idea further by describing a series of behavioral studies that formally test this conjecture.
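As a purely illustrative encoding of the cue/signal distinction, consider the following sketch. The feature labels are my own simplifications of the descriptions above (low, soft, mumbled delivery for melancholy; ingressive phonation and conspicuous wailing for grief); they are neither Huron's operationalization nor the stimuli used in Chapter 9.

```python
# Toy illustration of the ethological cue vs. signal distinction.
# Feature labels are simplified from the prose descriptions above;
# this is not a validated model of melancholic or grief-like music.
MELANCHOLY_CUES = {"low pitch", "quiet dynamics", "slow tempo", "mumbled articulation"}
GRIEF_SIGNALS = {"wailing contour", "ingressive phonation", "pitch glides"}

def classify_passage(features: set) -> str:
    """Label a passage by whether its features look more cue-like or signal-like."""
    cue_score = len(features & MELANCHOLY_CUES)
    signal_score = len(features & GRIEF_SIGNALS)
    if signal_score > cue_score:
        return "grief-like (ethological signal)"
    if cue_score > signal_score:
        return "melancholic (ethological cue)"
    return "mixed/indeterminate"

print(classify_passage({"low pitch", "quiet dynamics", "slow tempo"}))
# -> "melancholic (ethological cue)"
```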

4E Theories of Emotion

The second theoretical approach to emotions that does not follow the four major models mentioned above is the emerging field of "4E emotion" (embodied, embedded, enacted, and extended emotion). The roots of enactivism come from phenomenological philosophy, such as that of Merleau-Ponty and Husserl, as well as from related traditions (Colombetti, 2014). In these theories, aspects of the human body and its interaction with the external environment enact (bring forth) the mind. Like theories of psychological construction and social construction, the central thesis of these enactive theories is that "cognition is necessarily already affective…there is no difference between cognition and emotion, [as] both cognition and emotion turn out to be instances of the relentless sense-making activity" of the person (Colombetti, 2014).


Similar to social construction theories, emotional episodes are discussed as relational structures that span more than one organism—emotions are socially-extended (Krueger & Szanto, 2016). In these theories, emotions occur at an interpersonal level, an intergroup level, and a sociocultural level (to compare with a social constructionist perspective, see Keltner & Haidt, 1999). Krueger and Szanto (2016) provide an example of shared grief: two parents mourning the death of a child share facial and tactile expressions of grief such that each person's grief modulates the other's. The grieving emotion is then a dyadic experience that spans individuals, giving rise to an interpersonally extended emotion.

4E perspectives differ from psychological construction and social construction in that they tend to relax the constraints that affectivity must be conscious and that affectivity comes from the nervous system. Instead, enactivist claims can be applied to autonomous creatures without a nervous system (see pages 101-104 of Colombetti, 2014 for a more complete discussion of this idea). Furthermore, unlike the language dependence of psychological construction theories, 4E scholars claim that there are recurring patterns of affectivity that are language independent (Colombetti, 2014).

4E emotion theorists also describe emotional episodes through dynamical systems theory, a temporally-situated branch of mathematics that allows the study of emotion to be both stable and flexible at the same time (see Camras & Witherington, 2005; Fogel & Thelen, 1987; and Lewis, 2005 for how these systems can apply to emotion). In short, dynamical systems theory is a method of explaining emotional episodes that allows these affective states to be affected by context, other people, the body, genes, and the environment. The study of a single enactive mind also relies on an "affective neuro-physio-phenomenological method" for studying emotion, which includes (trained) passive self-observation, intersubjective validation (e.g., semistructured interviews and the use of scales), recording brain and bodily activity (e.g., temporal fMRI, EEG, muscular tension, ANS activity), the correlation of first- and third-person data by using temporally-sensitive continuous measurements of emotion, and the use of third-person methods to refine self-observation (e.g., biofeedback techniques) (Colombetti, 2014). As mentioned before, the interaction of the self and others is an important part of 4E methods, and concepts such as empathy, the perception of emotion in expression, emotional sharing, self-other overlap, and social bonding are fundamental to the idea of an embodied emotion. It is possible to measure these interpersonal emotions through tasks such as facial mimicry, entrainment, and measures of pro-social behavior.

There has been an increasing number of music articles operating from an embodied or enactivist approach (see the review by Schiavio et al., 2016). Some of these articles include continuous measurements of emotion (Schubert, 2001, and the development of the EMuJoy software by Nagel et al., 2007) and studies of the interpersonal relationships in the Danish String Quartet (Salice, Høffding, & Gallagher, 2017). Additionally, some scholars believe that making music in a group aids in emotional extension through a functionally integrated, gainful system (Krueger & Szanto, 2016; also see Leman & Maes, 2014; van der Schyff, 2013). In these theories, a musician who is "actively integrated with her instrument may be able to realize emotional experiences with a particular intensity, depth, and diachronic character that are possible only when she is part of this materially-extended musical feedback loop" (Krueger & Szanto, 2016). These authors extend their arguments to music listening as well, for sometimes a listener and the music become "actively integrated" (Krueger & Szanto, 2016). 4E perspectives on emotion appear to be gaining traction, as many recently published articles use these theoretical perspectives. Although similar in many ways to social constructionist theories, 4E theories ultimately differ with regard to experimental methods, theory, and design.

Continuum of Emotion Models

Gross and Feldman Barrett (2011) have compellingly summarized the four major models of psychological emotion (basic emotions theory, appraisal theory, psychological construction theory, and social construction theory). In an attempt to organize the many disparate approaches to emotion published in psychological articles, these researchers show how the four major emotion perspectives can lie on a continuum, going from basic emotions, to appraisal theory, to psychological construction, and finally to social construction. Although they do not dispute that theoretical perspectives on emotion are multidimensional, Gross and Feldman Barrett are able to capture some essential features of these four theories with a single dimension (their Figure 1). Roughly, the single dimension symbolizes how distinctly the brain and body represent emotional and non-emotional states. Researchers whose theoretical alignment lies on the left ("basic emotions and appraisal") side of the graph tend to view emotions as discrete, unique states that can clearly be delineated from other bodily and brain states. On the other hand, researchers whose theoretical alignment falls on the right side of the graph ("psychological and social construction") tend to believe that emotions are emergent and constructed—they cannot (easily) be distinguished from any other neural or physiological state. What makes emotion states unique, from the perspective of the researchers on the right side of the graph, is the way people communicate and describe these states (Gross & Feldman Barrett, 2011).

It is useful to adopt Gross and Feldman Barrett's idea of a theoretical continuum of emotion perspectives. In their Figure 1 (2011), they populate the continuum with representative theorists and researchers. For example, the researcher with the "leftmost" position on the continuum is William McDougall, closely followed by Jaak Panksepp, both of whom have published emotion theories and research studies that tend to align with basic emotions tenets. Between the basic emotions and appraisal camps lies Magda Arnold, whose research contains some basic emotions tenets and some appraisal tenets. Fully in the appraisal camp are Nico Frijda and Klaus Scherer. Between the appraisal and psychological construction zones are Gerald Clore and Andrew Ortony. Firmly in the psychological construction zone are Stanley Schachter, Jerome Singer, and James Russell. Between the psychological construction and social construction zones lies William James. The social construction portion of the graph contains James Averill and Rom Harré.

Two things are important to notice about these scholars' placement on the continuum. First of all, most of these researchers published their theories on emotion before these four "zones" on the continuum were named. For example, McDougall, who mainly published in the early twentieth century, was certainly unaware of the current theoretical difference between basic emotions and psychological construction. Second, it is impossible to classify all of the aspects of these scholars' theories and research studies on a single dimension. Researchers' ideas about emotion also certainly change over the course of their careers, for a myriad of possible reasons, including the advent of new technology. In the same vein, studies by the same author can be conducted with different theoretical perspectives or methodologies. Emotional theory is also multidimensional, whereas this continuum lies on a single dimension. Therefore, it is impossible to capture the subtleties of researchers' perspectives in a single graph. With these caveats in mind, it is still useful to create a visual depiction that represents where music researchers tend to lie on this continuum of emotion perspectives.

I have created a graph similar to the one in Gross and Feldman Barrett (2011) with respect to prominent music and emotion scholars (Figure 2.1 of this chapter, page 56). The intention of the graph is to compare how music scholars' emotion perspectives align with general psychologists' emotion perspectives, as well as to directly compare the positions held among music scholars. The same caveats hold in my Figure 2.1 as they do in the Gross and Feldman Barrett figure. Namely, this graph is unable to fully represent the emotion perspectives published by the represented authors. The placement of these authors came from synthesizing their explicit theories about emotion in music-listening, their research methodologies, and the literature they cited in their published papers. When possible, I directly contacted these theorists to confirm that I was accurately representing their views on music and emotion. As we have seen, in the music cognition literature, researchers tend to subscribe to either the discrete emotion model/basic emotions camp or the dimensional emotion model/psychological construction camp. However, there are scholars that lie in each of the four major emotion "zones," and several scholars that lie somewhere between these main camps.


As discussed, oftentimes the distinction between these four categories is blurred and there is substantial theoretical overlap among the four camps. Specifically, three major similarities among the four camps deserve attention. The first similarity is the idea that we are constantly monitoring our progress towards life-station goals with both proximal and distal implications. The second similarity is that emotions are triggered by an internal or external event. The third similarity is that emotions involve interactive multi-modal processes. That is, emotions recruit changes in neural circuits, facial expressions, somatic functions (such as heart rate and breathing patterns), and experiential feeling-states that we know as "subjective emotions." These points of convergence should remain in the mind of the reader as they examine Figure 2.1 (page 56).

A related caveat to this figure is that the field has advanced through experiments that compare multiple emotion models. For example, Eerola and Vuoskoski (2011) compared emotional ratings of film scores that adopt a discrete approach (anger, fear, happiness, sadness, tenderness) to ratings that adopt a three-dimensional model (continuous scales of valence, energy, and tension). They tested how raters perceived emotions in music using these two models and found that, although the dimensional ratings tended to be more reliable than the discrete ratings, both models produced comparable ratings of perceived emotion in music. In their 2008 paper introducing the GEMS, Zentner, Grandjean, and Scherer also compare different emotion models. Some of their findings were consistent with the idea that the basic emotion model was effectively able to explain perceived musical emotions, but could not adequately account for experienced musical emotions. The musical emotion model developed by these authors using the GEMS data, conversely, was better able to account for experienced emotions from music-listening than for perceived emotions in music. They also found that their model better explained music-related experienced emotions than did a more traditional dimensional model. However, all three models of emotion provided fairly good explanations of both perceived and induced emotions from music listening.

Figure 2.1. Dimensional graph of where music scholars lie on a continuum from basic emotions, appraisal theory, psychological construction, and social construction. The scholars plotted include Juslin, Peretz, Eerola, Huron, Peltola, Dibben, Scherer, Koelsch, Garrido, Schubert, Vuoskoski, Thompson, and Krumhansl.

Strengths and Weaknesses

When designing a study about music and emotion, the methodology and stimuli used will depend on the theoretical alignment of the author. Of course, each model comes with certain strengths and weaknesses. In this section, I will summarize the principal strengths and weaknesses of the four major psychological models of emotion discussed in this chapter. I will also discuss which model I personally prefer and believe has the most potential to benefit future music and emotion research. Future researchers should investigate how neurological emotion models relate to the psychological models presented in this dissertation.

Basic emotions models posit that emotions are evolutionarily adaptive and can be located in specific brain regions. One strength of this model is that some affective neuroscience studies show differences between, say, happiness and sadness in the brain (e.g., Saarimäki et al., 2015). A weakness of the basic emotions model, though, is that meta-analyses have not been successful in finding a neural "fingerprint" for even a single emotion (Barrett, 2017). A second strength of the basic emotions model is that emotional facial expressions seem to be necessary to develop social relationships (Ekman, 1992; Goldblatt & Williams, 1986). A weakness of this model, however, is that EMG research suggests that people often do not make the stereotyped facial configurations when actually experiencing an emotion (Barrett, 2017). A third strength of the basic emotions model is that emotional expressions and voices do seem to be universally understood in infants and across cultures (Vaish et al., 2008). A last weakness of this model is that basic emotions theorists cannot agree on the number of basic emotions that exist (e.g., Levenson, 2011; Panksepp & Watt, 2011).


Appraisal theorists suggest that environmental stimuli are evaluated by the brain in a series of cognitive appraisals—these appraisals then give rise to certain emotional states. A strength of this model is that theorized appraisals and action tendencies can account for up to sixty percent of the variance in a person's subjective emotional responses (Frijda et al., 1989). A major weakness of the model lies in its reliance on self-report as the major source of data (e.g., Craske et al., 2014). It is not clear whether people are able to consciously access or remember what they were experiencing during an emotional event. If people are not accurate in reporting which appraisals they used when they experienced an emotion, the credibility of the appraisal-emotion connection would be lost. In other words, the reliance on human subjectivity produces problems of unreliability. A second weakness of this theory is that people experience emotions in response to instrumental music. Music does not have any goal-relevance for survival, and many of the appraisals that are thought to be necessary to induce an emotion are irrelevant in the case of music (Ellsworth & Scherer, 2003).

Psychological construction models suggest that each instance of an emotion (e.g., sadness) varies across situations and people. The idea that each emotion category is populated by a variety of unique instances of an emotion is a strength of this theory (Barrett, 2017). A second strength of the theory is that studies examining people's emotional experiences are consistent with the idea that people experience the same emotion in different ways (e.g., some people have a high valence-focus and others have a high arousal-focus) (Barrett, 2004). A weakness of psychological construction theories is that much of the evidence comes from disproving basic emotions and appraisal theories. Because there is thought to be no discrete emotional state in the brain or body, there is no way to differentiate an emotional state from a non-emotional state. A second weakness of psychological construction theories is that there is no accepted definition of the term "emotion" (Adolphs, 2017). Emotions are defined by what they are not, making it impossible to prove or disprove these theories.

Finally, social construction models tend to regard emotions as culturally-specific transactions among individuals. A strength of this theory is that, to date, no single universal emotion concept has been found (Barrett, 2017). A weakness of this theory is that the same physiological arousal, valence, and behaviors may be classified as different emotions in different cultures. Namely, what may be classified as "sadness" in one culture may be considered to be "sleepiness" in another culture.

My own ideas about emotion align with the theory of psychological construction (further discussed in Chapter 10). I am compelled by the collection of studies that Lisa Feldman Barrett has conducted and reviewed. For example, her laboratory recently published a meta-analysis of over 200 studies of emotion (Siegel et al., 2018). The evidence from this meta-analysis is more consistent with tenets of constructed emotion (e.g., population thinking) than with traditional basic emotion models. Although it is always possible that a specific neurological circuit or hormone will later corroborate basic emotions theories, until there is evidence consistent with this view, I think that the field of music and emotion should continue to design studies aligned with the theory of psychological construction.

Additionally, I am not convinced that the research traditionally used to support basic emotions perspectives cannot also be explained by psychological construction theories. For example, Juslin and Lindström (2003) asked participants whether music can express "basic" emotions (e.g., happy) and "complex" emotions. Their results are consistent with the idea that some emotions are more easily conveyed to listeners than others. The authors use this result as support for the idea that so-called basic emotions can be expressed in music, while so-called complex emotions cannot be represented in music. However, the fact that listeners seem to agree that music easily expresses some emotions, but not others, does not mean that these emotions are basic. I can see two alternative possibilities that would explain these results.

First, it could be that the various participants conceptualize emotions in different ways. For example, when asked whether different emotions could be conveyed by music, the emotion "interest" resulted in one of the lowest ratings of agreement among listeners. A cursory Internet search for synonyms of "interest" yields terms like curiosity, passion, and attentiveness. If listeners (and performers attempting to portray these emotions) do not agree on the definition of a term like "interest," it is effectively impossible to know whether that emotion can be conveyed in music. I will discuss this idea further in this dissertation when I engage with the idea of emotional granularity (Chapter 10).

Second, the fact that listeners can reliably identify basic emotions in music does not necessarily result from the existence of determinate categories in the music, but rather from methodological procedures leading to such a categorical distinction. For example, there is weaker evidence in support of a theory when a researcher uses a close-ended response format, in which participants have to select an answer among a specific number of emotion categories, in contrast to an open-ended response format, where participants can name any emotion that comes to mind. In their 2018 article advocating for psychological construction in music research, Julian Céspedes Guevara and Tuomas Eerola catalog a number of ways that past "evidence" used to support basic emotions theory can be reinterpreted in the light of constructivist positions.

As discussed earlier in this chapter, I believe that participants may be able to perceive more emotions in music than has been suggested in previous research (e.g., Juslin, 2013). It could be that the reason people tend not to agree on what constitutes a melancholy-like passage, for example, is that the term "melancholy" is nebulous, both to the researchers and to the participants. The definition of melancholy could mean different things to different people. By training researchers and participants on what certain emotion words mean, we may be better able to explore the boundaries of which emotions are, in fact, perceived and induced in music.

Preliminary Emotion Model

An important direction in which music and emotion research needs to go, in my opinion, is the exploration of how music fits with the psychological models proposed earlier. Although music-related research generally aligns with emotion-related research apart from music, there are some aspects of music-related affect that have not traditionally appeared to fit with the mainstream psychological models. In this final section of the chapter, I propose a preliminary model of emotion that can apply to emotions in music-listening conditions and in non-aesthetic contexts. The intention of the model is simply to propose the type of direction music research can take in the future.

The model is not intended to fully explain how emotion is experienced or perceived by a listener. For the purposes of this chapter, the scope of the model is limited to the (potential) cognitive processes of emotion. To be a complete model of emotion, the model would also need to engage with work in predictive processing, dynamical systems theory, developmental studies, as well as embodied and enactive cognition. Again, to include all of these aspects is beyond the scope of this chapter.

Most mainstream psychological models require emotions to be relevant to a person’s goals. Goal-relevancy is particularly important in basic emotions and appraisal theories, but it plays a role in all four models of emotion. The study of music is interesting because, according to several researchers, music does not appear to have any relevance for a person’s goals (e.g., Ellsworth & Scherer, 2003). Therefore, some classical emotion models—like basic emotions and appraisal theories—have trouble explaining why we experience emotions in response to music. Instead, we must use psychological construction theories to answer such questions.

Another often-discussed problem of using traditional psychological emotion models to describe music is that some people believe that music causes different emotions than do "everyday" events (Juslin, 2013b; Konečni, 2008; Scherer, 2004). Namely, music-related sadness is differentiated from sadness experienced in non-aesthetic contexts. Consistent with psychological construction tenets, I do not believe that there is a difference between aesthetic and non-aesthetic emotions. Instead, it is more likely that every instance of an emotion differs from another instance. Perhaps, for example, during your Monday afternoon break, you feel sad because you are listening to Albinoni's Adagio. On Friday, you take a break at the same time and listen to the same work—you note that the Adagio still makes you feel sad. What proponents of psychological construction would say is that, even in these similar situations, the two instances of sadness are not equivalent. Your body will respond to the music in different ways, depending on factors like how much sleep you got during the course of the week. For every emotion you experience, whether "aesthetic" (e.g., listening to music) or "everyday" (e.g., losing a job), the context of the emotion needs to be taken into account.

Traditional psychological models of emotion also tend to be silent on how mixed (or multiple) emotions are experienced. These models tend to examine sadness versus happiness, fear versus anger. There are fewer explanations for mixed feelings such as nostalgia. Although mixed emotions may occur in non-aesthetic contexts (e.g., returning a cat that you love, but that attacks you often, is likely to elicit a mixed emotion), they may occur even more often in music. When listening to a "sad" piece of music, you may feel sadness mixed with compassion, tenderness, and the understanding that what you are hearing is "just music." Any successful emotion model must, then, be able to explain the phenomenon of mixed (or multiple) emotions.

Based on the comparison of psychological models of emotion, I propose a new model of emotion that can be used to understand music-related emotions and emotions arising from non-aesthetic contexts (see Figure 2.2 on page 64). Again, this model is not meant to be a complete explanation of an emotional process, but is simply intended to highlight some of the possibilities for future theoretical research. The model lies within the psychological construction family of emotion models, although it borrows ideas from all four camps. This preliminary model separates two events that occur during an emotion: (1) an antecedent event (typically some external or internal stimulus) and (2) an emotional episode, which is composed of many components. This stipulation requires a linear timeline; the antecedent event will always occur before the emotional episode begins.


Figure 2.2. Proposed preliminary emotion model

Antecedent Event

As shown in Figure 2.2, a successful emotion model may recognize three different types of stimuli: (1) a stimulus with an unknown cause, (2) a stimulus with a known cause, and (3) a stimulus with a "wrong" cause. In all three cases, a person may perceive the event and appraise its relevance to their situation. However, the person will go through different cognitive processes (dashed lines in Figure 2.2) depending on what the cause is, and these different processes will result in different feeling states.

A stimulus with an unknown cause is when a person experiences an emotion but is unsure (unconsciously or consciously) of where to attribute the affective feeling. A stimulus with a known cause is when a person understands the source of the emotion and is able to correctly attribute the affective feeling to the source (external or internal). A stimulus with a wrong cause results in a situation where the experiencer thinks he or she understands the cause of the stimulus event, but is mistaken. A person perceives (consciously or unconsciously) that an event has occurred and appraises the event in terms of novelty, valence, goal relevance, and agency (see Ellsworth & Scherer, 2003, for a review of these appraisal processes). However, the affective quality attributed to the antecedent event is fundamentally different in these three cases.

When there is an unknown cause of a stimulus event, it could be that a person experiences a feeling of qualia—an instance of an emotionally-laden event where the experience is "like" something (e.g., like a summer breeze, like a leaf falling from a tree, like the smell after it rains). In this case, the language and concepts used to characterize the event are often metaphors and cannot be defined without reference to an external event. This idea would fit the explanation of the feelings of tension and relaxation experienced in relation to scale-degree qualia (Huron, 2006). Qualia are also experienced through the sense of smell—something smells "like an orange" or "like a rose."

If the cause of the antecedent event is known, then the experiencer is likely able to effectively appraise (or reappraise) the event and understand its origin.

Finally, if the experiencer attributes the wrong cause to the antecedent event, they will misattribute the affect to the wrong source. They will still (re)appraise the event as belonging to a stimulus, but the affective information will be linked to the wrong origin. A classic example of misattribution comes from Dutton and Aron (1974), where participants misattributed the lightheadedness of walking across a wobbly bridge to feelings of attraction toward an experimenter. The emotion the participants felt was real, but it was attributed to the wrong source. Misattribution could also be a potential source of musical emotion. It has been argued that musical emotions must be attributed to some sort of "other"—whether this other is the performer or composer, an imaginary being, or the person themselves is not pertinent to this discussion (see the special issue of Empirical Musicology Review on music and empathy, 2015). However, it seems fairly clear that a person does not experience the vibrations coming from a string as the source of emotion. Instead, the person misattributes the source of the emotion to another agent. For example, if a piece of music makes a person feel afraid, they are not afraid of the vibrating violin string. Instead, they are afraid because the music could remind them of a scene in a scary movie or because the sounds are consistent with the sounds of something approaching.


Emotional Episode

As shown in Figure 2.2 (page 64), an emotional episode may consist of four main activities that tend to be present in all models of emotion: behaviors (including action tendencies), subjective feeling states, physiological changes, and feelings of core affect (an arousal and valence combination). All of these activities can modify the experience of the others. For example, a person's physiological changes can affect their subjective feelings as much as their subjective feelings can affect their behaviors. In terms of music, this can account for many of the emotional experiences that people report (Sloboda, 1991; Zentner, Scherer & Grandjean, 2008): tears/a lump in the throat, chills, and feeling like you want to cry.

All four of these processes may interact with each other (shown by the curved lines in Figure 2.2). All four components of an emotional episode (behaviors, subjective feeling states, physiological changes, and feelings of core affect) may be subject to changes made by cognitive processes. There are two major kinds of cognitive processes at work: (1) ones of which we are unaware (unconscious) and (2) ones of which we are aware (conscious). Unconscious cognitive processes can include factors such as statistical learning, enculturation, expectation (e.g., schematic memory), and distal functions (e.g., you feel good when you eat because you need to eat to survive), among others. Cognitive processes of which we are conscious include autobiographical memories, past associations, expectation (e.g., episodic memory), and proximal motivations (e.g., you feel good when you eat because it tastes good), among others.

Conscious and unconscious processes may interact with each other (as indicated by the double arrows in Figure 2.2). Similarly, both conscious and unconscious cognitive processes can affect behaviors, subjective feelings, physiological changes, and core affect. The latter four activities can also affect your cognitive processes, leading to new memories and playing into your expectations about what you should feel the next time you are in this state.

Finally, language will likely mediate a person's emotional experience. A person can experience an emotion without labeling it (i.e., they can feel sad), but they must be able to label it with language in order to have a meta-emotional experience (i.e., they can think "I am feeling sad right now"). The language a person applies to an emotion, in turn, depends on that person's emotional granularity and his or her emotional vocabulary (see Chapter 10 for more details). People can be valence-focused (i.e., they make strong distinctions between "I feel pleasant or unpleasant") or arousal-focused (i.e., they make strong distinctions between "I feel relaxed or aroused"), but people are usually not both (Barrett, 2004). The different states that people experience depend to some extent on their emotional vocabulary (Barrett, 2004), especially in terms of arousal. Therefore, the mediating effect of language will depend on a person's emotional granularity and emotional vocabulary.

Once a person has access to a meta-emotional experience, he or she can consciously deliberate about how he or she is feeling and/or regulate the emotional experience through many tactics. Language, in other words, may be what makes meta-emotional experiences ("I am feeling sad") possible, and these experiences in turn open the door to emotion regulation and conscious deliberation that further modify the emotional episode.
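To make the shape of the model concrete, the sketch below renders the distinctions just described (three antecedent cause types, four interacting episode components, and language-mediated meta-emotion) as a small Python program. It is a minimal illustration only; all identifiers, such as CauseType and EmotionalEpisode, are my own hypothetical choices and are not part of any published implementation of the model.

```python
# A minimal sketch of the preliminary emotion model (Figure 2.2).
# All identifiers are hypothetical and chosen for illustration only.
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Optional


class CauseType(Enum):
    """The three kinds of antecedent events the model distinguishes."""
    UNKNOWN = auto()  # affect cannot be attributed to a source (qualia-like)
    KNOWN = auto()    # affect is correctly attributed to its source
    WRONG = auto()    # affect is misattributed (cf. Dutton & Aron, 1974)


@dataclass
class EmotionalEpisode:
    """The four interacting components present in all four emotion camps."""
    behaviors: list = field(default_factory=list)          # incl. action tendencies
    subjective_feelings: list = field(default_factory=list)
    physiological_changes: list = field(default_factory=list)
    core_affect: tuple = (0.0, 0.0)                        # (valence, arousal)
    label: Optional[str] = None                            # language-mediated

    def label_with_language(self, vocabulary, candidate):
        """A meta-emotional experience ('I am feeling sad') requires a label;
        whether one is available depends on the listener's emotional
        vocabulary, i.e., their emotional granularity."""
        if candidate in vocabulary:
            self.label = candidate
            return True
        return False


# Usage: a listener hears a grief-like passage and misattributes its source.
episode = EmotionalEpisode(
    behaviors=["tearing up"],
    subjective_feelings=["sadness", "compassion"],
    physiological_changes=["slowed heart rate"],
    core_affect=(-0.6, -0.4),  # negative valence, low arousal
)
cause = CauseType.WRONG
if episode.label_with_language({"melancholy", "grief", "sad"}, "grief"):
    print(f"Meta-emotion: 'I am feeling {episode.label}' (cause: {cause.name})")
```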


An Example: Musical Grief

In order to understand how the model works, we will examine the case of musical grief. In previous music research, the perception and induced feeling of sadness, compared to other emotions, has held a special place. Many researchers have wondered why people enjoy listening to sad music. Three contradictory facts exist regarding sad music: (1) sad music makes you sad; (2) people enjoy listening to sad music; (3) people do not enjoy being sad. Because of this apparent contradiction, many philosophical debates and empirical studies have addressed the "sadness paradox." I will attempt to pose a (partial) solution to this paradox through this preliminary emotion model. Imagine a person is listening to Barber's Adagio for Strings. She has heard the piece before, but cannot help feeling an overwhelming sense of sadness, mixed with a curious urge to be compassionate. The antecedent event would either be an unknown cause or a wrong cause. For example, she might think that she is feeling sad because she is remembering a time when this piece was playing as she ended a relationship, or because the music represents an "other" to which her compassion is directed. In this case, she would perceive the music, appraising its relevance to her situation, but misattributing the affective feeling to a wrong cause.

In the emotional episode, she might notice that her core affect has negative valence and low arousal, but that there is a mixed subjective feeling of positive valence as well. She may tear up, feel an action tendency to help others, and her heart rate may slow. Because she has heard the piece before, her memory will tell her which musical events to expect next in the piece. However, she may still notice that some of the harmonies violate common expectations. She will use her understanding of Western scales and tuning systems in order to predict how these harmonies may resolve.

Finally, she may note her feeling of mixed valence and low arousal and use her mental linguistic concepts of emotion to recognize that she is feeling longing or grief. If she has low emotional granularity, she might instead simply note that she is feeling anxious and sad. She can then note that she does not like feeling this grief-like emotion and regulate her emotional experience by reminding herself that it is "just music."

In summary, when listening to a grieving piece of music, a person may be likely to misattribute the source of the emotion—he or she will believe that the music is the cause of their emotion. The emotional experience can consist of mixed feelings. Some of the feelings may come from the person’s core affect, namely, negative valence and low arousal. Mixed feelings could arise due to sources such as emotional contagion (i.e., the person will feel sad from the music), ethological signaling (i.e., the person will feel compassion in response to the grief signal), interoceptive awareness (i.e., the person will note that they are tearing up), and learned associations (i.e., the person could feel nostalgic from remembering the first time they heard the piece in a college course). All of these mixed feelings can be accounted for through interactions of cognitive processes, subjective feelings, physiological changes, and behaviors.

Of course, this simple preliminary model should be examined with caution. The model needs to be modified to include insights from embodied/enactive cognition, dynamical systems theory, developmental studies, and predictive processing, among other fields of research. The intention in proposing this model was simply to highlight the kinds of factors to which future music and emotion researchers should pay attention, not to fully explain how emotion is experienced or perceived by a listener.

Summary

In this chapter, I have summarized the main tenets of four psychological theories of emotion. I then highlighted how music scholars conform to and deviate from these psychological theories. We saw that music scholars generally agree with the main tenets of the psychological theories, but that they propose important additions to the psychological models. I created a dimensional graph of where music scholars lie on a continuum from basic emotions and appraisal theory to psychological construction and social construction. Furthermore, I showed that, despite the theoretical differences among these emotion camps, experimental evidence tends to produce comparable ratings of how emotion can be interpreted by listeners. Finally, I proposed the type of theoretical model that would benefit the field. Indeed, the future of music and emotion research lies in comparing experimental results from the perspectives presented above (basic emotions theory, appraisal theory, psychological construction, social construction, ethological signaling theory, and 4E perspectives). In the next two chapters, I will further explore the way music cognition scholars have approached emotion research. Chapter 3 presents a new database of emotion-related musical stimuli, the corpus of Previously-Used Musical Stimuli (PUMS). Chapter 4 uses the PUMS database to investigate the types of emotions studied in music research from 1928 to 2018.


Basic Emotions
  Methods: fMRI, EEG, PET, facial emotions, self-report, lesion studies, cross-species analysis
  Evolution: Emotions are innate, functional responses that evolved from our ancestors
  Brain Modules: Each emotion has a unique neural substrate (cortical or subcortical)
  Development: Emotions are innate and are present at birth or shortly after birth
  Language: Language is of peripheral interest; what makes an emotion is a central brain state
  Cultural Context: Emotions can occur in non-human species and robots; emotional states are universal

Appraisal Theories
  Methods: Self-report
  Evolution: Appraisals are innate, functional responses that evolved from our ancestors
  Brain Modules: An emotion may have a unique physiological pattern, but may not
  Development: Appraisals are innate and are present at birth or shortly after birth
  Language: We can only use language to access information about appraisals
  Cultural Context: Different situations can give rise to different appraisals; if the same appraisals are made, the same emotion will be experienced

Psychological Construction
  Methods: Self-report, disproving basic emotions
  Evolution: Each emotion is a unique instance constructed on the spot; emotional prototypes are learned through culture
  Brain Modules: The only thing that separates an emotion from the rest of cognition is its goal-relevance
  Development: The ability to develop emotional categories is innate; children learn emotional categories as they grow older
  Language: We can only use language to access information about emotions; language holds emotion concepts together
  Cultural Context: Emotions are a social reality that depends on a person's enculturation and the language one uses; emotions depend on interoceptive perception

Social Construction
  Methods: Ethnographies, emotional lexicons, fMRI, historical documents, cultural myths/legends
  Evolution: Emotions are passed down through generations by learning from others in the culture
  Brain Modules: An emotion may have a unique physiological pattern, but may not; these patterns will differ for different cultures
  Development: Parents instill values and emotional concepts into their children; enculturation varies across cultures
  Language: Language will vary across cultures, leading to different collective intentionality
  Cultural Context: Emotions are a social reality; different cultures will have different cultural scripts for emotion; new emotions can be acquired by moving to a different culture

Table 2.1. Summary of proposed characteristics of the four major emotional theories (basic emotions, appraisal theories, psychological construction, social construction)

Chapter 3: The PUMS Database: A Corpus of Previously-Used Musical Stimuli in 306 Studies of Music and Emotion

In Chapter 2, we saw that music and emotion has been studied from numerous psychological perspectives. While most of the theories presented in Chapter 2 come from the last century, writings dating back to antiquity have noted that music can induce feelings of sadness and pleasure (e.g., Levinson, 2014). Today, people throughout the world still use music as a means to engage with emotion, which can be seen in the myriad Spotify playlists with titles like "sad songs," "happy tunes," and "calm vibes."

People may listen to music as a means of escape, to experience transcendence, to produce pleasure, to regulate their own emotions, and to provide diversion (e.g., Saarikallio & Erkkilä, 2007; Schäfer et al., 2013). In scientific studies of emotion, music is often used as a medium to convey certain affective states (e.g., Eerola, 2011), to evoke feelings in listeners (e.g., Salimpoor, Benovoy, Longo, Cooperstock, & Zatorre, 2009), or to both represent and induce emotion (e.g., Hunter, Schellenberg, & Schimmack, 2010).

Despite the prevalence of music in everyday life and in behavioral and psychophysical research on emotion, there has been a lack of systematic analysis of which musical passages have been associated with specific emotions. Although there are several review papers that discuss musical stimuli (e.g., Eerola & Vuoskoski, 2013; Gabrielsson & Lindström, 2010; Garrido, 2014; Juslin & Laukka, 2003; Schubert, 2013; Västfjäll, 2002), these articles tend to concentrate on how stimuli (broadly) might affect emotion or mood, rather than summarizing features of the stimuli themselves. Furthermore, there is no current database that identifies which musical stimuli have been used in previous emotion-related studies. Rather, researchers have often summarized stimuli characteristics without providing references to the stimuli themselves. This chapter describes the creation of an online and publicly-available database of Previously-Used Musical Stimuli (PUMS), which summarizes the types of musical passages that have been used in studies of musical expression and evocation from 1928 through 2018. The PUMS database also provides a resource for researchers designing new emotion-related studies.

A number of methodological questions are likely to arise for any researcher interested in conducting emotion-related research. In particular, various questions include how to find the most suitable musical stimuli, how long the passage should be, whether an excerpt is suitable for the study or whether the full musical work is needed, whether participants should be unfamiliar or familiar with the musical samples, and which style (or genre) of music is most appropriate. Researchers may also be interested in whether these characteristics of emotional music passages differ in studies about perceived emotion or experienced emotion. The PUMS database addresses these, and other, methodological considerations. Perhaps most importantly, the PUMS corpus clearly identifies the names, composers, performers, and specific measure numbers for thousands of musical passages used in emotion-related studies so that future researchers can (1) listen to the musical stimuli that other researchers have used, (2) replicate hundreds of studies with the exact musical passages, and (3) easily identify stimuli for their own future studies.

Search Strategy

A systematic analysis of the literature on music and emotion was conducted. Discovering musical stimuli used in studies about emotion requires a multidisciplinary, broad search. Publications come from the fields of psychology, neuroscience, music theory, consumer science, marketing, and engineering (e.g., Music Emotion Recognition, MER). Accordingly, there are thousands of publications regarding this topic. For example, a cursory Google Scholar search on "music," "stimuli," and "emotion" returns approximately 111,000 articles.

Because searching every single article on the topic would be nearly impossible, I created specific search criteria to find a representative sample of the population of music and emotion studies. The goal was to limit the search findings to English-language publications in peer-reviewed journals, although I also included studies reported in conference proceedings. The searches took place between September 2017 and November 2018.

The papers chosen for this study were the result of three separate processes. First, I examined the articles in the six major reviews or meta-analyses on music and emotion cited at the beginning of this chapter (Eerola & Vuoskoski, 2013; Gabrielsson & Lindström, 2010; Garrido, 2014; Juslin & Laukka, 2003; Schubert, 2013; Västfjäll, 2002). Second, I looked through the references of the papers from the first step. Finally, I conducted searches in the following Internet-based scientific databases: Google Scholar, JSTOR, PsycINFO, and Ingenta. The following search terms were included in various combinations and truncations: emotion, music, perceived, induced, and stimuli. No limits on dates were placed. The search was stopped when an a priori limit of 650 papers was met. Overall, 654 papers were examined by the author, but only articles that explicitly included musical stimuli were included in the PUMS database. Applying this exclusion criterion resulted in a total of 306 studies involving 22,417 stimuli, which comprise the PUMS database.

Feature Selections and Operationalizations

Each of the 22,417 stimuli—from 306 representative studies—was analyzed according to the following criteria:

Designated Emotion/Mood

The specific term the researchers used to describe the emotion or mood of the musical passage was identified. For example, a researcher may be interested in which musical samples make someone feel sad or portray negative valence and high arousal. The exact terminology the researcher used was specified in the PUMS, in order to further differentiate music that causes depression versus melancholy, for instance. Instances of music-related mood and music-related emotion were both included in the database, as stimuli that portrayed or evoked these affective states were both deemed to be important. In addition, many authors did not clearly differentiate between mood and emotion in the original studies.


Length of the Sample

The length of the stimulus was noted when it was available. If a range of durations was given, the range of durations and the mean duration for that particular passage were identified.

Excerpt or Full Work

Depending on the study in question, a musical stimulus could be either an excerpt from a musical work or a complete work. When the researcher included this information in the original study, each stimulus in the PUMS database was labeled explicitly as an excerpt or a full work. A full work was defined as a complete movement; for example, Mozart's Symphony No. 40, Mvt. 1 was considered a full work if the entire movement was performed.

Induced/Perceived Emotion

One of the most common distinctions made in the music and emotion literature is the difference between how music expresses emotion (also: conveyed emotion, perceived emotion) and how music induces emotion (also: experienced emotion, felt emotion, evoked emotion) in listeners (e.g., Juslin & Sloboda, 2011). Literature on the way music conveys emotion, or how people perceive emotions in music, is often focused on the structural aspects of the composition (e.g., Hannon & Trehub, 2005; Hevner, 1935; Huron, 2008; Juslin & Laukka, 2003; Juslin, 2013a; Schubert, 2004; Sloboda & Lehmann, 2001). For example, if a passage of music emulates the sounds of humans crying (e.g., wails, breaking voice), it might be perceived as expressing sadness. When listening to this "sad" musical passage, however, a listener may experience either positive or negative feelings. A person's experienced emotions when listening to music are thought to arise through various processes (e.g., brain stem reflexes, rhythmic entrainment, associations with memory, and musical expectation) and often depend on demographic characteristics (e.g., age, sex), personality characteristics (e.g., Openness and trait empathy), and listening conditions (e.g., listening alone versus in a group) (e.g., Demos et al., 2012; Huron, 2006; Juslin, 2013b; Kövecses, 2000; Meyer, 1956). Therefore, studies of perceived emotion and induced emotion tend to rely on different conceptual theories and often employ different methodological designs.

When possible, the designation of each stimulus as belonging to a study of perceived emotion or induced emotion was chronicled. In addition to the description of the study aims and methodology, the exact wordings of participant instructions were examined in order to determine the locus to which the study referred. Options were given for induced, perceived, and both.

Type (Style) of Music

The type (style) of each musical stimulus was also noted. The wording of the experimenters was retained when possible, although some of the style information was summarized. For example, the label Western Art Music was used to describe multiple subclasses, such as Classical and Romantic. These broader classifications of style were designated and corroborated by music experts. These style designations might not be best characterized as musical genres. Future research may wish to reclassify these style designations using the clusters proposed by Pasi Saari and Tuomas Eerola (e.g., Saari & Eerola, 2014; Saari, Eerola, Fazekas, & Sandler, 2013). Additional information about the style of musical passages is also provided in the PUMS database, when possible, such as the type of instrument used in the musical recording.

Information about the Stimuli

The composer, name, performer, track number, and measure numbers (or duration markings) of the work were identified when this information was listed in the original papers.

Operationalization

The methods that the researchers used in order to find their emotional stimuli were noted whenever possible. Five broad categories were denoted: experimenter/expert chosen, previous studies, professionals asked to play/express emotions, pilot tested, and composed for study. The exact process of how the researchers chose the stimuli is also included in the PUMS database, when possible, such as the following description: "Experimenter/expert chosen: the happy music featured fast tempo, high sound level, and major mode, while the sad music featured slow tempo, low sound level, and minor mode." In the case of previous studies, the study from which the stimulus was taken was identified, when possible.

Date

The year the study was published was denoted so that researchers can examine trends over time.


Familiarity

Musical emotion studies may differ in methodology depending on whether a participant is explicitly familiar or unfamiliar with the stimuli. For example, participants may be more likely to experience emotions like chills when they are familiar with a musical passage (e.g., Salimpoor, Benovoy, Longo, Cooperstock, & Zatorre, 2009). However, if one wishes to minimize potential confounds, like episodic memory, it may be better to rely on unfamiliar stimuli (e.g., Gosselin, Peretz, Noulhiane, Hasboun, Beckett, Baulac, & Samson, 2005). When the familiarity of participants with the musical stimuli was explicitly stated in the original paper, the PUMS database includes markings of unfamiliar, familiar, or both (which indicates mixed familiarity among participants).
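As a compact summary of the coding scheme above, the sketch below shows how a single PUMS entry could be represented as a record. The field names are my own illustrative choices; the column names in the actual published database may differ.

```python
# Illustrative representation of one PUMS entry; field names are hypothetical
# and may not match the actual column names in the published database.
from dataclasses import dataclass
from typing import Optional


@dataclass
class PUMSRecord:
    designated_emotion: str            # exact term used by the original authors
    duration_seconds: Optional[float]  # None when the study did not report it
    is_excerpt: Optional[bool]         # True = excerpt, False = full work/movement
    locus: Optional[str]               # "induced", "perceived", or "both"
    style: Optional[str]               # e.g., "Western Art Music"
    composer: Optional[str]
    work: Optional[str]
    performer: Optional[str]
    operationalization: Optional[str]  # one of the five broad categories
    year: int                          # publication year of the original study
    familiarity: Optional[str]         # "familiar", "unfamiliar", or "both"


# Usage: a hypothetical record for a 30-second perceived-emotion excerpt.
example = PUMSRecord(
    designated_emotion="melancholy",
    duration_seconds=30.0,
    is_excerpt=True,
    locus="perceived",
    style="Western Art Music",
    composer="Samuel Barber",
    work="Adagio for Strings",
    performer=None,
    operationalization="experimenter/expert chosen",
    year=2010,
    familiarity="unfamiliar",
)
print(example.designated_emotion, example.locus)
```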

Conclusions

This chapter introduced a new corpus of Previously-Used Musical Stimuli (PUMS), which summarizes the selection of musical stimuli in over three hundred studies of perceived and evoked emotion. The musical stimuli represented in the database exhibit a wide range of characteristics: passages range from two seconds to 45 minutes in length, span styles from Western art music to psychedelic rock, and extend from solo piano music to music synthesized by computers. Stimuli in the PUMS include music composed by J.S. Bach, Fiona Apple, and Ella Fitzgerald, and include emotions such as fear, joy, and tenderness. Some passages were composed by music experts in order to portray specific affective states, while others were explicitly chosen by participants because they are able to induce an emotion in listeners. The PUMS database allows scholars to easily sort through thousands of musical stimuli in order to find passages that are optimized for the specific methodological considerations of their research. Furthermore, the PUMS database provides a way to critically examine the stimuli used by other researchers. Finally, the PUMS database allows researchers to identify trends in the musical stimuli used in psychological research across a period of 90 years.

The intention of this chapter was to provide readers with some idea about which stimuli might be used in order to study music and emotion. The hope is that readers will use the PUMS database when beginning a new experiment or when examining emotional trends over time. The next chapter uses the PUMS database to analyze trends present in emotion-related music research, such as the styles of music used in these 306 studies, the percentage of induced vs. perceived emotion studies, and the range of durations of over 22,000 musical stimuli.


Chapter 4: Choosing the Right Tune: A Review of Music Stimuli Used in Emotion Research

The PUMS corpus was introduced in the last chapter. This database is utilized in the current chapter in order to create a literature review of scientific music and emotion studies. As seen in the PUMS database, research dating back to the 1920s and 1930s has explored how music is able to express emotions such as sadness, happiness, and fear (e.g., Hevner, 1935, 1936; Sherman, 1928). Furthermore, researchers in a number of areas, including music theory and composition, have each offered unique contributions to the field. Substantial progress has been made in this area of music cognition, such that several conjectures for how music is able to convey and evoke emotions in listeners have been thoroughly examined.

Because there exist thousands of articles and books published with regard to music and emotion, review articles of these studies are important resources to researchers interested in this area. To my knowledge, there are six major reviews that have been published on the topic of music and emotion:

(1) Västfjäll (2002) examines how mood can be evoked from music-listening experiences. In particular, he discusses the Musical Mood Induction Procedure (MMIP), a popular technique in the psychology literature around the 1980s and 1990s. The MMIP is a procedure where participants are told to listen to music with the instruction that the music will change their experienced mood state (typically "depressed" and "elated"). Västfjäll examines the methodology and the stimuli used in this procedure to give a comprehensive picture of music's emotional evocation properties.

(2) Juslin & Laukka (2003) survey literature regarding how musical performance expresses emotion, in comparison to the communication of emotions in vocal speech. The authors provide one of the few meta-analyses on music and emotion to date, as they compare findings across 145 studies from the speech and music literatures. The authors find that an evolutionary perspective on emotional speech is consistent with some of the findings of music's expressive power. For example, sad music tends to mirror sad speech, and therefore a single explanatory code can be used to describe how music and speech are expressive of emotion.

(3) Gabrielsson & Lindström (2010) survey how musical performance and composition tend to be associated with emotional expression in music. The authors meticulously chronicle the effects of features like tempo/speed, intensity/loudness, pitch, and timbre/spectrum. They discuss how more typical "musical" features (like mode and harmony) often depend on the musical context, rather than acting in isolation. Furthermore, they discuss how more extreme examples of musical features tend to result in emotional effects, whereas milder examples of these musical features are unable to affect the listener's emotional response.

(4) Eerola & Vuoskoski (2013) review the research approaches and emotional models that have been used to study music and emotion writ large. The authors found that most studies used some variation of a discrete or dimensional model, that many studies rely on Western Art Music samples, and that there has been an increasing variety of methodological designs, including self-report, biological approaches, developmental approaches, cross-cultural approaches, and music analysis approaches.

(5) Schubert (2013) compares different loci of emotions (felt emotion and expressed emotion) in 19 studies. He found that expressed emotions tend to have higher ratings of emotional intensity than experienced emotions and that demographic characteristics and musical structure influence both perceived and felt emotions. Importantly, he calls for a more systematic use of emotional terminology in the literature.

(6) Garrido (2014) summarizes the distinction between mood and emotion, how researchers utilize excerpts in these two affective states, and the different time frames used for measuring affective states. This is one of the only papers that differentiates mood and emotion, which is an important distinction that should be noted in future research questions.


Despite these important reviews, each of which covers music and emotion with a different perspective, there has been a lack of systematic analysis of the actual musical stimuli used in emotion research. Although several of these review papers discuss musical stimuli, they concentrate more on how stimuli (broadly) might affect emotion or mood, rather than characterizing features of the stimuli themselves. The current chapter aims to supplement these major reviews with a survey of the types of stimuli used in previous research of musical expression and evocation and a discussion of emotional language that clarifies how emotional terminology has been utilized in the literature. As the ability to identify and utilize appropriate musical samples is a task that all experimenters face, it is crucial to identify both good research practices and general trends of stimulus choices. The validity, reliability, and power of the study to find an effect are all directly affected by stimulus choice.

Accordingly, there are two central aims to this chapter. First, I enumerate which types of musical passages have been utilized in music and emotion research. Second, I explore how music-related emotion has been operationally characterized by various researchers. In short, the goal of this chapter is to present readers with a summary of how musical emotional stimuli have been used in the past, with the intention of facilitating the choice of stimuli in future studies. Specifically, I examine how stimuli have been used from 1928 to 2018, showing that there are certain trends in music research that come and go. By examining the relationship among methodological factors and types of musical stimuli used, I highlight the importance that study authors give to specific combinations of methodological and musical forms. The review does not cover the difference between mood and emotion, how music compares with speech, or performance factors that imbue music with emotional meaning. It should be noted that the majority of the stimuli are from Western music and, therefore, cross-cultural comparisons must be left for future research.

In conducting a review of stimuli, it is important to acknowledge the kinds of questions that are likely to arise. In particular, various questions arise regarding the most suitable stimuli:

(1) Should one make use of extant musical recordings or create more controlled stimuli that are composed specifically for the study?

(2) What is the optimum duration of a stimulus; should stimuli be long or short? Additionally, when using existing excerpts, should they be excerpts from longer works or the full works?

(3) Does the style or genre of the music matter?

(4) Should one avoid music that is familiar to the participants?

(5) Finally, of critical importance is the question of how to operationalize the mood or emotion conveyed or evoked by a particular passage. How have other researchers operationalized emotion and mood in past studies? For example, how have other researchers operationalized an emotion like "sadness"?

Throughout this chapter, I will address each of these questions in turn, examining how researchers have utilized emotion-related musical samples in the past. Based on these findings, I conclude with some limitations of past studies, in an attempt to better prepare future studies of music and emotion.


Methodology

In order to investigate how musical stimuli have been used in past research, I made use of the Previously-Used Musical Stimuli (PUMS) database, first presented in Chapter 3. Recall that the PUMS database is a publicly-available, online database that lists 22,417 musical stimuli that have been used in emotion-related research. In total, the PUMS corpus lists the published information from 306 studies on music and emotion. To create the PUMS database, I coded each musical stimulus used in these studies according to various criteria: its designated emotion and how it was operationalized, its length, whether it is an excerpt from a longer work, and whether the passage has been used in studies about perceived or induced emotion.

Analyzing Features of Emotion-Related Musical Stimuli

Descriptive statistics are presented for each of the coded features in the PUMS database, including the emotion/mood of the stimuli, the composer, passage name, track information, length, whether it was an excerpt or full work, whether it was used to convey or evoke emotion, the level of familiarity of the participants with the stimuli, the type (style) of music, and how the researchers operationalized the emotion.

In general, I found that excerpts exhibited a range of lengths from shorter than 10 seconds to longer than 10 minutes (detailed later). Several works were excerpted in different ways by various researchers, and many researchers simply chose works that had been used in previous studies. The works are heavily biased towards Western popular and art music genres, although stimuli from film music and other genres were also used. In addition to naturalistic recordings of music, some excerpts were synthesized by computers.


Familiarity

I collected basic descriptive statistics regarding the stimuli when the relevant information was noted in the original study. When the information was available, I found that 55% (2,557) of the stimuli were explicitly used because they were unfamiliar to participants, 35% (1,619) were used because they were familiar to participants, and 10% (482) of studies included both familiar and unfamiliar stimuli. Musical familiarity is important because a participant is likely to have other associations with a familiar stimulus (e.g., personal or movie references), and any emotional response to the stimulus may be colored by these associations. When studying experienced emotion, however, using familiar music is often preferred, as the researcher can better examine how a person's physiology changes due to intense emotional responses, rather than studying (presumably) weaker emotional responses to experimenter-selected music.

Excerpt/Full Work Designation

I additionally found that 55% (10,832) of the stimuli were musical excerpts or passages and 45% (8,873) were full musical works. Short musical excerpts allow a researcher to study music that is more likely to be affectively homogenous—meaning that the music may portray (or evoke) a single emotion. These short excerpts may therefore exhibit internal validity. Longer excerpts make it impossible to know which musical event may evoke a particular affective response. However, longer excerpts or full works become important when studying experienced emotion or mood, as it takes more time to induce and maintain an affective response in a listener (e.g., Eerola & Vuoskoski, 2013). Furthermore, listening to longer works may allow the listener to respond in a continuous fashion, where their responses can be measured over time. For example, Schubert (2001) examines a person's continuous response to a piece of music, allowing him to investigate how a participant emotionally responds to different musical events without varying the genre or composer.

Duration

The works ranged from 2 seconds to 45 minutes in length. In general, most stimuli were very short, with 50% (4,237) of the stimuli being less than 30 seconds in duration. 28% (2,381) of the stimuli were between 30 and 59 seconds, 9% (769) were between 1 and 2 minutes in length, 2% (165) were between 2 and 3 minutes in duration, and 8% (704) were between 3 and 4 minutes in duration. Pieces 4 minutes and over (in one-minute increments) each represented 1% or less of the sample. As mentioned above, there are different advantages to using stimuli of various durations. The wide range of stimulus durations in the PUMS database shows that the field may be using incommensurate techniques to study emotions with different theoretical or empirical aims.
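For readers who want to reproduce this kind of breakdown, the sketch below bins stimulus durations into the categories reported above. It assumes a hypothetical CSV export of the database ("pums_stimuli.csv") with a "duration_seconds" column; the real file and column names may differ.

```python
# Hedged sketch: binning stimulus durations into the categories reported above.
# The file 'pums_stimuli.csv' and the column 'duration_seconds' are assumptions
# about how the database might be exported.
import pandas as pd

pums = pd.read_csv("pums_stimuli.csv")
durations = pums["duration_seconds"].dropna()

bins = [0, 30, 60, 120, 180, 240, float("inf")]
labels = ["<30 s", "30-59 s", "1-2 min", "2-3 min", "3-4 min", ">=4 min"]
counts = pd.cut(durations, bins=bins, labels=labels, right=False).value_counts()

print(counts.sort_index())                                  # raw counts per bin
print((counts / counts.sum() * 100).round(1).sort_index())  # percentages
```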

Induced or Perceived

As mentioned in Chapter 3, there is a vast difference between induced and perceived emotions. It is generally accepted that people perceive and recognize emotion through different mechanisms than the way they experience emotion (Huron, 2015; Juslin, 2013; Juslin & Laukka, 2003; Zentner et al., 2008). Accordingly, when information was available about whether the study investigated induced or perceived emotion, I found that 74% (13,895) of the studies focused on perceived emotion, 24% (4,607) focused on induced emotion, and only 2% (316) focused on both perceived and induced emotion. The fact that there are many more perceived emotion studies than induced emotion studies is likely a testament to a number of Music Emotion Recognition (MER) studies, which comprise large numbers of stimuli. MER studies are part of the Music Information Retrieval (MIR) realm, where researchers use mathematics and computational modeling to predict which emotion a certain passage of music is likely to convey. In these MER studies, computer models are trained on "ground truth" data, where musical passages have prior emotional labels—typically participant-given labels. Then, new, unclassified musical passages are given to the model and the algorithm predicts which emotion each passage is likely to represent. An "amount of correctness" (or "amount of variance explained") quantifies how well a particular model performs. These studies rely on thousands of stimuli. Excluding some of these larger studies (i.e., Eerola, 2011; Schuller et al., 2010; Weninger et al., 2013), the number of stimuli in perceived and induced emotion studies is closer to equal, with 4,564 stimuli used in induced emotion studies and 8,163 stimuli used in perceived emotion studies. The fact that perceived emotion seems to take precedence over induced emotion could mean that the mechanisms behind perceived emotion are better studied or understood. Future research might focus on induced emotion, although such studies are notably complicated by the fact that experienced emotion depends on demographic information, personal associations, familiarity with the stimulus, and so on.
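The MER workflow just described can be illustrated with a few lines of scikit-learn. The features and labels below are randomly generated stand-ins for real acoustic features and participant-given "ground truth" labels; the point is only to show the train-then-predict structure and the accuracy metric that underlies the "amount of correctness."

```python
# Illustrative MER pipeline: train on labeled passages, predict new ones.
# Features and labels are random stand-ins for real acoustic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # e.g., [tempo, mean pitch, loudness] per passage
y = rng.choice(["sad", "happy", "angry", "relaxed"], size=200)  # given labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# The "amount of correctness" corresponds to a metric such as classification
# accuracy (or, for continuous ratings, variance explained by a regression).
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```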


Emotional Terms

The particular terms that researchers used to describe the stimuli are important for theoretical and practical reasons. Theoretically, stimuli expressing the same affect (e.g., sadness) may impact a listener through similar mechanisms. These stimuli may additionally share similar music-theoretic or structural characteristics. It is natural to compare stimuli of the same affect category. If stimuli with different characteristics are summarized using a single term, however, problems may arise when comparing statistics across stimuli of this class. As discussed in Chapter 2 and Chapter 10, I believe that using more distinct and nuanced ("emotionally granular") terms may benefit the field as a whole. Pioneers of the field of music psychology, such as Kate Hevner and Malcolm Rigg, found that musical expression of emotion can be broad and imprecise, such that different listeners do not always agree on more exact emotion characterizations. Another possibility, however, is that the current definitions of music-related emotions do not provide enough information to describe exactly what is experienced subjectively or semantically. Participants may agree on more nuanced terms, provided that they are told specifically what a researcher means by a possibly nebulous term such as melancholy. In other words, the field of music and emotions could suffer from semantic underdetermination.

In the PUMS database, I found that there were 114 emotions listed, suggesting that researchers are using some degree of nuance in differentiating emotional classes. However, of the 114 emotions listed, 23% (2,013) of the stimuli were deemed sad, 19% (1,614) were happy, 10% (869) were angry, and 9% (775) were relaxed. Furthermore, 8% (731) pertain to chills/pleasure and 4% (332) pertain to fear. Chills (no associated pleasure) made up 4% (318) of the sample, peace (172) and groove (148) each made up 2% of the sample, and negative valence (119) made up 1% of the stimuli. This means that over half of the stimuli in music-related emotion studies concentrated on only three emotions: sadness, happiness, and anger. In addition, among the emotions that were used more than one percent of the time, only nine common terms are represented (sad, happy, anger, relaxed, chills/pleasure, fear, chills, peace, and groove). Also of interest is the fact that the term tender appeared only about 1% of the time; this is surprising, given that tenderness is sometimes considered to be a primary emotion of music (e.g., Juslin, 2013; Horn & Huron, 2015). A complete list of the emotion terms is provided in Appendix A, Table A.1 (page 306).

Operationalization of Emotion

Another important choice a researcher makes when designing a study on music and emotion is how to choose the particular emotional stimuli. If a researcher wants to study the difference between sub-types of sad music, for example, they must choose the appropriate stimuli. There are many ways of choosing these stimuli. Some of the common ways of operationalizing emotional stimuli include expert opinion, pilot testing, or asking professional musicians to express emotions when playing a stimulus. Different operationalizations of emotional stimuli (such as how researchers define and select sad music) will impact the results of a study, and each method has its pros and cons. Using previously-validated materials has advantages, such as convenience and the credibility of having appeared in a peer-reviewed journal. By relying on the way others have defined sad music, however, a researcher may be using an unintentionally biased sample. For example, I have conducted research that suggests that there is a difference between melancholic and grief-like music (see Chapters 5-8).


Melancholic music tends to be quieter and lower in pitch, and contains narrow pitch intervals, whereas grief-like music tends to contain sustained tones, gliding pitches, and harsh timbres. However, in many studies, these different types of musical emotions are both labeled as sad. Researchers who rely on others' definition of sad music may accidentally use a grief-like stimulus when they are truly interested in melancholy-like stimuli. Nevertheless, whether an expert chooses a sad stimulus because it is slow and in the minor mode, or because a performer attempted to convey sadness in a rendition of a piece, it is important to understand the way researchers defined and selected their emotional stimuli.

In the PUMS database, the way the researchers operationalized the emotion of the stimuli is broken into six categories: previous studies, experimenter/expert chosen, pilot tested, composed for study, participant chosen, and professionals asked to play/express emotion. The analysis of the PUMS database suggests that 37% (8,046) of stimuli were chosen because they were used in past studies, whereas 46% (10,051) of the stimuli were selected because of experimenter or expert opinion. Seven percent (1,547) of the stimuli were participant selected and only 3% (744) were pilot tested. An additional 1% (206) of the stimuli were specifically composed for the study, and 1% (175) were performed by professionals (e.g., drummers, singers, pianists, guitarists) who were asked to play or express certain emotions. For the list of stimuli that were chosen because they were used in past studies, the reader is referred to Appendix A, Table A.2 on page 308. In general, I recommend that future studies rely more on pilot testing and less on previous studies (see Chapter 11 for specific recommendations).


Style of Music and Composers

The type of music was also recorded (see Appendix A, Table A.3 on page 310). 57% (10,705) of works were classified as popular music, 10% (1,910) as Western Art Music, and 7% (1,397) as film or film-like music. The large focus on popular music can be attributed to several large studies, each with over 2,000 stimuli. Three of these studies (Eerola, 2011; Schuller et al., 2010; and Weninger et al., 2013), which together accounted for 8,448 stimuli (7,944 of which were labeled as popular music), were removed in order to observe the distribution of styles without them. Without these three studies, the predominance of popular music remains, with 4,081 stimuli, compared to Western Art Music (WAM), which contained 1,564 stimuli, and film music, which contained 807 stimuli. In addition to counting the number of stimuli that belong to a certain genre of music, another way of examining the styles of musical samples is to investigate the number of studies that made use of these style classifications. In the PUMS database, 103 studies used Western Art Music as their stimuli, compared to 27 studies that used film music and 28 studies that used popular music. The focus on Western Art Music is notable. One of the potential confounds of conflating studies that have used popular music and Western Art Music is that popular music may be more likely to contain emotional lyrics than the typically-instrumental WAM. Popular music may also employ a different harmonic vocabulary compared with WAM (de Clercq & Temperley, 2011), so it becomes more difficult to compare the specific musical features that may be giving rise to participants' emotion. The increasing use of film music is an important trend because this music may be composed specifically to help convey or induce an emotion in listeners (during film viewing) that corresponds with the film scene. The small number of cross-cultural studies (i.e., studies that use music from non-Western cultures) is notable. Future research should address this shortcoming of the PUMS database.

In the PUMS database, the ten most popular composers were Mozart (168), Beethoven (135), J.S. Bach (129), Bernard Bouchard (112), Chopin (91), Schumann (69), Mendelssohn (62), Schubert (57), Brahms (49), and Marsha Bauman (48). Bernard Bouchard and Marsha Bauman composed works specifically for studies on music and emotion. By comparison, the ten most popular composers in music theory journals, in rank order (according to Duinker & Léveillé Gauvin, 2017), are Schoenberg, Beethoven, Brahms, J.S. Bach, Mozart, Webern, Stravinsky, Wagner, Schubert, and Chopin. Although there is overlap between these two lists, the music theory journals tend to concentrate more on later composers, whereas musical emotion studies tend to rely on Western Art Music composers only through the Romantic period.

Musical Mood Induction Procedure

A series of studies utilized the Musical Mood Induction Procedure (MMIP) and included over 100 musical stimuli. These studies investigated a wide range of topics (see Appendix A, Table A.4 on page 311). However, most of them relied on only a few musical works. In particular, many of these studies used Delibes' Coppelia to induce happy or elated moods, while they used Prokofieff's Russia under the Mongolian Yoke (notably played at half speed) to induce sad or depressed moods. It seems that a few seminal studies (e.g., Clark, 1983 and Clark & Teasdale, 1985) introduced these excerpts, which were subsequently referenced as the source of musical stimuli by later studies using the MMIP.

Interactions

Duration and Perceived/Induced

Eerola & Vuoskoski (2013) found that passages of around 15 seconds are ideal for studies of perceived emotion, whereas passages over about 30 seconds are better suited for studies of induced emotion. It takes a certain amount of time to initiate an emotional response; therefore, longer stimuli are more appropriate for the induced emotion locus. Given this information, I also examined the length of excerpts in studies of induced versus perceived emotion. A chi-square test suggests that the distributions of durations (collapsed into < 30 sec, 30 sec – 1 min, > 1 min) differed across induced, perceived, and both induced/perceived conditions (X2 = 3150.2, df = 4, p < 0.01). Both induced and perceived studies had more stimuli of less than 30 seconds than of any other duration. However, whereas 65% of the stimuli in perceived studies were less than 30 seconds (3,178) and 98% were less than one minute (4,757), only 27% of the stimuli in induced studies were less than 30 seconds (776) and only 45% were less than one minute (1,314). Almost all of the perceived studies used pieces under three minutes in length; however, about 29% of the stimuli in induced studies were longer than 3 minutes (833). This finding shows an awareness in the music community that it takes longer to evoke an emotion than to perceive an emotion in music. Nevertheless, many experiments use stimuli that may be too short to reliably induce an emotion in listeners. The exact durations for induced/perceived musical stimuli are summarized in Appendix A, Table A.5 on page 317.

Duration and Style

I also examined the duration of excerpts with regard to genre (Western Art Music, film or film-like music, popular music, and jazz). A chi-square test comparing these genres to the collapsed time periods (< 30 sec, 30 sec – 1 min, > 1 min) suggests that the distributions of durations differed across styles of music (X2 = 411.72, df = 6, p < 0.01). One finding of note is that film music, popular music, and jazz almost exclusively relied on stimuli under 30 seconds: 72% for film (577), 75% for jazz (125), and 80% for popular music (941). Western Art Music, in contrast, had more of a spread of durations. Only 48% (556) of the Western Art Music stimuli were less than 30 seconds, with 26% (300) lasting between 30 seconds and 1 minute, and 26% lasting longer than one minute. This could be due to the typically long full works of Western Art Music being played, but could also imply that the questions asked in the studies depended on longer samples of music.

Perceived/Induced and Operationalization

The relationship between induced/perceived studies and the operationalization of emotions was also investigated. A chi-square test suggests that the operationalizations of emotion were used differently across perceived, induced, and both perceived/induced studies (X2 = 4881.9, df = 12, p < 0.01). Whereas studies of perceived emotion primarily relied on stimuli from previous studies (51%, 7,009) and experimenter/expert chosen stimuli (38%, 5,187), studies of induced emotion relied more on experimenter/expert chosen stimuli (35%, 1,533) and participant chosen stimuli (30%, 1,343) than on previous studies (19%, 826). These different trends between induced and perceived emotion may speak to the importance certain authors place on familiar (participant-selected) stimuli in order to induce an emotion in listeners. Certainly, studies examining frisson experiences or music-induced sadness may wish to rely on familiar stimuli that evoke a specific emotion in the listeners. Relatedly, a twice-as-large proportion of stimuli was pilot tested before the main experiment in induced emotion studies than in perceived emotion studies (6%, 267 for induced emotion versus 3%, 460 for perceived emotion). It could be that studies of perceived emotion focus on previously-validated stimuli, rather than pilot testing, because some authors are writing algorithms to recognize emotion in music (MER studies). Perhaps, however, previously-validated stimuli are also used because the experimenters wish to regulate certain parameters, such as major mode and fast tempi for happiness. An additional fact of note is that all of the stimuli performed by professionals asked to play a work with a certain emotion were studied in a perceived emotion context.

Perceived/Induced and Familiarity

A chi-square test suggests that the familiarity of the participants with the musical samples varied across perceived, induced, and both perceived/induced studies (X2 = 792.4, df = 4, p < 0.01). An analysis of the relationship between the use of familiar music and the perceived versus induced methodology shows that induced studies use more familiar music (50%, 1,367) than unfamiliar music or music with mixed familiarity, whereas perceived studies tend to rely on unfamiliar music (83%, 1,270). Studies that measured both perceived and induced emotions used familiar music only 22% of the time (28) and used unfamiliar music 78% of the time (100). As noted above, this illustrates the importance of using primarily familiar music when conducting studies of induced emotion.

Emotional Categories and Familiarity

Familiarity was used differently across the emotion categories (only the nine terms used more than one percent of the time were examined). A chi-square test suggests that the familiarity of the participants with the musical samples varied across emotion categories (X2 = 2751.5, df = 16, p < 0.01). Angry music was primarily unfamiliar to participants (71%, 73). Fearful music, as well, was mainly unfamiliar to participants (85%, 170), with only 9% of fearful stimuli classified as familiar to participants (17). Happy music was familiar to participants only 2% (8) of the time and was unfamiliar to participants 75% (303) of the time. In contrast to happy music, the sad category was primarily familiar to participants (45%, 329), although this could be due in large part to the Taruffi & Koelsch (2014) paper, where participants were asked to select music that made them feel sad. Sad music was unfamiliar to participants 42% of the time (307), while participants had mixed familiarity with sad music 13% of the time (92). Music associated with chills (92%, 280) and chills/pleasure (100%, 731) was overwhelmingly familiar to the participants, whereas participants had mixed familiarity with all of the groove stimuli (148). Peaceful music was largely unfamiliar to the participants (93%, 149).


Emotional Categories and Operationalization

The number of emotional categories studied appears to vary by the operationalization used by the researchers. A chi-square test comparing the nine most common emotion terms with the operationalization categories suggests that the operationalizations of emotion differed across emotion classes (X2 = 6202, df = 48, p < 0.01). When stimuli are composed specifically for an experiment or chosen to be played by professionals, they tend to be defined by basic emotion terms. Indeed, only eight emotion terms were used to describe stimuli composed specifically for a study (anger, fear, happy, happy/sad, negative valence, neutral, peace, sad) and only twelve terms were used for stimuli when professionals were asked to play/express emotions (anger, anger/hate, fear, happy, joy, neutral, pain, sad, solemn, sorrow, surprise, tender). Similarly, participant-selected stimuli also relied on a limited number of affective terms (anger, chills, chills/pleasure, disgust, fear, happy, joy, lively, lump in throat/tears, relaxed, sad, surprise). When stimuli were chosen by the experimenter, however, a broader spectrum of affective terms was used, with 54 distinct terms appearing in the PUMS corpus for experimenter/expert chosen stimuli and 40 terms appearing for previous studies. The 54 terms for the experimenter/expert chosen stimuli included anger, arousal/valence, arousing, chills, chills/happy, chills/sad, comforting/relaxing, content, depressed, energizing, excitative, excited, expressive, exuberant, fear, groove, happy, happy/angry/agitated, happy/sad, humor, irritation, joy, joy/pleasant, longing, low arousal, negative tension, negative valence, negative valence/high arousal, negative valence/low arousal, negative valence/low/high arousal, negative/positive valence/low arousal, negative/positive valence/low/high arousal, neutral, nostalgia, peace, pleasant, positive energy, positive tension, positive valence, positive valence/high arousal, positive valence/low arousal, positive valence/low/high arousal, relaxed, sad, scary, sedated, solemn, spiritual, surprise, tender, tension, and unpleasant.

The 40 terms for previous studies included agitated, anger, anxiety, arousal, arousal/negative valence, arousal/valence, arousing, calm, comforting/relaxing, depressed, elated, fear, fear/threatening, happy, happy/sad, joy, low arousal, negative tension, negative valence, negative valence/high arousal, negative valence/low arousal, neutral, peace, pleasant, pleasant/joyful, positive energy, positive tension, positive valence, positive valence/high arousal, positive valence/low arousal, relaxed, sad, sad/beautiful, scary, serene, stimulating, tender, tension, tranquil, and unpleasant.

Perceived/Induced and Emotional Categories

As mentioned above, there are vast differences in the methodology and theoretical aims of induced and perceived emotion studies. Accordingly, the emotion classes were also analyzed separately for perceived and induced studies. A chi-square test comparing the nine most common emotion terms (minus groove, which had no classifications of induced/perceived) with the classifications of studies into induced, perceived, and both induced/perceived suggests that the emotion categories differed across the type of study design (X2 = 1946.7, df = 14, p < 0.01). For perceived emotion studies and studies that included both perceived and induced emotion, sadness was the most commonly studied emotion, with happiness coming in second. Induced emotion studies were topped by chills/pleasure, with sadness and happiness coming next on the list. The induced studies had a total of 58 separate emotion categories, perceived studies had 61, and studies of both induced and perceived emotion contained 16.

Induced studies had 22 emotion terms with 10 or more stimuli and 24 terms represented by only a single stimulus. Perceived studies used 27 emotion terms with 10 or more stimuli and 15 terms represented by only a single stimulus. Studies that examined both perceived and induced emotions had 16 emotion term categories, five of which had 10 or more stimuli present. The reader is referred to Appendix A, Table A.6 (page 318) to see the order of the top 25 terms for the different methodological loci (perceived/induced emotion).

Longitudinal Trends

A chi-square test comparing the nine most common emotion terms with dates from the 1920s-1990s, 2000s, and 2010s suggests that the emotion categories differed across the decades studied (X2 = 2539.1, df = 16, p < 0.01). Sad is consistently one of the most common emotions. It represented 15% of the stimuli in the 1980s (after neutral and depressed), 31% in the 1990s, 17% in the 2000s, and 27% in the 2010s. Happiness is typically the next most commonly studied emotion, with 15% of the research in the 1980s, 23% in the 1990s, 16% in the 2000s, and 20% in the 2010s. The 2000s saw a massive spike in the study of chills, with 30% of all emotion stimuli concerning the presence of frisson experiences (whether pleasurable or not). In the 2010s, chills were studied only 2% of the time, and they were examined only 4% of the time in the 1990s. The study of chills was virtually absent in the other decades. In general, then, most research has focused on happiness and sadness, and this has remained constant throughout the decades. Although it is not surprising that these are the two most commonly studied emotions, it could be that researchers are grouping together different emotion classes under the umbrella terms happy and sad, an idea that will be discussed further in Chapter 10.

The number of emotion categories identified was 5 in the 1920s, 7 in the 1970s, 9 in the 1980s, 38 in the 1990s, 70 in the 2000s, and 51 in the 2010s (2010-2018). This trend shows a massive proliferation of emotion categories over time. This could be due to a number of factors, including the primary focus on basic emotions through the 1980s and the use of dimensional models from the 1990s onwards. It could also mean that researchers broke down a previous category of emotion (like happiness) into more than one term (like joy, tender, and peaceful). This idea will be taken up in the section on recommendations in Chapter 11.

A chi-square test comparing the study designs (induced, perceived, and both induced/perceived) across the decades 1920s-1990s, 2000s, and 2010s suggests that the type of study design varied across the decades (X2 = 2369.8, df = 4, p < 0.01). These data should be interpreted with caution, as researchers before the 1990s may not have made the distinction between perceived and induced emotion clear to their participants. All of the studies examined from the 1920s and 1930s only examined perceived emotion, and all of the studies from the 1970s only examined induced emotion. In the 1980s, 93% (53) of the studies examined induced emotions. In the 1990s and 2000s, induced and perceived emotion were studied more equally, with 51% (334) of stimuli used to study perceived emotion and 41% (265) used to study induced emotion in the 1990s, and 50% (2,324) perceived and 48% (2,258) induced in the 2000s. In the 2010s, there was a huge spike in the study of perceived emotion (84%, 11,144).

A chi-square test comparing the length of the stimulus (< 30 seconds, 30 seconds – 1 minute, and > 1 minute) across the decades 1920s-1990s, 2000s, and 2010s suggests that the length of the stimuli varied across the decades (X2 = 1775.6, df = 4, p < 0.01). In the studies from 1928-1999, there was a relatively even split among durations: 35% of the stimuli (145) were less than 30 seconds, 21% (86) were between 30 seconds and one minute in length, and 45% (188) were greater than one minute in duration. In the 2000s, most of the stimuli were shorter than 30 seconds (64%, 2,397), and 29% (1,100) were greater than one minute. In the 2010s, this trend continued, with only 13% (533) of the stimuli having durations longer than one minute.

A chi-square test comparing whether a stimulus was an excerpt or a full work across the decades 1920s-1990s, 2000s, and 2010s suggests that the type of stimuli varied across the decades (X2 = 2272.9, df = 2, p < 0.01). From 1928-2009, most musical stimuli were excerpts of longer works (75%, or 412, from 1928-1999 and 87%, or 3,458, in the 2000s). In the 2010s, this trend shifted to a roughly even spread of excerpts and full works (46%, or 6,962, were excerpts).

The familiarity of participants with the stimuli was also examined across time. A chi-square test comparing the familiarity of the participants with the stimuli across the decades 1920s-1990s, 2000s, and 2010s suggests that familiarity varied across the decades (X2 = 958.97, df = 4, p < 0.01). Before 2000, there was a roughly even spread of participant familiarity with the stimuli (25%, or 67, were familiar; 35%, or 95, were unfamiliar; and 40%, or 107, had mixed familiarity). In the 2000s, the majority of the works were familiar to participants (57%, or 1,062), while in the 2010s, most of the stimuli were unfamiliar to participants (69%, or 1,751).

The style of music (film, jazz, popular, or WAM) was also compared across decades. A chi-square test comparing the styles across the decades 1920s-1990s, 2000s, and 2010s suggests that the styles of the stimuli varied across the decades (X2 = 4384, df = 6, p < 0.01). Whereas 93% (274) of all music studied from 1928-1999 was Western Art Music, the 2000s saw a relatively even spread of genres, with 25% (384) film music, 6% (117) jazz, 27% (419) popular music, and 40% (616) WAM. In the 2010s, most of the music studied was popular (87%, 8,949).

Finally, the operationalizations across the years were analyzed. Once again, a chi-square test was conducted, comparing the operationalizations across the decades 1920s-1990s, 2000s, and 2010s. The results are consistent with the idea that the way researchers operationalized emotions varied across the decades (X2 = 9588.4, df = 12, p < 0.01). Before 2000, 38% (293) of stimuli were chosen by the experimenter or another expert, 17% (128) were pilot tested, 15% (113) were performed by professionals explicitly asked to express those emotions, 10% (76) were chosen because they were used in a previous study, 9% (67) were selected by the participants, 1% (10) were composed for the study, and the remaining 11% (82) were operationalized in two or more ways. In the 2000s, the stimuli were selected in the following ways: 23% (1,042) were also used in previous studies, 22% (990) were participant-selected works, 20% (905) were chosen by experts or the experimenters, 13% (584) were pilot tested, 2% (91) were composed for the study, 1% (62) were performed by professionals asked to play or express the emotion, and 18% (833) were operationalized in more than one way. From 2010-2018, 53% (8,833) of stimuli were chosen by the experimenter or an expert, 42% (6,928) were chosen because they were used in previous studies, 3% (490) were selected by participants, 1% (105) were composed for the study, less than 1% (32) were pilot tested, and no stimuli were performed by professionals asked specifically to express an emotion. The remaining 1% (165) of the stimuli were operationalized in more than one way.

Correspondence Analysis

A correspondence analysis was conducted on the PUMS data (also see Appendix A). Correspondence Analysis (CA) is similar to Principal Component Analysis (PCA), except that PCA primarily deals with continuous data whereas CA focuses on categorical data. The idea behind Correspondence Analysis is to summarize many variables in a smaller number of dimensions. The analysis relies on contingency tables, where the inclusion of very small counts can bias the results; for example, there is only one example of an irritating musical stimulus in the PUMS. Accordingly, data from the PUMS were categorized in the following way. The emotions examined included the five standard "basic" emotions (sad, happy, anger, fear, and tender), as well as two "dimensional" terms (negative valence and positive valence). Twelve classification variables were examined to tabulate these seven emotion categories: date (2000s and earlier, 2010s), length (less than 30 sec, greater than 30 sec), locus (induced, perceived), style (film music, popular music, Western Art Music), and operationalization (experimenter/expert chosen, previous studies, other operationalization). The chi-square test of independence indicates that the distribution of the twelve variables differed across the seven emotions (X2 = 724.24, df = 66, p < 0.01). A scree plot of the data suggested two dimensions, where the first dimension explains 48% of the variance and the second dimension explains 35% of the variance.
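For readers unfamiliar with the technique, correspondence analysis can be computed as a singular value decomposition of the standardized residuals of a contingency table. The sketch below is a generic, minimal implementation applied to a made-up emotion-by-variable table; it is not the actual PUMS analysis, and the counts are purely illustrative.

```python
# Minimal correspondence analysis via SVD of standardized residuals.
# The contingency table (emotion categories x classification variables)
# is illustrative only -- not the actual PUMS counts.
import numpy as np

N = np.array([[40, 10,  5],    # e.g., sad
              [12, 35,  8],    # e.g., happy
              [ 6,  9, 30]])   # e.g., tender

P = N / N.sum()                      # correspondence matrix
r, c = P.sum(axis=1), P.sum(axis=0)  # row and column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardized residuals

U, sv, Vt = np.linalg.svd(S, full_matrices=False)
inertia = sv**2 / (sv**2).sum()        # variance explained per dimension
rows = (U * sv) / np.sqrt(r)[:, None]  # row principal coordinates (biplot)

print("variance explained:", np.round(inertia, 2))
print("row coordinates:\n", np.round(rows[:, :2], 2))
```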

An analysis of the classification contributions suggests that the first dimension is determined largely by film music, induced emotion studies, and the date range of the studies, whereas the second dimension includes contributions from the different methods of operationalization and from Western Art Music. The emotion category contributions suggest that the first dimension primarily describes tender music, which occupies a distinctive position in this analysis. The second dimension primarily describes the more general valenced terms (negative and positive valence). Although both dimensions contain contributions from many variables, a couple of trends emerge from the biplot (see Appendix A, Figure A.1 on page 319). First of all, there is no clear differentiation of emotion by arousal or valence. However, there is some sense of specificity of emotions across the graph: while both the positive and negative valence terms lie towards the top of the y axis, more specific (granular) emotions generally lie in the lower half of the y axis. Additionally, there is a clear distinction between induced and perceived emotion studies, a clear distinction between stimuli of short and long lengths, and a further distinction between the three genres of music included in the analysis.

This biplot (Appendix A, Figure A.1) reveals several trends in the PUMS corpus. First, tender music tends to be taken from film soundtracks and chosen by the experimenters or experts. Second, fearful and angry music tends to be studied in perceived (rather than induced) emotion designs and has primarily been studied in the 2010s. Third, happy and sad music were primarily examined in the 2000s and the years preceding 2000. Fourth, more generally-labeled valenced stimuli have been used with "other" operationalizations (composed for study, pilot tested, or operationalized in more than one way). Fifth, the Western Art Music works representing the seven emotion categories included in the analysis tend to be taken from previous studies and tend to be longer than 30 seconds in duration (note that this differs from the overall trend for all Western Art Music, which is chosen by experimenters 42% of the time and taken from previous studies 33% of the time). Sixth, the popular music works representing the seven emotion categories tend to be used in induced emotion studies and tend to be less than 30 seconds in duration (note that this differs from the overall trend for all popular music, which focuses almost entirely on perceived emotion designs; a closer analysis reveals that many of the perceived study designs for popular music include other emotions, such as unsettling, peace, anxiety, unstimulating, and festive).

Stimulus Problem

Through the analysis of the PUMS database, I noticed that there have been varied approaches to sampling musical excerpts in the emotion literature. Studies use variable lengths of musical passages (excerpts in the research literature range from 2 seconds to about 45 minutes), use musicians and non-musicians as participants, conflate responses among individuals with different degrees of familiarity with the passages, and use different types of experimental conditions (i.e., ecologically-valid vs. laboratory-controlled). Additionally, operationalizations are inconsistent within and between emotion researchers. Finally, some reviews conflate results among these disparate research approaches. In other words, some large reviews or meta-analyses have noted that, although individual studies show certain trends, the trends disappear when studies are combined. I suggest that some of the inconclusive results from previous reviews may be due to the inconsistent use of emotion terms throughout the music community.

As one example, in their 2003 review of features of musical and vocal affect, Juslin and Laukka examined characteristics of sad music. When making their classification of sad music, the authors combined the following terms: crying despair, depressed-sad, depression, despair, gloomy-tired, grief, quiet sorrow, sad, sad-depressed, sadness, sadness-grief-crying, and sorrow. I suggest that if this nominally-labeled sad music were split into multiple categories (such as melancholy and grief), two or more trends might have emerged. Although Juslin and Laukka's sad music had characteristics like slow tempo, low sound level, and low pitch level, I suspect that this trend would not hold true for grief-like music. Namely, grief-like music may have fast tempo, high sound level, and high pitch level. Indeed, Warrenburg (submitted) finds that this is the case for a small sample of grief-like music. If the grief-like music were removed from the sad sample curated by Juslin and Laukka, it is possible that the remaining music would have shown an even greater propensity to be slow, quiet, and low in pitch. In other words, the conflation of melancholy and grief may have contributed to a weakening of the general trends described by Juslin and Laukka.

Another problem is that many studies make use of convenience samples by relying on curated excerpts used in previous studies. As documented in past research (Eerola & Vuoskoski, 2011; 2013), many stimuli were chosen as a representative sample of emotion in early studies and were then used in a large number of subsequent studies by the same and other researchers. The PUMS database analysis indicates that 37% of stimuli used in research studies have been chosen because they were used in previously-published papers. This trend is increasing, from 10% in the 1990s and before, to 23% in the 2000s, and to 42% in the 2010s. Using music from previous studies becomes a problem only when many studies rely on the same stimuli. In these cases, although the stimuli may have been well-chosen for an initial study, these samples of music may be (unintentionally) biased and lead to misrepresentative results in subsequent studies.

One example of this is the use of the Musical Mood Induction Procedure (MMIP). These procedures have been used in the psychology literature to study various behavioral tasks for topics like mood-congruent memory effects (see Appendix A, Table A.4 on page 311). The MMIP became popular after Clark (1983) claimed that it was more effective in inducing mood than the popular Velten Induction Procedures. The Musical Mood Induction Procedure asks participants to try to feel the mood that is suggested by certain musical works. However, it appears that very few of these studies are concerned with how the music itself aids in the perception or induction of mood states. As mentioned above, many of these studies used Delibes' Coppelia to induce happy or elated moods, while they used Prokofieff's Russia under the Mongolian Yoke (played at half speed) to induce sad or depressed moods. In addition, the MMIP places a strong emphasis on asking the participants to try to get into the desired mood. For example, participants are encouraged to think of past unhappy experiences, engage in fantasies about death, dance to the music, and think about possible future pleasant events. For the original studies, this is not a problem, but from a musical perspective, it is concerning. Given these instructions, it is impossible to know whether the music alone is able to alter a person's mood state. Additional problems from the perspective of a music researcher are that the studies indiscriminately use musicians and non-musicians and that they do not control for familiarity with the music.

In summary, it is possible that there is a kind of "stimulus problem" in the literature. Rather than simply relying on stimuli from the same well-cited studies, it is better to use pilot testing and specific operationalizations of the emotions in question (e.g., characterizing sad music as music that is quieter, slower, lower in pitch, utilizes dark timbres, and has "mumbled" articulation; see Chapters 5-8 for more about sad music).

Discussion

The current chapter aimed to examine the emotional spectrum present in previous music studies. A literature review of 306 studies on music and emotion was presented. The 22,417 passages of emotional music used in these research studies were analyzed according to the designated emotion and its operationalization, the length of each passage, whether it is an excerpt from a longer work, and whether it had been used in studies of perceived or induced emotion. I showed that the literature has relied on approximately nine emotional terms. The implications of previous research conflating multiple emotional states are profound, as the ability to discriminate different emotions affects all of the music and emotion literature, including meta-analyses.

Several interesting features of the PUMS data are worth noting. First, notice that there are similar numbers of emotion terms in both the perceived and induced studies. This trend is interesting because it has been noted that listener agreement is high for only a few emotions in perceived emotion (e.g., sadness, fear, tenderness, anger, happiness; Juslin, 2013), whereas listeners are thought to experience a vast number of emotions in response to music (see the GEMS of Zentner et al., 2008). Even if it is true that listeners can only perceive about five categories of emotions in music, my study indicates that investigators are exploring other emotions that might be able to be perceived in music.

It also appears that there are many more uses of valenced terms (positive valence, negative valence) in the induced studies than in the perceived studies. This trend could be due to the fact that more dimensional models of emotion are used in the induced emotion studies than in the perceived emotion studies (see Eerola & Vuoskoski, 2013 for more on dimensional vs. discrete models of emotion). In short, the kinds of emotions studied in perceived and induced loci matter because they have implications for the mechanisms of emotion being investigated by researchers. The kinds of emotion studied are also important because they could indicate which emotions have been thought to be successfully perceived or induced in music listeners. If an emotion is studied often, it could be that there has been success in evoking or representing that emotion for listeners. Fear, anger, peacefulness, and happiness are used much more often in perceived studies than in induced studies.

The predominance of both happy and sad music in induced and perceived studies is of particular importance, as it shows that these terms consistently dominate both kinds of study. Together, happiness and sadness make up over 42% of the stimuli in the PUMS. In general, I suggest that these two terms (happy and sad) have been used to describe an unmanageably large space of emotion. Namely, there are vast differences among the musical passages that have been labeled as happy or as sad. Work by Lisa Feldman Barrett (2017) suggests that many people do describe emotions in such a broad and imprecise manner. However, with training, one can learn to apply more specific terminology to better represent the emotional state one is perceiving or feeling (also see Chapter 10). It is possible that both researchers and participants can learn to apply more specific terminology to music-related emotional states.

While it is likely that stimuli fit broad designations such as happy and sad, it is also possible that more specificity can be used to identify this music. For example, instead of deeming a musical sample happy, it could also be considered joyful, elated, pleasant, peaceful, tender, or lively. A nominally sad passage of music may be better categorized as melancholic, grieving, anxious, nostalgic, or sorrowful (see Chapters 5-8). It is likely that there are measurable differences among the stimuli labeled as happy (and as sad). Similarly, it is possible that happy (and sad) works contain different musical structures and evoke different forms of emotion.

It is also interesting to note the differences between participant-selected and experimenter-selected stimuli. In the above analyses, I first noted how experimenter-selected and participant-selected stimuli vary across genres (66% of participant-chosen works come from Western Art Music, whereas 75% of experimenter-chosen music is popular music). Next, it was found that 91% of participant-chosen music was used in induced emotion studies, whereas 76% of experimenter-selected music was used in perceived emotion studies. In contrast to participant-selected music, experimenter-selected music was unfamiliar to participants 80% of the time. Finally, while 54 emotions were examined using experimenter-selected music, only 40 emotions were examined using participant-selected music.

The longitudinal analyses show that, unsurprisingly, the way emotion has been studied has changed from 1928 through 2018. For example, the number of emotions examined in research has blossomed in recent decades, and certain emotions (like groove and chills) were intensively studied during only a few (recent) years. The focus on induced emotion in the 1980s and 1990s has shifted to a focus on perceived emotion in the 2000s and 2010s, and the reliance on Western Art Music from the 1920s through the 1990s has given way to a new focus on popular music. Future analysis should examine whether or not these trends are due to a shift in the theoretical focus of researchers.

Conclusions

In summary, this chapter presented descriptive statistics about how research on music and emotion has utilized musical stimuli in the past. The intent of this chapter was to give readers some idea of what kinds of stimuli might be used in order to study music and emotion. The careful selection of stimuli will benefit correlational, exploratory, and experimental designs and result in greater statistical power.

At the beginning of this chapter, I asked several questions regarding how researchers should choose musical excerpts for their studies. First, I asked whether one should make use of extant musical recordings or create more controlled stimuli composed specifically for a study. In this chapter, I presented data suggesting that only about one percent of music and emotion studies have opted to use stimuli composed specifically for the study. I suggested that if familiarity with a musical stimulus would confound the results of the study, researchers should seek a composer to create stimuli for them. Composers, in turn, can examine the various operationalizations of emotion used in the PUMS database (e.g., features like major mode, fast tempo, and loud dynamics) to aid in their compositional process.

Second, I asked whether it would be better to use long or short stimuli, and whether stimuli should be excerpts or full works. I found that half of the stimuli in music and emotion studies are less than 30 seconds in duration, while only about 12% are two minutes or longer. I further found that 65% of the stimuli in perceived emotion studies were less than 30 seconds in length, with almost 98% of these perceived emotion stimuli being less than 1 minute in length. If one wishes to study induced emotion, one should make use of longer stimuli; consistent with this idea, 55% of induced emotion studies utilize musical stimuli longer than 1 minute. Although 55% of the stimuli were reported to be excerpts, only 8% of the works longer than one minute in duration were full works, suggesting that most of the full works used were very short.

Third, I analyzed the style of the music used in studies of music and emotion. Although there was a wide spread of genres in the PUMS database, ranging from Heavy Metal to Mafa to Psychedelic Trance, over half of the music used in these studies was considered to be Popular Music. A further 10% of these stimuli were Western Art Music. Despite this trend, the top composers sampled in these studies included Mozart, Beethoven, J.S. Bach, Chopin, Schumann, Mendelssohn, Schubert, and Brahms. The concentration of Western Art Music composers in the Top 10 list might suggest that the WAM sampled in the literature relies on a narrower range of music than does the popular music sampled in the literature. In fact, while there are 134 WAM composers and 204 Popular Music composers in the PUMS, 12 WAM composers account for 50% of the WAM stimuli, whereas 57 Popular Music composers account for 50% of the Popular Music stimuli. The style of the music may matter, depending on the goals of the study. For example, while 96% of Popular Music was used in studies of perceived emotion, only 38% of Western Art Music was used in studies of induced emotion. Additionally, over 99% of Film Music was unfamiliar to participants, while only 74% of Western Art Music was unfamiliar to participants.

Fourth, the familiarity of participants with the musical stimuli was examined. Although 55% of the stimuli in the PUMS were explicitly used because they were unfamiliar to participants, 35% were used because they were familiar to participants. However, while 83% of perceived emotion studies included unfamiliar stimuli, only 43% of induced emotion studies included unfamiliar stimuli. It makes sense that when studying experienced emotion, researchers may want to utilize stimuli with which participants have previous emotional connections. Consistent with this claim, 38% of familiar music was chosen by the participants themselves.

Finally, and most importantly, I investigated how researchers have operationalized the mood or emotion in a particular musical passage. The analysis of the PUMS database is consistent with the idea that 46% of musical stimuli are chosen by experimenters or other experts, 37% are selected because they were used in previous peer-reviewed publications, and only 3% of musical stimuli are pilot tested. When studying music-induced sadness, 47% of the stimuli were participant chosen, 13% were pilot tested, and none of the stimuli were performed by professionals with the intent of inducing sadness in listeners. On the other hand, when studying how music can represent sadness to others, only 4% of the stimuli were selected by participants, 4% were pilot tested, and 11% were played by professionals asked to directly express sadness. In contrast to inducing sadness from music, studies inducing fear from music never utilized participant-selected works, instead relying mostly on pilot testing (39%) and expert/experimenter opinion (32%). These comparisons highlight how study design depends on the emotion(s) in question and the specific aims of the researcher.

In this chapter, I have highlighted some of the design concerns related to emotion-related stimuli. It is well known that music researchers vary widely in their theoretical perspectives on emotion, as discussed fully in Chapter 2. In some respects, however, the personal opinions of the researchers do not matter when choosing stimuli. By observing trends in musical stimuli over 90 years, we have seen how design methodologies have developed over time and how they vary by emotion category. The hope of this review is that researchers will continue to carefully choose their stimuli and report on the selection process, aiding the field of music and emotion for decades to come.

While the first few chapters examined how emotion-related music has been studied in general, the next four chapters focus on a single emotion: sadness. Chapters 2 through 4 presented the idea that researchers may have used emotion words inconsistently. An important caveat to this finding is that the inconsistent use of emotion words does not mean that listeners cannot perceive subtle shades of emotion. Rather, it may simply indicate that we need to be clearer about which emotions are being studied. Chapters 5 through 8 investigate whether work on sadness has conflated (at least) two emotional states: melancholy and grief. A series of studies is presented that explore whether listeners can perceive differences in melancholic and grieving music and whether they experience distinct patterns of emotion in response to these passages. Furthermore, I present a study suggesting that melancholic and grieving music contain different structural features, in line with work on vocal displays of melancholy and grief.

Before these studies can be discussed at length, however, it is important to note how I chose the musical stimuli for these tasks. The selection of melancholic and grieving musical stimuli is the focus of Chapter 5.


Chapter 5: Assembling Melancholic and Grieving Musical Passages

Research dating back to Darwin indicates that melancholy and grief may be separable emotions that have different motivations and physiological characteristics (Darwin, 1872). Current research related to human crying also suggests the existence of these two different, yet complementary, states (Vingerhoets & Cornelius, 2012). These emotions appear to have separate motivations and physiological characteristics (e.g., Andrews & Thomson, 2009; Frick, 1985; Kottler, 1996; Mazo, 1994; Nesse, 1991; Rosenblatt et al., 1976; Urban, 1988; Vingerhoets & Cornelius, 2012). Generally, melancholy is considered to be a negatively-valenced emotion associated with low physiological arousal, while grief is thought to be a negatively-valenced emotion associated with high physiological arousal.

Psychological research suggests that melancholy may arise when an individual experiences a failure to meet expectations, among other reasons (Ekman, 1992). Corresponding physiological symptoms are thought to include feelings of anergia, decreased heart rate, slow and shallow respiration, and reduced levels of epinephrine, norepinephrine, acetylcholine, and serotonin (Andrews & Thomson, 2009; Nesse, 1991). Sad or melancholic individuals also tend to have a slumped posture, relaxed facial expressions, feelings of lethargy, and reduced attentiveness. Melancholy tends to lead to reduced activity, slow movements, a weak voice, diminished interest, social withdrawal, and increased rumination (Nesse, 1991; Andrews & Thomson, 2009).

Persons in a state of grief are also thought to experience negative valence, but their bodies are in a state of high arousal, with corresponding physiological symptoms like crying, erratic breathing, and wailing (Rosenblatt et al., 1976; Vingerhoets & Cornelius, 2012). While in a state of grief, a person's heart rate and blood pressure may increase and breathing may become more erratic (Frick, 1985; Mazo, 1994; Urban, 1988). A person also experiences a rise in epinephrine, cortisol, and prolactin, as well as a constricted pharynx and a creaky voice (Frick, 1985; Mazo, 1994; Urban, 1988). Facial features associated with weeping have been called faces of "agony" (Ekman, 2003). According to Huron (in preparation), "grief vies with physical pain for the most negatively-valenced affective state." Experiences of loss (whether due to a significant death, loss of safety, loss of autonomy, or loss of identity) are thought to drive experiences of grief (Epstein, 2019).

A pertinent question is whether the hypothesized differences between melancholy and grief are also present in music. The aim of Chapter 5 is to describe the collection of melancholic and grieving musical passages using two different approaches. The first approach asks trained and untrained student participants to provide examples of sad/melancholic music and crying/grieving music. The second approach utilizes expert opinions to identify melancholic and grieving music according to the theoretical characteristics proposed by David Huron (2015). The result of these two approaches is a collection of melancholic and grieving musical passages that will be used in the studies presented in Chapters 6-9 and can be used by other researchers studying music and emotion.

Method 1: Untrained Participant-selected Passages

The first study aims to collect melancholic and grieving musical passages from participants who were unaware of the theoretical differences between melancholy and grief. This collection method was utilized to help minimize experimenter bias as well as to reduce potential bias from previous studies. In general, the re-use of stimuli in multiple experiments reduces the validity of the research insofar as a small repeated stimulus set reduces the representativeness of the stimuli for the phenomenon of interest (as discussed in Chapter 4). By gathering data from multiple musicians, it was hoped that a wide range of musical samples would be collected and that these samples would contain minimal bias.

Participants were asked to provide a list of musical works corresponding to two emotions: sadness (melancholy) and crying (grief). No details were given about how to define these emotions. Participants were allowed to choose works that elicited emotional responses or displayed emotional content. The instructions were intentionally left vague in order to allow the participants to define the emotional spaces for themselves. Namely, participants could use multiple ways of defining "sadness/melancholy" and "crying/grief" music. They could draw on, for example, autobiographical memories, learned associations, familiarity with movie soundtracks, music theory concepts, and so on. By collecting a broad swath of stimuli, I hoped to gather data that covers the entire space of "musical melancholy" and "musical grief."


Participants were given the following instructions for this study:

"We are interested in people's musical preferences. For example, we are interested in what music people associate with the following feelings: anger, fear, happiness, sadness, crying, dancing, boredom, awe, fear, tenderness. To help us out with this, you have been selected to explore three of these terms: [e.g., happiness, sadness, crying]. Take some time to choose musical works that you think represent these emotions (e.g., this music sounds sad or happy) or makes you feel those emotions (e.g., this music makes me cry or makes me happy). You should come up with 10 minutes of music for each condition. You should write the musical works below in the appropriate box."

Participants

Participants (n=56) were second-year aural training students in the School of Music at The Ohio State University. There were 22 females and 33 males (1 sex unreported). The average age was 19.6 years (range: 18-25). Participants had taken an average of 7.5 years of private lessons (range: 2-16 years).

Results

The complete list of crying (grief) and sad (melancholic) songs identified by participants is provided in Appendix B, Table B.1 (page 321). Participants produced a list of 97 grief/crying songs and 115 sad/melancholic songs. Eight songs or works were present in both lists: Barber's Adagio for Strings, Ticheli's An American Elegy, Sufjan Stevens' Death with Dignity, Kansas's Dust in the Wind, Coldplay's Fix You, Tchaikovsky's Symphony No. 6, Mvt. IV, Band of Horses' The Funeral, and Bruno Mars' When I Was Your Man.

Method 2: Participant-selected Passages Trained with Facial Expressions

The second study asked participants to gather a list of musical works corresponding to a priori defined operationalizations of musical emotions. Namely, the aim was to define the emotional space for the participants according to psychological research on melancholy and grief (e.g., Andrews & Thomson, 2009; Frick, 1985; Mazo, 1994; Nesse, 1991; Rosenblatt et al., 1976; Urban, 1988; Vingerhoets & Cornelius, 2012). While it was important for participants to understand how melancholy and grief differ, I did not want them to know how these emotions should theoretically correspond to sounds. In order to circumvent possible demand characteristics, the difference between melancholy and grief was explained through pictures of facial expressions.

Facial expressions are thought to convey information about the emotional state of an individual, but do not necessarily provide any information about vocal or non-verbal sounds. In the psychological literature, grief has been related to crying, wailing, and faces of agony; when expressing grief, people often have open mouths (Ekman, 2003; Vingerhoets & Cornelius, 2012). Melancholy, on the other hand, may not involve distinctive facial expressions and may sometimes be confused with expressions of relaxation or sleepiness; when expressing melancholy, people tend to have relaxed facial features, including a closed mouth (Andrews & Thomson, 2009; Nesse, 1991).

Participants were asked to focus explicitly on perceived (rather than experienced) musical emotion. In addition to several other mechanisms, much of the emotion induced by music appears to relate to autobiographical memory and arbitrarily-learned associations (e.g., Juslin, 2013). Consequently, having participants focus on perceived emotion, rather than experienced emotion, may reduce some of the variability in the data.

Participants were given the following instructions:

"We are interested in people's musical preferences. For example, we are interested in what music people associate with the following feelings: anger, fear, happiness, sadness, crying, dancing, boredom, awe, fear, tenderness. To help us out with this, you have been selected to explore four of these terms: [e.g., peaceful/contented, joyous/jubilant, melancholy/sad, grief/crying]. These emotions are represented in the pictures in front of you.

Take some time to identify musical works that you think represent these emotions (e.g., "this music sounds peaceful," NOT "this music makes me feel at peace."). When thinking about suitable music, you should aim to select passages that match the emotions portrayed in the pictures as closely as possible.

You should come up with 3 or 4 examples of music for each emotion. You should identify the musical works (or passages) in the appropriate box."

The picture stimuli were taken from the NimStim Set of Facial Expressions

(Tottenham et al., 2009). This set of pictures contains 16 facial expressions, each performed by 43 professional actors, for a total of 672 facial expressions. The facial expressions are naturally-posed photographs of actors. The actors were African-, Asian-,

European-, and Latin-American. The emotions portrayed are happy, sad, angry, fearful, surprised, disgusted, neutral, and calm. All emotions were taken in two conditions (open-

mouth and closed-mouth), with the exception of surprise. Three happiness poses were selected, corresponding to closed-mouth, open-mouth, and open-mouth/exuberant. In the study by Tottenham and colleagues, the set was given to 81 untrained participants, who judged the facial expressions of the photos. The set has been shown to have high validity, reliability, and intra-rater agreement.

The photos used in the current study were chosen to represent the emotions of interest. The “melancholy/sad” pictures were four pictures of “sad: closed-mouth” from the NimStim set. The “grief/crying” pictures were four pictures of “sad: open-mouth” from the NimStim set. Each of the categories included expressions from the same four actors (actors 3, 11, 24, and 40) in the NimStim set. There were two females and two males. One actor was Latina-American, one actor was European-American, and two actors were African-American.

The participants were given four pieces of paper: two corresponding to the emotion categories of interest in this study, and two corresponding to the emotions of peaceful/contented and joyous/jubilant (which were collected for a separate project). Each piece of paper reproduced images of the facial expressions, along with the title of the emotion represented in the pictures (i.e.,

“Melancholy/Sad”). The participants were also given a sheet of paper containing the printed instructions.

Participants

Twenty-six participants who had not participated in the first study completed this task. All of the participants were second-year undergraduate music students at The Ohio


State University who participated for course credit. The age of the participants ranged from 19-25 years (M = 19.73, SD = 1.43). Participants had a range of musical training from 1-15 years in either instrumental or vocal practice (M = 6.27, SD = 3.74).

Results

The participant-identified works included 20 grief/crying works, 46 melancholy/sad works, 46 peaceful/contented works, and 59 joyous/jubilant works (see

Appendix B, Table B.2 on page 327). Of these works, 91% of the stimuli might be labeled as popular music or film soundtracks, and only 9% might be considered Western Art Music. Most of the selections contained lyrics. Only three songs appeared in multiple lists: Happy (Pharrell Williams), Love Story (Taylor Swift), and Starships (Nicki Minaj) each appeared in both the joyous/jubilant and peaceful/contented lists.

Method 3: Expert-selected Passages

The final study utilized experts who were familiar with the theoretical, behavioral, facial, postural, and vocal characteristics of melancholy and grief. In addition, the experts were familiar with ideas of how the musical structure might differ between melancholic and grieving music (see Chapter 6 for details about these features). The idea that grief-related and melancholy-related passages might differ in musical structure comes from the comparison of musical passages with speech prosody (e.g., Juslin & Laukka, 2003).

Melancholic speech, for example, tends to be quieter than normal, slower, lower in overall pitch, more monotone, mumbled, and darker in timbre (Kraepelin, 1921). Melancholic music tends to mirror these prosodic characteristics: it is quieter, slower, lower in pitch, has smaller pitch movements, is legato, and uses darker timbres (Huron, 2008; Huron, Anderson, &

Shanahan, 2014; Schutz, Huron, Keeton, & Loewer, 2008; Turner & Huron, 2008; Post

& Huron, 2009; Yim, Huron, & Chordia, in preparation). Grief sounds, on the other hand, include features such as vocalized punctuated exhaling, energetic sustained tones (wails), ingressive vocalization, use of falsetto phonation, breaking voice, pharyngealization, creaky voice, and sniffling (Huron, 2015; Urban, 1988). Grief music, then, is conjectured to exhibit similar features: it may contain punctuated or forceful onsets, sustained tones, descending pitch motions, ingressive phonation, wide leaps, loud dynamics, and utilize abrupt pitch transitions (“breaking”) (Huron, 2015; Paul & Huron, 2010).

Based on this research, it could be expected that the specialists surveyed in this study had a narrow conceptualization of what music should count as melancholic or grief-like. I sorted through a wide range of songs in order to find exemplars of melancholic and grieving musical passages. From the list of participant-identified works collected in the first two studies, works without understandable lyrics were selected in order to avoid possible confounds arising from the semantic content. In addition to the two participant-generated sources of stimuli cited above, a final source of stimuli was the PUMS database, discussed in Chapters 3 and 4. From the PUMS corpus, passages were selected that had been explicitly categorized as sad (or related terms, such as depressed and sorrowful) by other researchers. A subset of the works from the PUMS was chosen, leading to an additional list of 884 candidate melancholic and grieving works. This list was further reduced to 650 works, as some entries lacked information identifying the specific passage or section of interest and could not be located.

Using these three sources—the two participant-generated lists and the PUMS—a total of 928 musical works that were related to musical sadness were selected (650 from the PUMS database, and 278 participant-selected works from the studies presented above). This list primarily consists of full musical works. From these full works, two types of stimuli were generated:

(1) Passages of approximately 15 seconds. These passages are sufficiently short

to effectively represent one musical emotion, termed “affectively

homogenous.” They are also an appropriate length for listeners to accurately

categorize the conveyed musical-emotion of the passages (Eerola &

Vuoskoski, 2013).

(2) Passages of approximately 1 minute. These passages are long enough to be

able to effectively induce emotions in listeners (Eerola & Vuoskoski, 2013).

From these 928 musical passages, I selected 62 passages of music-related melancholy or grief. These passages are listed in Appendix B, Table B.3 (page 331).

Specifically, passages were given labels of melancholy or grief if they contained the criteria identified by Huron (2015) as features of grief-like and melancholy-like music

(summarized in Table 5.1 on page 130). Passages were designated melancholic music samples if they contained the following features: (1) quiet, (2) slow, (3) low in pitch or minor mode, (4) small pitch movements, (5) legato, and (6) dark timbres. Passages were designated grieving musical samples if they exhibited the following features: (1) loud, (2)

fast, (3) high in pitch, (4) wide pitch movements or leaps, (5) staccato or punctuated exhaling, (6) harsh or nasal timbres, (7) descending (gliding) motions, and (8) sustained tones or wails (Huron, 2015; in preparation).
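To make the use of these criteria concrete, the sketch below shows one simple way such a feature checklist could be operationalized in Python. The feature names, the set-based representation, and the proportion-of-matches rule are my own illustrative assumptions; in the dissertation itself, the labels were assigned by expert judgment rather than by an algorithm.

    # Hypothetical checklist-based labeler inspired by Huron's (2015) criteria.
    # Feature names are illustrative stand-ins, not an established vocabulary.
    MELANCHOLY_FEATURES = {"quiet", "slow", "low_pitch_or_minor",
                           "small_pitch_movements", "legato", "dark_timbre"}
    GRIEF_FEATURES = {"loud", "fast", "high_pitch", "wide_leaps",
                      "staccato_or_punctuated", "harsh_or_nasal",
                      "descending_glides", "sustained_wails"}

    def label_passage(observed):
        """Label a passage by the proportion of each checklist it satisfies."""
        mel = len(observed & MELANCHOLY_FEATURES) / len(MELANCHOLY_FEATURES)
        gri = len(observed & GRIEF_FEATURES) / len(GRIEF_FEATURES)
        return "melancholy" if mel >= gri else "grief"

    print(label_passage({"quiet", "slow", "legato", "dark_timbre"}))  # melancholy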

Once the 62 examples of melancholic and grieving music were identified, two additional experts independently listened to each sample. These experts were also familiar with the psychological and musical operationalizations of melancholy and grief discussed above. These two judges were asked to classify each of the 62 passages as either grief-expressive or melancholy-expressive. The labeled emotion (melancholy or grief) of these passages and the correspondence among the three judges are presented in

Appendix B, Table B.3 (page 331). Intra-rater reliability was calculated by comparing within-subject responses to the short and long musical passages. Out of 87 comparisons,

75 ratings were consistent across the short and long passages, yielding an intra-rater reliability of 86.2%. All three raters agreed on 38 of the 62 selected passages (61.2%).
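For readers who wish to reproduce these two agreement figures, the arithmetic is straightforward; the sketch below uses short hypothetical label lists in place of the actual judge data:

    # Intra-rater reliability: same label for the 15-second and 1-minute
    # versions of a passage? (75/87 = 86.2% in the text)
    short_labels = ["grief", "melancholy", "grief"]        # hypothetical
    long_labels = ["grief", "melancholy", "melancholy"]    # hypothetical
    consistent = sum(s == l for s, l in zip(short_labels, long_labels))
    print(consistent / len(short_labels))

    # Unanimous agreement among the three judges (38/62 in the text)
    rater1 = ["grief", "melancholy", "grief"]              # hypothetical
    rater2 = ["grief", "melancholy", "melancholy"]
    rater3 = ["grief", "melancholy", "grief"]
    unanimous = sum(len({a, b, c}) == 1 for a, b, c in zip(rater1, rater2, rater3))
    print(unanimous / len(rater1))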


Melancholic Speech Features | Melancholic Musical Features | Grieving Speech Features | Grieving Musical Features
Quiet voice | Quiet dynamics | Energetic or sustained tones (wails) | Loud dynamics; sustained tones; gliding pitch motions/portamento; energetic/fast tempo
Slow speaking rate | Slow tempo | -- | --
Lower overall pitch | Low tessitura; minor mode | Falsetto phonation | High tessitura
More monotone prosody | Small pitch range; small melodic intervals | Breaking voice | Large pitch range; wide pitch intervals or leaps
Mumbled articulation | Legato articulation; greater use of sostenuto pedal | Vocalized punctuated exhaling | Repeated staccato/detaché
Dark timbre | Dark timbre instruments; stopped strings; use of mutes | Pharyngealization | Harsh or nasal timbres
Breathy voice | -- | Sniffling | --
-- | -- | Ingressive vocalization | --

Table 5.1. Hypothesized grief and melancholy acoustic and musical features according to David Huron (2015)

General Discussion

In the psychological field, it is recognized that what some people call sadness can be better explained using (at least) two separate terms: melancholy and grief (e.g.,

Darwin, 1872). The idea that nominally sad music may, in fact, be better explained by using these two terms has broad implications for the field of music psychology as a whole. As we saw in Chapter 4, of the 117 emotion terms listed in the PUMS database, sad was applied to 22% (2,107) of the stimuli. Sadness was also the most commonly reported emotion, appearing more often than happiness, anger, fear, and tenderness. In

Chapter 4, I questioned whether one word (such as sadness) was able to accurately represent drastically different works of music. The current chapter asked participants of various levels of expertise to identify two (potential) types of sad music: melancholy and grief. There were few works that were identified as both melancholic and grieving, suggesting that there may be differences between these types of music. If it is the case that melancholic music differs from grieving music, future researchers should consider using these two terms when describing stimuli or classifying music.

The next logical step in the examination of melancholic and grieving music is to inspect the structural features of these musical passages to see if they are consistent with

Huron’s theory. Chapter 6 presents a study that uses highly-trained listeners to identify different features in melancholic and grieving passages.


Chapter 6: Redefining Sad Music: Music’s Structure Suggests at Least Two Sad States

In Chapter 2, I described the ideas of so-called basic emotion theorists, who tend to believe that there are a few specific neural states that signify when a person is in a certain emotional state, such as sadness (e.g., Ekman, 1977). I also discussed that although current psychological theory tends to be dominated by psychological and social constructionists—who usually agree that there are many more emotions than those accounted for by the basic emotion and appraisal theorists—music research still focuses on only a few emotions that can be evoked or expressed through music (e.g., Juslin,

2013).

The analysis of the PUMS data, presented in Chapter 4, revealed that there have been over 100 emotions explicitly related to musical stimuli. Despite the apparent variety of emotions related to music, however, I found that over half of music-related emotion studies consisted of only three emotions: sadness, happiness, and anger. Furthermore, I found that only nine emotional terms were used to describe more than one percent of the musical stimuli: sad, happy, anger, relaxed, chills/pleasure, fear, chills, peace, and groove. The question that naturally arises from these findings is, “Can these nine emotion terms accurately describe all the affective states represented or elicited by music?”


Chapters 5-8 represent a first step to answer this question by focusing on only one of those nine emotions: sadness. From analyzing the PUMS (and other) data, it appears that the way researchers define sadness varies widely. Listeners and researchers appear to offer a wide range of descriptions when characterizing nominally “sad” music. These terms include crying despair, depressed, quiet sorrow, and gloomy-tired (e.g., Juslin &

Laukka, 2003). According to operationalizations made by psychologists, not all of these terms even denote emotions. Researchers distinguish three categories of affective responses: emotions, moods, and temperaments (personality traits) (e.g., Scherer & Zentner, 2001). Emotions are described as brief affective episodes that occur in response to a significant internal or external event; moods are described as longer affective states characterized by a change in subjective feelings of low intensity; and temperaments are described as a combination of dispositional and behavioral tendencies. According to this classic taxonomy, sad and desperate would be deemed emotions; gloomy, listless, and depressed would be moods; and morose would be deemed a temperament or personality trait.

Without specifying which of these three affective classes (emotions, moods, personality traits) are being examined in a particular experiment, it becomes impossible to compare results across studies.

One indication that researchers have (unknowingly) compared results across these three affective classes is that meta-analyses on musical sadness provide seemingly null results. For example, despite the fact that many studies have shown that participants can perceive and experience musical sadness, a review found that there is no association of nominally sad music with positive or negative valence (Eerola & Vuoskoski, 2011). I

hypothesize that the large variance in response to sad music could be a consequence of the failure to distinguish multiple states of sadness, such as melancholy and grief.

In the early 2000s, researchers investigated structural features of nominally sad music. For example, Juslin and Laukka (2003) conducted an important review that summarized characteristics of emotional vocal and musical patterns. As discussed above, in order to study features of sad voice and music, Juslin and Laukka combined passages that had been given labels such as crying despair, depressed-sad, depression, despair, gloomy-tired, grief, quiet sorrow, sad, sad-depressed, sadness, sadness-grief-crying, and sorrow, among others. In the decade that has followed, researchers have wondered if passages exhibiting crying despair and quiet sorrow can truly be compared (e.g., Huron,

2015; Peltola & Eerola, 2016). In line with the current research trend, the hypothesis of this chapter is that we may be able to better describe nominally sad music by using more than one emotional term.

As discussed in the previous chapter, psychological research has long suggested that the term sadness is unnecessarily vague (e.g., Darwin, 1872). A number of psychologists have suggested, for example, that melancholy and grief are two separable emotions with differing motivations and physiological characteristics (e.g., Andrews &

Thomson, 2009; Frick, 1985; Kottler, 1996; Mazo, 1994; Nesse, 1991; Rosenblatt et al.,

1976; Urban, 1988; Vingerhoets & Cornelius, 2012). David Huron (2015) has speculated that the difference between melancholy and grief may also be present in music, although this has not been tested empirically. Initial support of this idea, presented in Chapter 5, is the finding that listeners identify different musical examples in response to prompts of melancholy and grief.


Huron is not the only scholar to suggest that there are multiple types of music- related sadness (e.g., Eerola & Peltola, 2016; Eerola, Peltola, & Vuoskoski, 2015; Eerola,

Vuoskoski, Peltola, Putkinen, & Schäfer, 2017; Laukka, Eerola, Thingujam, Yamasaki, &

Beller, 2013; Peltola & Eerola, 2016; Quinto, Thompson, & Taylor, 2014; Taruffi &

Koelsch, 2014; van den Tol, 2016). Musical sadness has also been broken down into categories such as grief, melancholia, and sweet sorrow (Peltola & Eerola, 2016), grief-stricken sorrow, comforting sorrow, and sublime sorrow (Eerola & Peltola, 2016), as well as other combinations (Eerola, Peltola, & Vuoskoski, 2015). These researchers have suggested that some of these sad emotions are negatively valenced, while others are positively valenced. It is thought that breaking nominally sad music into different categories may help account for the wide variety of experiences people have when listening to music.

This chapter aims to test whether melancholic and grieving music exhibit different structural features. Melancholic and grieving music, Huron conjectures, may contain similar features to their vocal homologues. Namely, melancholic music may have quiet dynamics, slow tempo, low tessitura, minor mode, small pitch range, small melodic intervals, legato articulation, greater use of sostenuto pedal, dark timbre instruments, stopped strings, and use of mutes, while grieving music may have loud dynamics, sustained tones, gliding pitch motions, portamento passages, energetic or fast tempo, high tessitura, large pitch range, wide pitch intervals or leaps, repeated staccato or detaché passages, and harsh or nasal timbres. In the past decade, Huron has found that nominally sad music—what I am referring to as melancholic music—is consistent with some of the features mentioned above (Huron, 2008; Schutz, Huron, Keeton, & Loewer,


2008; Turner & Huron, 2008; Post & Huron, 2009; Yim, Huron, & Chordia, in preparation). In many of these cases, however, sad music was operationalized simply as music in the minor mode; this definition may be problematic because music in other modes (e.g., major, Dorian) can sometimes be considered sad and it is clear that not all minor-mode passages are sad. Additionally, in these early studies, Huron and colleagues did not differentiate between melancholic and grieving music. The current study aims to reduce these problems by utilizing passages deemed melancholic or grieving by musicians, first presented in Chapter 5.

In attempting to assess the importance of different proposed parameters, at least three different measurement methods might be distinguished: (1) score-based analysis, (2) audio-signal-based analysis, and (3) subjective listener estimates. For the case of loudness, a number of studies have made use of dynamic markings in musical scores. For example, the average loudness can be calculated by noting the dynamic marking for a particular passage. This can be quantized by utilizing an ordinal scale of common dynamic markings, with ppp being ranked as “1”, pp as “2”, through fff as “8.” A second approach might take advantage of audio-signal processing tools, such as the MIR

Toolbox in MATLAB (Lartillot & Toiviainen, 2007). For example, average RMS levels can be taken as an estimate of loudness. A third way we could measure each musical parameter might make use of subjective judgments of these features by listeners. For example, listeners could rate the loudness of a passage on a 1-7 unipolar scale.
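As an illustration of the first two measurement methods, consider the following Python sketch. The function names and the use of numpy are my own; the dissertation itself mentions the MATLAB MIR Toolbox for audio-based analysis. The third method, subjective listener ratings, requires no computation beyond averaging the 1-7 responses.

    import numpy as np

    # Method 1 (score-based): quantize dynamic markings on an ordinal scale
    DYNAMIC_RANK = {"ppp": 1, "pp": 2, "p": 3, "mp": 4,
                    "mf": 5, "f": 6, "ff": 7, "fff": 8}

    def mean_dynamic_level(markings):
        """Average the ordinal ranks of the dynamic markings in a passage."""
        return float(np.mean([DYNAMIC_RANK[m] for m in markings]))

    # Method 2 (audio-based): root-mean-square level of the audio signal
    def rms(samples):
        """RMS amplitude of a mono signal stored as a numpy array."""
        return float(np.sqrt(np.mean(np.square(samples))))

    print(mean_dynamic_level(["pp", "p", "mf"]))            # -> 3.33...
    print(rms(np.sin(np.linspace(0, 2 * np.pi, 44100))))    # -> ~0.707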

There are advantages and disadvantages to each of these three methods. Although score-based methods have proved useful in previous corpus studies (e.g., Warrenburg,

2016), this approach is impractical in this case because notated scores are typically

unavailable for film soundtracks. Although audio-signal processing tools are sophisticated, they are unable to address several factors thought to be important for grief and melancholy. For example, there currently exists no audio-based tool to determine whether an instrument is played with mutes. Although subjective, listener ratings may have greater validity in identifying factors related to emotion. For example, if a work starts quietly but builds in dynamic level until the end of a passage, participants might weight the ending of the work as more important to the emotional character than the beginning; rather than quantitatively rating the average dynamic level, a listener may simply say the passage is generally “loud.” Additionally, participants can choose to weight the melody and the accompaniment differently. Two disadvantages of this approach are that participants may be biased and that typically only one recording of a passage is used to gather the ratings. Despite these limitations, this method was chosen for the current study.

Methods

As mentioned above, the principal goal of this chapter was to examine the importance of several musical parameters in determining whether a passage represents melancholy or grief. The current study made use of the list of melancholic and grief-like musical passages identified in Chapter 5. Ten of these passages were selected for use in the present study, as summarized in Table 6.1 on page 138. All of these selected passages were around 15 seconds in duration, as previous work has shown that 15 seconds is enough time for listeners to accurately perceive emotions in music (Eerola & Vuoskoski,

2013). These relatively short passages were chosen instead of the longer (one minute)

passages because it was thought that the short passages might be more affectively homogenous. In other words, the short files are more likely to represent only a single emotion (melancholy or grief). The longer passages are more likely to portray multiple emotions, and therefore might confuse the participants or add unwanted, unexplained variance. The use of shorter passages also makes it more likely that participants will listen to each passage more than once, which helps to maintain an accurate memory of each stimulus. The passages were presented in a random order to all participants.

ID | Composer | Title | Passage | Type
1 | Barber | Adagio for Strings | 4:06-4:22 | Melancholy
2 | Faure | Après un rêve | 0:00-0:13 | Melancholy
3 | Mozart | Fantasia in D Minor | 0:50-1:07 | Melancholy
4 | Junkie XL | Redemption | 0:01-0:19 | Melancholy
5 | Albinoni | Adagio in G Minor | 0:24-0:36 | Melancholy
6 | Williams | Schindler’s List, Track 1 | 2:37-2:47 | Grief
7 | Marionelli | Jane Eyre, Track 1 | 2:08-2:23 | Grief
8 | Barber | Adagio for Strings | 6:20-6:35 | Grief
9 | Arnold and Price | Sherlock Series 2, Track 18 | 1:11-1:26 | Grief
10 | Tchaikovsky | Symphony 6, Mvt 4 | 5:54-6:11 | Grief

Table 6.1. List of stimuli for the study regarding structural features

In his writings on musical melancholy and grief, Huron hypothesizes that there are eleven features of musical melancholy and nine features of musical grief (see below).

The current study aimed to evaluate each of these twenty parameters in melancholic and grieving music. Participants—all of whom were highly proficient in aural skills—were asked to listen to the ten musical passages and rate each excerpt on the twenty features identified by Huron (2015; in preparation). Each of these features was presented on a 7-

point unipolar scale. Unipolar scales (e.g., loud dynamics, from 1: not at all loud to 7: very loud) were used instead of bipolar scales because the seemingly-related features

(such as quietness and loudness) were thought to be statistically independent (see Larsen

& McGraw, 2011 for such a discussion). Namely, it is possible for a passage to have both quiet and loud elements. Listeners were asked to rate each musical passage on the proposed characteristics of melancholic music and the proposed characteristics of grieving music. The twenty features are listed below:

(1) Quiet dynamics (1: not at all quiet, 7: very quiet)

(2) Loud dynamics (1: not at all loud, 7: very loud)

(3) Slow tempo (1: not at all slow, 7: very slow)

(4) Fast tempo (1: not at all fast, 7: very fast)

(5) Low register (1: not at all low register, 7: very low register)

(6) High register (1: not at all high register, 7: very high register)

(7) Narrow pitch range (1: not at all narrow pitch range,

7: very narrow pitch range)

(8) Wide pitch range (1: not at all wide pitch range, 7: very wide pitch range)

(9) Use of small melodic intervals (1: no small melodic intervals,

7: lots of small melodic intervals)

(10) Use of wide melodic intervals (or leaps) (1: no large melodic intervals/leaps,

7: lots of large melodic intervals/leaps)

(11) Legato articulation (1: not at all legato, 7: very legato)

(12) Staccato articulation (1: not at all staccato, 7: very staccato)


(13) Use of long (or sustained) tones (1: no sustained tones,

7: lots of sustained tones).

(14) Gliding pitch motions (1: no gliding pitch motions,

7: lots of gliding pitch motions)

(15) Use of minor mode (1: never in the minor mode,

7: always in the minor mode)

(16) Dark sound (or timbre) (1: not at all dark, 7: very dark)

(17) Harsh or nasal sound (or timbre) (1: not at all harsh or nasal,

7: very harsh or nasal)

(18) If piano was present: use of sustain pedal (1: no use of the sustain pedal,

7: lots of use of the sustain pedal)

(19) If string instruments were present: use of stopped strings

(1: no use of stopped strings, 7: lots of use of stopped strings)

(20) If brass or string instruments were present: use of mutes (1: no use of mutes,

7: lots of use of mutes).

Unlike speech, music commonly makes use of multiple simultaneous sound sources. This can lead to divergent textures, such as a contrast between a melody and an accompaniment. It is possible for a melody to feature the use of small intervals and a narrow pitch range, while the accompaniment features arpeggiated figures involving large intervals and a wide pitch range. Similarly, a melody may involve a rather slow pace, whereas the accompaniment features a rapid sequence of pitches, suggesting a faster pace. The combination of multiple features raises the question of which component

of the texture listeners should rate. Rather than instruct listeners to focus on a single component of the musical texture (such as the melody or the accompaniment), listeners were instructed to respond according to the overall emotional character conveyed by the passage. That is, listeners were free to weight one element or another, depending on the particular passage and feature of interest. Listeners were also encouraged to report any additional observations that were salient to them regarding the emotional character of the passage. These observations might include factors such as the musical structure or harmonic function.

Specifically, judges received the following instructions:

“In this task, you will listen to 10 musical excerpts, each about 15 seconds

in length. You are free to listen to each excerpt as many times as you wish. You

will be asked to rate each excerpt according to 18 features. Your response should

reflect the overall emotional character conveyed by the passage. In rating these

features, we encourage you to make your best guess if you are not sure of an

answer, but you are allowed to leave the question blank if you choose.

In addition, we ask that you identify any additional noticeable musical

features that you think contribute to the emotional character of the music (such

as rhythm, harmony, instrumentation, orchestration, structure, etc.).”

Pilot Study

A pilot study of 10 musicians (7 graduate students in music theory, 1 professor of music theory, 1 undergraduate in music, and 1 graduate student in psychology) was conducted in order to assess the difficulty of this task. In the pilot study, listeners rated

how confident they were in their ability to understand and aurally discern each of the twenty features (e.g., legato articulation, use of stopped strings) in several excerpts of music. The confidence ratings showed that listeners varied in their confidence in aurally identifying each feature. The means and standard deviations of the confidence ratings are shown in Table 6.2. Because of the low confidence scores on “use of stopped strings”

(average of 1.8 out of 7) and “use of mutes” (average of 2.9), these features were eliminated from the main study. The remaining eighteen features were included in the main study.

Dimension | Mean Confidence | SD
Fast tempo | 6.7 | 0.46
Slow tempo | 6.7 | 0.46
Loud dynamics | 6.6 | 0.66
Quiet dynamics | 6.6 | 0.66
Staccato (or détaché) articulation | 6.2 | 1.17
High register | 6.1 | 0.94
Legato articulation | 6.1 | 1.04
Low register | 6.1 | 0.94
Use of sustained tones | 5.9 | 1.22
Use of large melodic intervals (or leaps) | 5.5 | 1.20
Use of small melodic intervals | 5.5 | 1.02
Narrow pitch range | 4.6 | 1.91
Wide pitch range | 4.6 | 1.91
Gliding pitch motions (or portamento) | 4.3 | 1.62
Dark sound (or timbre) | 4.1 | 1.64
Minor mode | 4.1 | 1.37
Harsh/nasal sound (or timbre) | 4.0 | 1.48
Use of the sustain pedal | 3.3 | 1.48
Use of mutes | 2.9 | 1.79
Use of stopped strings | 1.8 | 0.87

Table 6.2. Confidence ratings on the 20 dimensions for the pilot study


Main Study

After the pilot study was completed, a separate study was conducted to assess the importance of the remaining eighteen musical parameters in determining whether a passage represents melancholy or grief. The chosen passages of melancholic and grieving music are summarized in Table 6.1 (page 138).

Twenty independent judges, unfamiliar with the results of the pilot study, were recruited for this task. Because of the importance of being able to discriminate features of the musical passages (e.g., use of small melodic intervals, use of minor mode), it was decided a priori to only include participants who were highly proficient in aural skills.

Accordingly, each of the participants was hand-selected by music theory or aural skills instructors because of his or her demonstrated superior aural-skills ability. The participants were all from the School of Music at The Ohio State University. The small number of participants may have resulted in reduced power to detect an effect (if one exists). However, including participants without proficient aural skills could have undermined the validity of the results; I judged accurate measurement of the musical parameters to be more important than a large sample.

In order to minimize possible confirmation bias, the judges were not told about the hypothesized differences between melancholy and grief. Since the participants were from The Ohio State University, it is appropriate to determine whether they were familiar with Huron’s theory. At the end of the survey, participants were asked, “Are you familiar

with David Huron’s theory of the difference between sad (or melancholy) music and grief-like music?” in order to control for possible confounding effects. A few of the judges were familiar with the theoretical distinction between melancholy and grief, but were unaware of the classification of the specific passages used in the study. A regression analysis showed that the results of the study were unaffected by whether or not participants were familiar with the theoretical distinction between melancholy and grief.

In other words, those familiar and unfamiliar with the theory made similar judgments regarding the features of interest in the study.

Due to a technical error, demographics were only collected for 12 of the 20 participants. Half of the 12 participants were female, and the average age was 26.5.

Participants reported, on average, 11 years of private instrument/voice instruction and reported an average of 3.46 years of aural skills or theory training.

Results

If the proposed grief and melancholy features help to discriminate the two types of passages, then one ought to expect that the ratings for the eighteen proposed features should be able to predict whether a passage is classified as predominantly expressing grief or melancholy. Logistic regression provides a suitable method of analysis, where the predicted variable is the grief or melancholy designation and the predictor variables are the eighteen features of interest. Opposite pairs (e.g., loud and quiet, slow and fast) proved to have correlations approaching -1, so only one term from each pair (e.g., quiet and slow, but not loud and fast) was input into the analysis to avoid over-fitting.

Additionally, there were only 59 observations out of a possible 220 observations for the

question regarding the sustain pedal, so this variable was excluded from the analysis as well.
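A minimal sketch of this logistic regression in Python follows, using statsmodels. The data-frame layout, the column names, and the synthetic ratings are hypothetical placeholders for the actual listener data. The text describes the 81.8% figure below only as “variance explained”; McFadden’s pseudo-R-squared is one common such measure for logistic models, though the text does not specify which was used.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    features = ["quiet", "slow", "low", "narrow", "small", "legato",
                "sustain", "gliding", "minor", "dark", "harsh"]

    # One row per (listener, passage); ratings on 1-7 unipolar scales.
    rng = np.random.default_rng(0)
    df = pd.DataFrame(rng.integers(1, 8, size=(200, len(features))),
                      columns=features, dtype=float)
    df["is_grief"] = rng.integers(0, 2, size=200)  # 0 = melancholy, 1 = grief

    model = sm.Logit(df["is_grief"], sm.add_constant(df[features])).fit(disp=0)
    print(model.summary())    # coefficient table analogous to Table 6.3
    print(model.prsquared)    # McFadden pseudo-R-squared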

As can be seen from Table 6.3, quieter dynamics, lower pitch, and narrow pitch intervals predict music previously categorized as melancholic, whereas sustained tones, gliding pitches, and harsh timbres predict music previously categorized as grieving (all ps < 0.05). The amount of variance explained by the model was 81.8%.

Feature | Estimate | Std. Error | Z | p
(Intercept) | 1.42519 | 3.21260 | 0.444 | 0.657314
Quiet | -0.64233 | 0.22304 | -2.880 | 0.003979 *
Slow | -0.07036 | 0.31028 | -0.227 | 0.820605
Low | -1.06539 | 0.32127 | -3.316 | 0.000912 *
Narrow | -0.85043 | 0.26980 | -3.152 | 0.001621 *
Small | 0.00866 | 0.22697 | 0.038 | 0.969565
Legato | -0.42763 | 0.31820 | -1.344 | 0.178973
Sustain | 0.61599 | 0.27736 | 2.221 | 0.026357 *
Gliding | 0.65340 | 0.23728 | 2.754 | 0.005893 *
Minor | 0.11442 | 0.23924 | 0.478 | 0.632445
Dark | 0.18353 | 0.29374 | 0.625 | 0.532090
Harsh | 1.10090 | 0.31220 | 3.526 | 0.000422 *

Table 6.3. Logistic regression results predicting the melancholy or grief designation from the retained musical feature ratings. Asterisks represent statistically significant results (p < 0.05).

A post-hoc cluster analysis was performed on all features, as well. Six clusters resulted from the analysis. The first cluster included the terms high pitch, low pitch

(negative) and dark timbre (negative). The second cluster included the terms narrow pitch movements (negative) and wide pitch movements, whereas the third cluster involved the terms slow tempo, fast tempo (negative), and sustained tones. The terms legato, staccato

(negative), and gliding pitches comprised Cluster 4, whereas the terms loud, quiet


(negative), and harsh timbre comprised Cluster 5. Finally, Cluster 6 contained the following features: large pitch movements, small pitch movements (negative), and minor mode (negative). Overall, these results correspond with the terms used in the logistic regression.
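The text does not state which clustering algorithm was used; one plausible reconstruction is an agglomerative clustering of the features based on the similarity of their rating profiles, as sketched below with hypothetical data. Clustering on the absolute correlation groups strongly positively and strongly negatively related features together, which matches the “(negative)” memberships reported above.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    # rows = listener-passage observations, columns = the 18 rated features
    rng = np.random.default_rng(0)
    ratings = rng.integers(1, 8, size=(200, 18)).astype(float)  # hypothetical

    corr = np.corrcoef(ratings, rowvar=False)            # 18 x 18 correlations
    dist = squareform(1.0 - np.abs(corr), checks=False)  # condensed distances
    clusters = fcluster(linkage(dist, method="average"), t=6, criterion="maxclust")
    print(clusters)   # one of six cluster labels per feature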

The additional features that listeners reported in response to the prompt to “identify any additional noticeable musical features that you think contribute to the emotional character of the music” are listed in Appendix C, Table C.1 (page 333).

Discussion

In light of the logistic regression analysis, it appears that quiet dynamics, low registers, narrow pitch ranges, sustained tones, gliding pitches, and harsh timbres are statistically significant predictors of whether a passage might be deemed expressive of melancholy or expressive of grief. Overall, the results are consistent with acoustical features that can be directly attributed to characteristic physiological hallmarks associated with melancholy and grief. The results are also consistent with the hypothesis that music previously labeled as melancholic and music previously labeled as grieving contain different musical parameters.

In this study, participants were not directly asked whether these passages were expressive of emotions, and if so, which emotions they represented. In other words, it is not possible to claim that these passages are perceived as portraying grief and melancholy, specifically. Chapter 7 addresses this limitation by asking participants to identify which emotions are represented in these musical passages.


Chapter 7: Melancholic and Grieving Music Express Different Affective States

When characterizing nominally sad music, listeners appear to offer a wide range of descriptions. The current chapter continues the investigation of whether the large variance in responses is a consequence of the failure to distinguish melancholy from grief. In the last chapter, I presented a study where the results were consistent with the idea that there might be differences in the musical structure in grieving and melancholic music. Specifically, the results of the study were consistent with the idea that music that is quieter, lower-in-pitch, and contains narrow pitch intervals predicts music that is melancholic in nature, whereas music that contains sustained tones, gliding pitches, and harsh timbres predicts music that has been previously categorized as grieving.

Despite the fact that melancholic and grieving passages seem to contain different structural features, there is no evidence that these passages actually express melancholy or grief. In other words, although participants are able to distinguish structural characteristics of these passages, both types of music may simply express sadness (or another emotion). The current study addresses this limitation by investigating whether listeners perceive different emotions in melancholic and grieving music. Two correlational studies with different methodological designs are presented.


Study 1

It is common for researchers to ask listeners to choose which emotion(s) a passage represents from a list of emotion words or to ask participants to rate the amount of emotion(s) expressed on a scale (e.g., 1-7 Likert scale or 1-100 continuous scale).

Often, in these studies, the researchers only examine a few emotional categories, like happy, sad, angry, and fearful (e.g., Juslin, 2000). Notice that in this list of emotions, sad

(usually referring to melancholic-like music) is the only emotion that exhibits low arousal

(Russell, 1980). Happy, angry, and fearful are all considered to be high-arousal emotions

(Russell, 1980). When a musical passage is judged to be sadder (or more melancholic) than other passages under this paradigm, it is not possible to determine whether listener responses truly reflect a robust ability to identify sad (melancholic) music, as opposed to merely low-arousal music. In the first study, musical passages exemplifying another low-arousal emotion, tenderness, were included in order to determine whether nominally melancholic music can be distinguished from another low-arousal emotion.

In terms of the Affect Grid (Russell et al., 1989), melancholy might be characterized as exhibiting negative valence and low arousal, whereas grief might be characterized as exhibiting negative valence and high arousal. In the first experiment, passages representing the other two theoretical quadrants—happiness (positive valence and high arousal) and tenderness (positive valence and low arousal)—were also included.

The rationale for this decision was to reduce the potential confound of arousal level in judgments of melancholic or grief-related expressions. That is, the goal was to determine whether the identification of melancholic music is more than simply a response to low-arousal music, and similarly whether the identification of grieving music is more than simply a response to high-arousal music.

Formally, the following hypothesis was tested:

H. Listeners can distinguish musical passages associated with grief from passages

associated with melancholy. Listeners can also distinguish grieving musical

passages from happy musical passages and can similarly distinguish melancholic

musical passages from tender musical passages.

Stimuli and Experimental Conditions

The musical stimuli consisted of four melancholic passages, four grieving passages, four tender passages, and four happy passages, each approximately 15 seconds in duration. The passages were taken from extant research (melancholy and grief from

Chapter 5; tender and happy from Eerola & Vuoskoski, 2011). No effort was made to verify the emotional labels from these databases. The complete list of stimuli can be found in Table 7.1 on page 150.

Participants were given the following instructions:

“For this block, you should think about what emotions the music is representing.

In other words, you should focus on the emotional characteristics that the music

itself displays. For example, a passage may sound like it is sad or sound like it is

angry. You should not take into account how you are feeling in order to answer

these questions. For example, it is possible that you might hear some

passages that sound happy, but the passage might actually make you feel angry.


For this block, we want you to describe the musical passage rather than what you

feel. So you’d say the music sounds happy.”

Emotion | Composer | Title | Passage
Melancholy | Barber | Adagio for Strings | 4:06-4:22 (Study 1 only)
Melancholy | Faure | Après un rêve | 0:00-0:13
Melancholy | Junkie XL | Redemption | 0:01-0:19 (Study 1 only)
Melancholy | Franck | Symphony in D Minor, Mvt 2 | 0:37-0:53 (Study 2 only)
Melancholy | Mozart | Fantasia in D Minor | 0:50-1:07 (Study 2 only)
Melancholy | Albinoni | Adagio in G Minor | 0:24-0:36
Grief | Marionelli | Jane Eyre, Track 1 | 2:08-2:23
Grief | Barber | Adagio for Strings | 6:20-6:35
Grief | Arnold and Price | Sherlock Series 2, Track 18 | 1:11-1:26
Grief | Williams | Schindler’s List, Track 1 | 2:37-2:47
Tender | 029 | The Portrait of a Lady, Track 3 | 0:23-0:45
Tender | 041 | Shine, Track 10 | 1:28-1:48
Tender | 042 | Pride & Prejudice, Track 1 | 0:10-0:26
Tender | 107 | The Godfather, Track 5 | 1:12-1:28
Happy | 027 | Oliver Twist, Track 8 | 1:40-2:04
Happy | 055 | Dances with Wolves, Track 10 | 0:28-0:46
Happy | 071 | The Untouchables, Track 6 | 1:50-2:05
Happy | 105 | Pride & Prejudice, Track 4 | 0:10-0:29

Table 7.1. List of musical stimuli used in Studies 1 and 2. Unmarked items were used in both studies; the “Study 1 only” and “Study 2 only” annotations replace the italic and bold marking of the original table.

The participants were not told about the hypothesized differences between melancholy, grief, happiness, and tenderness. The rationale for not explaining these differences was that it allows participants to use their own definitions of the terms and so minimizes potential bias in favor of the experimenter’s theoretical perspective.

Participants were asked, “Which of these four emotions does this musical passage best represent? If none of these fit, choose ‘none’.” That is, the experiment made use of a five-alternative forced-choice response paradigm: melancholy, grief, tender, happy, and none. Participants were also asked to evaluate each stimulus by answering the following

questions: “To what extent does this audio file sound positive?” (11-point Likert scale from 0 “not at all positive” to +10 “extremely positive”) and “To what extent does this audio file sound negative?” (11-point Likert scale from 0 “not at all negative” to +10

“extremely negative”). These two questions were directed at the perceived valence of the music. As previous research has shown that positivity and negativity are separable in emotion experiences, two unipolar scales were used instead of a single bipolar scale

(Larsen & McGraw, 2011). The next question aimed to identify the perceived emotional arousal of the music: “To what extent does this audio file sound energetic/arousing?”

(11-point Likert scale from 0 “this sound represents no energy/arousal” to +10 “this sound represents an extreme amount of energy/arousal”). Finally, participants were asked, “How familiar are you with this audio file?” on a three-point scale (not familiar, somewhat familiar, very familiar).

Participants

Forty-six participants from The Ohio State University School of Music and

Department of Psychology took part in the experiment. The mean age was 20.5 years (SD

= 1.68; range from 19 to 28 years). Of these participants, 15 (38.5%) were female.

Participants exhibited a wide range of musical training, with a range of 2-4 years of formal music theory training (mean = 2.64). Participants reported a range of 3-15 years of formal instrumental or vocal training (mean = 9.39). Due to a technical problem, age, gender, amount of musical training, and amount of music theory training were only recorded for 39 participants.


Validating Stimuli Valence and Arousal

The results of the study are consistent with the idea that the stimuli represented the four quadrants of the Affect Grid (Russell et al., 1989). The grief stimuli were rated as having negative valence and high arousal (positivity = 2.16, negativity = 7.15, arousal

= 4.96), the melancholy stimuli were rated as having negative valence and low arousal

(positivity = 2.86, negativity = 6.47, arousal = 3.13), the tender stimuli were rated as having positive valence and low arousal (positivity = 6.39, negativity = 2.44, arousal =

3.68), and the happy stimuli were rated as having positive valence and high arousal

(positivity = 8.16, negativity = 1.01, arousal = 7.66). Melancholy and grief stimuli differed in arousal, positivity, and negativity (all ps < .01) and happy and tender stimuli differed in arousal, positivity, and negativity (all ps < .01). Specifically, the results are consistent with the idea that melancholy and grief stimuli (and happy and tender stimuli) differ in perceived affective attributes, consistent with the results of Chapter 5.

Distinguishing Melancholy from Grief

Participants did not use the terms melancholy, grief, happy, and tender equally

(X2 = 72.97, df = 4, p < 0.01). Participants used the term grief 166 times (25.6%), melancholy 132 times (20.4%), happy 171 times (26.4%), and tender 179 times (27.6%).

In particular, participants used the term grief more frequently than the term melancholy

(X2 = 3.88, df = 1, p < 0.05). Responses to all the stimuli are graphically illustrated in

Figure 7.1.


Figure 7.1. Number of emotional responses for each stimulus in Study 1

When presented with the a priori deemed grief-like stimuli, participants chose the descriptor grief 102 times (65.0%), melancholy 45 times (28.7%), tender 10 times, happy

0 times, and none 19 times. A chi-square test suggests that these emotional terms were not used equally to describe grieving music (X2 = 190.19, df = 4, p < 0.01). Participants chose the term grief more than chance (chance level = 0.2; p < 0.01). Based on the hypothesis of the study, it is necessary to compare whether nominally-grieving music was equally expressive of grief and melancholy, as well as equally expressive of grief and happiness. The use of the binomial test is appropriate for these hypotheses. The binomial test compares whether the percentage of grief terms is equal to 50% of the sum of grief and melancholy terms (or grief and happy terms). The binomial test indicated that the amount of grief and melancholy terms differed in a statistically significant manner (p <


0.01). A separate binomial test indicated that the amount of grief and happy terms differed in a statistically significant manner (p < 0.01).
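These two tests are easy to verify from the counts reported above (a sketch assuming a recent version of SciPy, which provides binomtest):

    from scipy.stats import chisquare, binomtest

    # Descriptor counts for the grief-like stimuli, from the text:
    # grief = 102, melancholy = 45, tender = 10, happy = 0, none = 19
    counts = [102, 45, 10, 0, 19]
    print(chisquare(counts))   # statistic ~190.2 on df = 4, as reported

    # Binomial test: among the grief and melancholy responses, is grief
    # chosen at the 50% rate expected under the null?
    print(binomtest(102, n=102 + 45, p=0.5).pvalue)   # p < 0.01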

When presented with the a priori deemed melancholic stimuli, participants chose the descriptor melancholy 70 times (41.2%), grief 63 times (37.1%), tender 35 times

(20.6%), happy 2 times, and none 6 times. A chi-square test suggests that these emotional terms were not used equally to describe melancholic music (X2 = 111.9, df = 4, p < 0.01).

Participants chose the term melancholy more than chance (chance level = 0.2; p < 0.01).

A binomial test indicated that the amount of melancholy and grief terms did not differ in a statistically significant manner (p > 0.05). However, a separate binomial test indicated that the amount of melancholy and tender terms did differ in a statistically significant manner (p < 0.01). As will be discussed later, these results could be due to the fact that two of the a priori chosen melancholic stimuli were more grief-like in nature than melancholic in nature.

Distinguishing Tender from Happy

The results of the study are consistent with the idea that participants can distinguish tender music from happy music. When presented with tender stimuli, participants chose the descriptor tender 129 times, melancholy 16 times, grief 1 time, happy 22 times, and none 9 times. A chi-square test suggests that these emotional terms were not used equally to describe tender music (X2 = 316.31, df = 4, p < 0.01).

Participants chose the term tender more than chance (chance level = 0.2; p < 0.01), chose the term tender more often than the term happy (p < 0.01), and chose the term tender more often than the term melancholy (p < 0.01).


When presented with happy stimuli, participants chose the descriptor happy 148 times, grief 0 times, tender 5 times, melancholy 1 time, and none 22 times. A chi-square test suggests that these emotional terms were not used equally to describe happy music

(X2 = 460.76, df = 4, p < 0.01). Participants chose the term happy more than chance

(chance level = 0.2; p < 0.01), chose the term happy more often than the term tender (p <

0.01), and chose the term happy more often than the term grief (p < 0.01).

Discussion

The results of Study 1 are consistent with the main hypothesis: listeners can distinguish musical grief from musical melancholy and from musical happiness; listeners can also distinguish musical melancholy from musical tenderness, although melancholic stimuli were not labeled melancholy significantly more often than grief. One possible interpretation of the failure to discriminate melancholy from grief is that the stimuli chosen to represent melancholy could have portrayed other “sad” emotions like grief or longing. Post-hoc tests were carried out to investigate whether there were differences in perceived emotions among the four melancholic stimuli. These post-hoc tests suggest that two of the nominally-melancholic stimuli (Albinoni’s Adagio in G and Faure’s Après un rêve) were perceived as melancholic, while the other two nominally-melancholic stimuli (Barber’s Adagio for Strings and Junkie XL’s Redemption) were perceived as grief-like. The Albinoni stimulus, for example, was perceived as melancholic 50% of the time and as grieving 23% of the time. The Barber stimulus, on the other hand, was perceived as grieving 45% of the time and as melancholic only 23% of the time.


The perception of Adagio for Strings and Redemption could have also been affected by listener familiarity. Redemption was written for the soundtrack of the popular

2015 movie Mad Max: Fury Road, and Adagio for Strings has been widely used in popular films and television shows, such as Platoon, In My Room, Amélie, Seinfeld, ER, and The Simpsons. Of the 19 listeners who perceived grief in Adagio for Strings, for example, 13 rated the music as being familiar; by contrast, of the 10 listeners who chose melancholy, only 4 reported being familiar with the music. Accordingly, the differences in the melancholy/grief responses suggest a possible confound due to learned associations—namely, the affects portrayed in the associated movie scenes. That is, listeners familiar with Adagio for Strings and Redemption may have remembered the music from an anguished movie or television scene that consequently prompted them to answer grief in response to the music.

One of the purposes of Study 1 was to vet the proposed musical stimuli.

Consequently, these two problematic melancholic stimuli were replaced in advance of

Study 2. Study 2 aims to replicate the findings in Study 1—that listeners perceive the differences between melancholic and grieving music—using a different methodology.

Study 2

The principal goal of the second study was to examine which emotion(s) were represented in sixteen musical stimuli. The driving hypothesis was that people would perceive different kinds of emotion in grief-related and melancholy-related musical passages. Rather than simply use an alternative forced-choice task, Study 2 allowed

participants to select multiple emotions—from a list of fourteen emotions—that a passage represented.

Stimuli and Experimental Conditions

Most of the musical stimuli remained the same between Studies 1 and 2; namely four melancholic passages, four grieving passages, four happy passages, and four tender passages, each around 15 seconds in duration. As mentioned at the end of the discussion of Study 1, there were problems with two of the melancholic stimuli. For this second study, these two melancholic stimuli were replaced with other melancholic stimuli

(summarized in Table 7.1 on page 150). All participants listened to the selected musical passages via headphones.

Experimental Procedure

Similarly to Study 1, participants were instructed to pay attention to perceived emotion, rather than experienced emotion. Participants were then asked to “identify which emotion(s) the audio file represents by checking the appropriate emotion(s) from the following list. You may select as few or as many as you like.” The emotional terms presented to participants were identical to the ones used in Warrenburg and Way (2018).

The list of emotions was the following: angry, bored, disgusted, excited, fearful, grieved, happy, invigorated, melancholy, relaxed, surprised, tender, neutral/no emotion, and other emotion(s). Warrenburg and Way selected these emotional terms because previous research has shown that they are relevant to musical passages (e.g., Eerola & Vuoskoski,

2011; Huron, 2006; Huron, 2015) and because they are balanced across the four

quadrants of Russell’s Affect Grid (e.g., King & Meiselman, 2010). Participants were also allowed to list other emotions that they thought the passages represented.

Once participants finished selecting the emotional terms, the list of terms they chose appeared again on the screen. Participants were then given the following instructions: “Given this list of emotion terms you chose, which one(s), if any, strongly apply?” By asking participants to choose emotion terms that strongly apply, a three-point gradient (i.e., does not apply, applies, strongly applies) can be used to examine emotion combinations.

Finally, participants were asked to indicate their degree of familiarity with the musical excerpts using a three-point scale (not familiar, somewhat familiar, very familiar).

In order to control for potential confounds, participants were also asked to complete measures of musical preferences (Short Test of Music Preferences-Revised;

STOMP-R; Rentfrow, Goldberg, & Levitin, 2011), trait empathy (Interpersonal

Reactivity Index; IRI; Davis, 1983), personality (Big 5 Personality Questionnaire; John &

Srivastava, 1999), musical absorption (Absorption in Music Scale; AIMS; Sandstrom &

Russo, 2011), musical sophistication (Ollen Musical Sophistication Index; OMSI; Ollen,

2006), and basic demographics (age, preferred gender identity).

Demographics

Fifty-seven participants took part in the study: 35 males, 19 females, and 3 of unreported sex. The ages of participants ranged from 18 to 69 (mean = 22.59). On average, participants started music lessons at the age of 8 years (range from 1-18 years), had been taking music lessons for 7 years (range 0-16 years), and had been practicing daily for 13 years (range 1-23 years).

Results

Recall that participants were asked to rate whether they perceived a series of emotions in the music and then were asked which terms strongly applied to the music. A three-point gradient was calculated, where emotions that were not selected were coded as a 0, emotions that applied to the music were coded as a 1, and emotions that strongly applied to the music were coded as a 2. The corresponding distribution of emotional terms was subjected to a principal components analysis with a varimax rotation (see Appendix D). The principal components analysis conducted on these data revealed two components, which roughly correspond to arousal and valence. The total variance explained by these two components was only 29.5%, suggesting that many other factors contribute to the perceived emotion of a musical excerpt.
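A sketch of this analysis in Python follows. scikit-learn’s PCA does not ship a varimax rotation, so a small textbook implementation is included; the 0/1/2-coded rating matrix is simulated, and all names and data are hypothetical.

    import numpy as np
    from sklearn.decomposition import PCA

    def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
        """Classic varimax rotation of a loading matrix (variables x factors)."""
        p, k = loadings.shape
        R = np.eye(k)
        d = 0.0
        for _ in range(max_iter):
            L = loadings @ R
            u, s, vt = np.linalg.svd(loadings.T @ (
                L ** 3 - (gamma / p) * L @ np.diag((L ** 2).sum(axis=0))))
            R = u @ vt
            d_new = s.sum()
            if d_new < d * (1 + tol):
                break
            d = d_new
        return loadings @ R

    # trials x 14 emotion terms, coded 0 (absent), 1 (applies), 2 (strongly)
    rng = np.random.default_rng(0)
    ratings = rng.integers(0, 3, size=(900, 14)).astype(float)

    pca = PCA(n_components=2).fit(ratings)
    print(pca.explained_variance_ratio_.sum())   # ~0.295 in the text
    loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
    rotated = varimax(loadings)   # rotated loadings for interpretation
    print(rotated.round(2))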

The distribution of happy, tender, melancholy, grief, and other terms was subjected to a chi-square test; these terms were not used equally across the different stimulus types (melancholy, grief, happy, tender) (X2 = 1177.2, df = 12, p < 0.0001).

Fisher’s exact test is a more stringent test that can be used to compare the use of two terms across distributions. For the melancholy and grief stimuli, the use of the words melancholy and grief was compared using Fisher’s exact test. The test showed that the two terms were used differently across melancholy and grief stimuli (odds ratio = 0.355, p < 0.01). For melancholic stimuli, the emotion category melancholy was chosen 65% of the time, whereas grief was chosen 35% of the time. For grieving stimuli, grief was chosen 61% of the time and melancholy was chosen 39% of the time.
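The 2 x 2 comparison described above can be sketched as follows. The counts below are illustrative, derived from the reported percentage splits rather than the raw selection data, so the resulting odds ratio will not match the reported 0.355 exactly; note also that whether the odds ratio falls above or below 1 depends simply on the orientation of the table's rows and columns.

```python
from scipy.stats import fisher_exact

# Rows: stimulus type (melancholy, grief).
# Columns: term selected (melancholy, grief).
# Illustrative counts based on the reported 65/35 and 39/61 splits.
table = [[65, 35],
         [39, 61]]

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.3f}, p = {p_value:.3g}")
```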

Similarly, for the tender and happy stimuli, the use of the words tender and happy was compared using Fisher’s exact test. The test showed that the two terms were used differently across tender and happy stimuli (odds ratio = 0.029, p < 0.01). For tender stimuli, the term tender was used 56% of the time, whereas the term happy was used 44% of the time. For happy stimuli, the term happy was used 96% of the time, whereas the term tender was used 4% of the time. Future research should investigate why happiness seems to be the most easily recognized of these four emotion categories.

Therefore, the results of Study 1 were replicated in Study 2: people perceive different emotions in melancholic and grieving music (and in tender and happy music).

Further exploratory analysis revealed how all terms (angry, bored, disgusted, excited, fearful, grieved, happy, invigorated, melancholy, neutral, relaxed, surprised, tender) were used across the four stimuli types (melancholy, grief, tender, happy) (see

Figure 7.2 on page 162). Melancholic music showed the following trends: for the a priori selected melancholic stimuli, the term melancholy was used to describe the music most often (191 times), followed by grieved (104 times), tender (96 times), and relaxed (74 times). The other terms that were selected by participants included the following terms: apprehensive, /curious (4 times), cautious, empathy, hesitant, hopeless, loneliness, longing, malicious, mischievous, mopey, mourning, mysterious, nostalgic (3), pensive, questioning, sadness, scared, self-reflective, serious, tense (2), thoughtful, uncanny, and wonder.


Grieving music was described best by the terms grieved (181), fearful (134), and melancholy (118), followed by angry (53), invigorated (49), and tender (45). The other terms selected by participants included the following: aching, , , anxious

(2), buzz-kill, cathartic, coldness, confused, distraught, dramatic, /hunger, intense, relieved, relieved yet pained, sad, stressed, suspenseful, tense/tension (13), torn, tragic, transcendent (2), uncertain, uplifted, wanting/pained.

Tender music was described best by the terms relaxed (197), tender (191), and happy (148), followed by melancholy (56). The other terms selected by participants included the following: calm (2), compassionate (2), content, dreamy, hopeful (2), infatuated (2), love/ (2), nostalgic (3), peaceful (3), reflective (2), relieved, romantic, satisfaction, /nostalgic, sensual, and whimsical.

Finally, happy music was best described by the terms happy (216), excited (210), and invigorated (200). The other terms listed by participants included the following: accomplished, adventurous, amused, bold, , carefree, curiosity (2), dancing/dancelike/dance (3), determined, driven, (2), heroic (4), innocence, inspired

(2), jovial, joyous, mischievous (2), neutral, nostalgic, playful, power/powerful (3), proud, reaching, regal, resolute, satisfaction, stoic, strong, triumphant (2), and wonder.


Figure 7.2. Distribution of terms across the four stimuli types in Study 2

Discussion

The results of Study 2 replicated and extended the findings of Study 1: participants were able to distinguish music-related melancholy from grief and were able to distinguish music-related happiness from tenderness. Moreover, once the two problematic melancholic stimuli from Study 1 were removed, listeners were able to distinguish melancholy from grief in all examined stimuli (as shown in the PCA, see Appendix D).

The fact that tenderness and melancholy are differentiated by listeners is a somewhat surprising finding, given that they have been thought to contain similar acoustic characteristics (e.g., Juslin & Laukka, 2003).

One of the intentions of Study 2 was to investigate the range of emotional terms perceived in melancholic and grieving music. In all four stimulus types (melancholy, grief, happy, and tender), listeners reported perceiving multiple emotions. An interesting observation is that the two positively-valenced stimuli sets (tender and happy) perhaps resulted in more perceived emotions than did the negatively-valenced stimuli. For example, there is an almost equal number of relaxation and tenderness ratings for tender stimuli and an almost equal number of happy, excited, and invigorated ratings for happy stimuli. This trend is not as evident in the negatively-valenced stimuli. One possible interpretation of these results could be that positively-valenced items are less differentiated than negatively-valenced items. If it is true that positively-valenced items are more closely related than are negatively-valenced items, the data fall in line with the density hypothesis (Unkelbach et al., 2008), which claims that negative items are more different from each other than are positive items.


General Discussion

The results of the two studies are consistent with the idea that listeners perceive different emotions in melancholic and grieving passages. That is, the results are consistent with the distinction between two forms of sadness—melancholy and grief—as hypothesized by scholars like Eerola, Huron, Peltola, Vuoskoski, and myself (Eerola &

Peltola, 2016; Eerola, Peltola, & Vuoskoski, 2015; Eerola, Vuoskoski, Peltola, Putkinen,

& Schäfer, 2017; Huron, 2015; in preparation; Laukka, Eerola, Thingujam, Yamasaki, &

Beller, 2013; Peltola & Eerola, 2016; Quinto, Thompson, & Taylor, 2014; Taruffi &

Koelsch, 2014; van den Tol, 2016; also Chapters 5 and 6 of this document). Participants were additionally able to distinguish grief from another high-arousal affect, namely happiness. Similarly, listeners were able to distinguish melancholy from another low-arousal affect, namely tenderness. One interpretation of these findings is that the musical features used by listeners to classify melancholy (or grief) are not merely limited to features of low (or high) arousal.

The failure to recognize disparate sad emotions (e.g., melancholy and grief) in the music and emotion literature becomes problematic when researchers compare results across studies. Let us say that a scholar wants to conduct a meta-analysis regarding nominally sad music. If the results of some studies are consistent with the idea that sad music exhibits low arousal, but the results of other studies are consistent with the idea that sad music exhibits high arousal, the effect size in the meta-analysis will be somewhere in the middle. Namely, the results of the meta-analysis could suggest that sad music is not particularly related to either high or low arousal. By combining different types of sad music, then, we lose the ability to explain important underlying trends.


The practice of conflating multiple kinds of sad music may also result in a loss of information across researchers, performers, and listeners, as these people may define sadness in different ways. For example, in the years before 2000, around 15% of emotional music stimuli were created by performers who were explicitly asked to express certain emotions (e.g., Behrens & Green, 1993; Gabrielsson & Juslin, 1996; Gabrielsson

& Lindström, 1995; Jansens, Bloothooft, & de Krom, 1997; Juslin, 1997; Meyer, Palmer,

& Mazo, 1998; Senju & Ohgushi, 1987; also see Chapters 3 and 4). Performers in these studies were typically given few instructions and were told that they could vary everything about the music except for the pitches. In this paradigm, listeners are then asked to choose which emotion(s) the musical performances expressed, often from a list of emotion terms. Sometimes the lists of affective terms are limited to a few emotional terms, but at other times, the lists can be relatively long. As one example of a longer list,

Juslin (1997) asked listeners to identify whether performances—intended to express happiness, sadness, fear, anger, and love/tenderness—effectively expressed the following emotions: happiness, elation, fear, terror, anger, rage, , love, melancholy, sorrow, , disgust, shame, curiosity/interest, desire, and jealousy.

A common conclusion from this type of research is that listeners tend to be successful in discerning the general emotional “category” in the performances, but that they are not always able to agree on the “intensity” of the expressed emotion. For example, Juslin (1997) found that listeners disagreed on the amount of melancholy and sorrow that were expressed in the sad performances. One of the conclusions often drawn from this research is that “basic emotion” categories (e.g., sadness, fear, anger, happiness) can be reliably communicated in music, but that more subtle shades of these

categories (e.g., melancholy, sorrow) cannot be accurately communicated in musical performances. Although this explanation may be correct, I see another possible explanation for these findings. It could be that the performers, researchers, and listeners disagree about the definition (or musical features) of music-related sadness. For example, maybe some performers interpreted sad music as containing melancholic features, while other performers interpreted sadness as including features of grief or sorrow. If some performers (or listeners) expect different kinds of sad music, this could contribute to the instability in the ratings of sorrow and melancholy present in the music.

In other words, if there was agreement among performers, researchers, and listeners about the definitions of emotions like sadness, melancholy, and grief/sorrow, maybe the listeners in these early studies could have accurately discerned music expressing sorrow and melancholy. Indeed, in the current study, the results are consistent with the idea that listeners can, in fact, accurately perceive melancholy and grief in musical passages.

One of the possible interpretations of my work is the idea that some music and emotion studies are semantically underdetermined: the words researchers are using to classify emotional concepts in music do not give a listener enough information to understand what is happening in the music. Future work should aim to utilize more precise emotional terms. By using more specific terminology, it is possible that we will learn more about music’s structure and its associated emotions.

Common life experience supports the idea that a person can perceive (and experience) multiple states of sadness. The aim of the current study was to begin to refine the umbrella concept of sadness in music research by showing how sad musical passages can be explained by at least two separable states: melancholy and grief. One implication

of this finding is that researchers should aim to make use of more specific terminology when labeling emotions in the music domain. In this chapter, we found that listeners perceive different emotions in melancholic and grieving music. Chapter 8 presents two follow-up studies to the tasks presented in this chapter. Rather than investigate which emotions are represented by the music, however, Chapter 8 explores which emotions listeners experience in response to melancholic and grieving music.


Chapter 8: People Experience Different Emotions from Melancholic and Grieving Music

Sadness appears to have a special attraction for music researchers, as discussed in the preceding chapters. Music-related sadness accounts for roughly 23% of all of the passages used in studies of music and emotion—more than any other emotion, including happiness and fear (Chapters 3 and 4). Part of the appeal of sadness as a research topic is the puzzle of why listeners are drawn to a seemingly negative emotion, a paradox that has attracted philosophical commentary and speculation from ancient to modern times (e.g., Levinson,

2014).

One phenomenon of interest to researchers is why people’s reactions to sad music vary widely. It is known that people experience both positive and negative emotions from sad music (e.g., Eerola & Peltola, 2016). Listeners also tend to confuse expressions of

(negative) musical sadness and (positive) musical tenderness (e.g., Juslin, 2013). From these seemingly paradoxical observations, the following question arises: what makes sad musical passages unique?

Chapters 5-7 suggest that musical sadness may, in fact, be a synthesis of more than one emotional state. Namely, rather than having a single broad category of sad music, it might be possible to identify multiple sad affects. Although there appears to be a difference in the musical structure (Chapter 6) and perceived emotions (Chapter 7) in

nominally melancholic and grieving music, it is not clear if these two types of music result in different emotional experiences. Recall that Tuomas Eerola, Henna-Riikka

Peltola, and Jonna Vuoskoski conducted a large, systematic survey asking about people’s attitudes towards sad music. Through thematic content analysis of the survey responses, these researchers found that three different kinds of emotions are induced by listening to sad music: grief, melancholia, and sweet sorrow (Peltola & Eerola, 2016). A factor analysis conducted on this data found three slightly different emotions evoked from nominally sad music: grief-stricken sorrow, comforting sorrow, and sublime sorrow

(Eerola & Peltola, 2016). Of these three musically-induced emotions, only grief-stricken sorrow represents a negatively-valenced experience. It is clear, therefore, that nominally sad music can evoke both negative and positive emotions.

What is not clear, however, is whether these disparate emotional reactions to sad music are only due to dispositional or situational characteristics, or whether the variable experiences to sad music can also be explained by the conflation of more than one emotional state, like melancholy and grief. In other words, it is not clear whether melancholic (or grieving) music induces similar emotional experiences across people and situations. Certainly, personality characteristics and environmental situations play an important role in felt emotions from music listening. By asking the same person to describe their experiences from listening to melancholic and grieving music, however, we may be able to limit some of this variance. The main question asked in this chapter is whether a person experiences different patterns of emotions in response to melancholic and grieving music. This chapter tests this conjecture directly using two separate methodologies.


Study 1

There is more than one way that music can induce emotion in listeners. In particular, the BRECVEMA model of Juslin (2013) posits that there are several mechanisms for how music is able to evoke emotional responses: brain stem reflexes, rhythmic entrainment, evaluative conditioning, emotional contagion, visual imagery, episodic memory, musical expectancy, and aesthetic judgment. The combination of these various mechanisms suggests that music may be able to evoke multiple emotions in parallel, making it possible to experience mixed-valenced emotions such as nostalgia, wonder, and pleasurable sadness. Furthermore, the experience of listening to a musical passage likely differs across people and across occasions. Dispositional factors such as gender identity and age could affect a person’s response to music at one time or another.

The function of listening to music also varies across listeners and situations. Some of the reasons why people have listened to nominally sad music, for example, are to reminisce, to get comfort, to experience new emotions, and to share emotions with others

(Eerola & Peltola, 2016). People may also listen to music as a means of escape, to experience transcendence, to produce pleasure, to regulate emotions, or to provide diversion

(Saarikallio & Erkkilä, 2007; Schäfer et al., 2013). Given these complicating factors, the aim of the study was exploratory in nature. The principal goal of the first study was simply to explore which emotions are evoked or induced in listeners from previously-defined grieving and melancholic musical passages. Accordingly, the main hypothesis is as follows:

H. Listeners will experience different patterns of emotions in response to

melancholic and grieving music.


In addition to exploring emotions induced by these two negatively-valenced music types, Study 1 also examined emotions evoked by two kinds of positively-valenced musical passages: happiness and tenderness. The rationale for including happy and tender music was to determine if there were differences between emotions induced by melancholic and tender music, which both exhibit low arousal, and between emotions induced by grieving and happy music, which both exhibit high arousal. The methodology of comparing music-related emotions that are characterized by each of the four quadrants of the Affect Grid (Russell et al., 1989) has been used in similar scholarly work (as in

Chapter 7).

Stimuli

A review conducted by Eerola and Vuoskoski (2013) suggests that passages around 30-60 seconds in duration are sufficient to induce emotion in listeners. As such, it was decided to make use of one-minute passages of experimenter-selected music— specifically sixteen musical excerpts. The selected stimuli have been previously validated as representing their respective emotions to listeners of a similar age and musical background to those in the current experiment. The melancholy and grief stimuli came from Chapter 5 and the tender and happy passages were taken from Eerola & Vuoskoski

(2011). The complete list of stimuli used in Study 1 can be found in Table 8.1.


Emotion    | Composer         | Title                           | Passage
Melancholy | Faure            | Après un rêve                   | 0:00-0:52
Melancholy | Franck           | Symphony in D Minor, Mvt 2      | 0:37-1:14 looped
Melancholy | Mozart           | Fantasia in D Minor             | 0:50-1:29 looped
Melancholy | Albinoni         | Adagio in G Minor               | 0:00-1:00
Grief      | Marionelli       | Jane Eyre, Track 1              | 1:28-2:31
Grief      | Barber           | Adagio for Strings              | 5:51-6:59
Grief      | Arnold and Price | Sherlock Series 2, Track 18     | 0:42-1:32
Grief      | Williams         | Schindler’s List, Track 1       | 2:37-3:07 looped
Tender     | Kilar            | The Portrait of a Lady, Track 3 | 0:23-1:08
Tender     | Hirschfelder     | Shine, Track 10                 | 1:01-2:00
Tender     | Marionelli       | Pride & Prejudice, Track 1      | 0:10-0:49 looped
Tender     | Coppola          | The Godfather III, Track 5      | 1:13-2:19
Happy      | Portman          | Oliver Twist, Track 8           | 1:32-2:09 looped
Happy      | Barry            | Dances with Wolves, Track 10    | 0:00-0:46
Happy      | Morricone        | The Untouchables, Track 6       | 1:26-2:06 looped
Happy      | Marionelli       | Pride & Prejudice, Track 4      | 0:10-1:06

Table 8.1. List of musical stimuli used in Study 1. Items in bold were also used in Study 2.

Instructions

The participants were not told about the hypothesized differences between melancholy, grief, happiness, and tenderness, in order to minimize potential bias in favor of the experimenter’s theoretical perspective. The difference between induced and perceived emotion was explained to participants, however. The instructions regarding induced emotion tasks were the following:

“For this block, you should only consider how you feel when you listen to the

music. A passage may make you feel happy or make you feel afraid. The music

may not make you feel any emotion—this is okay. If this happens, you should

answer “no emotion” to the question. For these questions, you should not think

about the characteristics of the music itself, but rather how you are feeling in

response to the music.”


Experimental Procedure

Participants were reminded about the difference between perceived and felt emotion and then were given instructions regarding the specific tasks. After listening to each stimulus, participants were asked two questions: “To what extent does this audio file make you feel a positive emotional reaction?” using an 11-point Likert scale (from 0 “I feel little or no positive emotion” to 10 “I feel an extreme amount of positive emotion”) and “To what extent does this audio file make you feel a negative emotional reaction?” using an 11-point Likert scale (from 0 “I feel little or no negative emotion” to 10 “I feel an extreme amount of negative emotion”).

Participants were also asked to identify which emotion(s) they experienced by checking the appropriate emotion(s) from a list of twenty-four emotions. The emotion choices were taken from extant research that has examined induced emotions

(Warrenburg & Way, 2018). The emotional terms were the following: angry, bored, compassionate, disgusted, excited, fearful, grieved, happy, invigorated, joyful, melancholy, nostalgic, peaceful, power, relaxed, soft-hearted, surprised, sympathetic, tender, transcendent, tension, wonder, neutral/no emotion, and other emotion(s). These emotions were selected in the Warrenburg and Way study because the terms are theoretically important to research on emotions possibly induced in music. For example, the list contains the emotions identified in the GEMS (Zentner, Grandjean, & Scherer,

2008).

After participants selected which emotions they experienced from the list of terms, they were presented with the list of the terms they selected. They were then asked to respond to the question, “Given this list of emotion terms you chose, which one(s), if

any, strongly apply?” This question was asked so that a three-point gradient of emotional intensity (i.e., does not apply, applies, strongly applies) could be created. Participants were also allowed to identify any other emotion(s) they experienced when listening to the musical excerpts.

Listeners were also given the option to write down experiences they felt or thoughts they had when listening to the music. The free response format was chosen so the answers could be analyzed using thematic content analysis to uncover common themes (see Study 2). The specific prompt was the following: “What kinds of thoughts went through your head as you listened to this music? It could be images, metaphors, moods, or anything else you can think of.”

Participants were also asked to indicate their degree of familiarity with the musical passages on a three-point scale (not familiar, somewhat familiar, very familiar).

Finally, participants were asked to complete measures of several individual characteristics. These questionnaires included measures of musical preferences (Short

Test of Music Preferences-Revised; STOMP-R; Rentfrow, Goldberg, & Levitin, 2011), trait empathy (Interpersonal Reactivity Index; IRI; Davis, 1983), personality (Big 5

Personality Questionnaire; John & Srivastava, 1999), musical absorption (Absorption in

Music Scale; AIMS; Sandstrom & Russo, 2011), musical sophistication (Ollen Musical Sophistication Index; OMSI; Ollen, 2006), and basic demographics (age, preferred gender identity).


Demographics

Fifty-seven participants (35 males, 19 females, 3 unreported gender) participated in Study 1. The ages of the participants ranged from 18 to 69 (mean = 22.59). Participants started music lessons, on average, at the age of 8 years (range from 1-18 years), had been taking music lessons for an average of 7 years (range 0-16 years), and had been involved in daily practice for an average of 13 years (range 1-23 years).

Results

As discussed above, a three-point gradient was calculated, where emotions that were not experienced by participants were coded as a 0, emotions that applied to participant experiences were coded as a 1, and emotions that strongly applied to participant experiences were coded as a 2. A principal components analysis was conducted on the data, yielding four components (see Appendix E). The first two components roughly corresponded to valence (12.3% of the variance) and arousal (11.1% of the variance), while the third component was influenced by the terms neutral and bored (7.9% of the variance), and the fourth component was influenced by the terms transcendence and wonder (5.7% of the variance). In total, only 37.0% of the variance was explained by these four components, suggesting that other factors likely contribute to the kinds of emotion induced through music listening.

The corresponding distribution of happy, tender, melancholy, grief, and other terms was subjected to a chi-square test. The chi-square test on the distributions of these terms (melancholy, grief, happy, tender, other) shows that these emotions were not

experienced equally across the four stimulus types (melancholy, grief, happy, tender) (X2

= 919.95, df = 12, p < 0.0001).

For the melancholic stimuli, the mean of negative emotions experienced was 4.5

(out of 10), whereas the mean of positive emotions experienced was 2.9. For the grieving stimuli, the mean of negative emotions experienced was 4.9, whereas the mean of positive emotions experienced was 2.4. For the melancholic and grieving stimuli, the use of the words melancholy and grief was compared using Fisher’s exact test. This test showed that the two terms were used differently to describe experiences arising from melancholic and grieving stimuli (odds ratio = 0.512, p < 0.01). For melancholic music, the emotion category melancholy was experienced 60% of the time, whereas grief was felt 40% of the time. For the grieving stimuli, grief was felt 57% of the time and melancholy was experienced 43% of the time.

The mean of positive emotions induced by tender stimuli was 5.3, whereas the mean of negative emotions experienced was 2.1. For happy music, the mean of positive emotions experienced was 5.8, whereas the mean of negative emotions felt was 0.8. For the tender and happy stimuli, the use of the words tender and happy was compared using

Fisher’s exact test. The test showed that the two emotions were experienced differently across tender and happy stimuli (odds ratio = 0.021, p < 0.01). For tender music, tenderness was felt 68% of the time, whereas happiness was felt 32% of the time. For happy music, happiness was experienced 96% of the time, whereas tenderness was experienced 4% of the time.

Further exploratory analysis revealed how all terms (angry, bored, compassionate, disgusted, excited, fearful, grieved, happy, invigorated, joyful,

melancholy, neutral, nostalgic, peaceful, power, relaxed, softhearted, surprised, sympathetic, tender, transcendent, and wonder) were used across the four stimuli types

(melancholy, grief, tender, happy) (see Figure 8.1 on page 180). Melancholic music resulted in the following emotional experiences. For the a priori selected melancholic stimuli, the term melancholy was used to describe the listeners’ emotional experiences most often (147 times), followed by grieved (98 times), tender (63 times), tension (55 times), peaceful (53 times), relaxed (51 times), and bored (51 times). The other terms listed by participants included the following terms: coy, creepy, curious (4 times), intention, jumpy, lonely, longing (2), “makes me think of rain,” sad, somber, spooky, unrest/uncanny, and wallowing.

Emotional experiences induced by grieving music were described best by the terms tension (155), grieved (137), melancholy (105), and fearful (97), followed by transcendent (49), tender (47), and angry (46). The other terms identified by participants included the following: angst, anguish (2), anxious (2), apathetic, creepy, crying, deathly, focused/expecting, hope/persistence, intrigued, lonely, mourning, painful, sad, sorrowful, spiteful, stressed (2), stressful, teary-eyed, unstable, worrisome, a mix-mash of indescribable emotion, and wanting to stab my ex in the back.

Emotional experiences in response to tender music were described best by the terms peaceful (175), tender (139), softhearted (126), and relaxed (116), followed by nostalgic (86), happy (66), and compassionate (64). The other terms identified by participants included feelings of bittersweet, content, longing, lost, love, playful, romantic

(2), upset, and wistful.


Finally, emotional experiences induced by listening to happy music were best described with the terms happy (159), joyful (154), excited (145), invigorated (123), power (94), and wonder (71). The other terms listed by participants included the following feelings: alive, ambitious, anticipation, bold, curious, dancy, determination, doppie, eager, energetic, heroic, honor, intoxicated, lively, nature, playful (2), pompous, pride/proud (2), and triumph/triumphant (8).

Discussion

Chapter 7 presents results that are consistent with the idea that listeners perceive different emotions in melancholic and grieving music. The results of Study 1 are consistent with the hypothesis that melancholic and grieving musical passages also result in different emotional experiences. Among other emotions, listeners experienced melancholy when listening to the melancholic music and felt grieved when listening to the grieving music. The fact that the distributions of emotions were similar across perceived and experienced emotion could mean that the evoked emotions were induced, at least in part, by an associative process such as emotional contagion.

One interesting finding in the work on perceived emotion in melancholic and grieving music was that listeners perceived more kinds of emotion in the positively-valenced stimuli (tender and happy music) than in the negatively-valenced stimuli

(melancholic and grieving music) (see Chapter 7). The results from Study 1 of the current chapter are also consistent with this finding. Listeners reported primarily feeling melancholy and grief in response to melancholic music, and reported feeling grief, tension, fear, and melancholy from grieving music. In response to tender music, however,

listeners reported feeling tender, peaceful, soft-hearted, relaxed, nostalgic, happy, and compassionate; similarly, in response to happy music, listeners reported feeling happy, excited, invigorated, joyful, power, and wonder. Research from the psychological attitudes literature suggests that positive information is “less diverse and more densely clustered in spatial representations compared to negative information” (Alves et al., 2016; although see Smallman et al., 2014). According to the density hypothesis (Unkelbach et al., 2008), there are few ways to be similar to a positive object, but there are many ways to be different (negative) from a positive object (Unkelbach, 2012). Positivity is considered to be the “normal state” of the world, where deviations from this norm are often considered to be negative (Alves et al., 2016). Because of the similarity of positive information, the brain often processes positive information faster than negative information (Unkelbach et al., 2008). Positive information is also liked more than negative information, partly due to this facilitated processing fluency. The results of

Chapter 7 and the current chapter’s Study 1 suggest that music may follow this trend: there may be more similarity—and therefore more confusion or conflation—among emotions related to positively-valenced music than among emotions related to negatively-valenced music.

Study 1 provides initial support for the idea that melancholic and grieving music give rise to different emotional states. The aim of Study 2 was to replicate the findings of

Study 1 through the use of a different methodology.


Figure 8.1. Induced emotions from the four stimuli types (melancholy, grief, tender, and happy) in Study 1

Study 2

The purpose of the second study is to determine whether listeners spontaneously make distinctions between their experiences while listening to melancholic and grieving musical passages. In this study, listeners were not provided with a list of emotional terms, but rather were asked to freely write about their emotional experiences while listening to music.

Methods

The survey was conducted online via the Qualtrics Research Suite. Participants were asked to listen to two excerpts of music: one passage that had been previously characterized as exhibiting melancholy and one passage that had been previously characterized as exhibiting grief. Two versions of the study were created, each of which utilized different musical stimuli. The four one-minute passages used in Study 2 are listed in Table 8.1 on page 172. The order of the excerpts (one melancholy sample and one grief sample) was randomized across participants.

Participants were given the following instructions: “We want to know what you are experiencing when you are listening to music. While you listen to this clip, please think about how you are feeling. You may answer ‘none’ if you do not feel anything.”

After listening to a musical excerpt, listeners were asked to list up to 30 words or phrases that described their experience while listening to the music. Specifically, participants were given the following three prompts:


(1) Please list up to 10 emotion-related words or phrases that you feel while

listening to this music. Some examples of emotional words are “ecstatic” and

“tranquil.”

(2) Please list up to 10 general phrases or metaphors that describe your

experience while listening to this music. Some examples of a general phrase

are “makes me feel like dancing” and “feels like a summer’s day.”

(3) Please list up to 10 associational words or phrases that describe your

experience while listening to this music. Some examples of associational

words are “pastoral,” “sunny,” “yellow,” and “childhood.”

The intention of using three separate prompts was to promote flexible thinking.

The study aim was to compare the descriptions provided by participants in response to melancholic and grieving music.

Participants

Twenty-four participants took part in Study 2. These listeners were primarily first- and second-year music theory students at The Ohio State University. Fifteen participants (63%) were female. The mean age was 23.9. Participants had an average of 1.85 years of music theory training (range 0-8 years) and an average of 4.88 years of instrument training (range 0-14 years).

Recall that one of the questions in Study 1 was the following: “What kinds of thoughts went through your head as you listened to this music? It could be images, metaphors, moods, or anything else you can think of.” This data was not analyzed in


Study 1, but rather was analyzed along with the data from Study 2. Therefore, responses were collected from 81 participants (57 participants from Study 1 and 24 participants from Study 2).

Analysis

The 81 participants produced 793 responses (191 grief responses from Study 1,

187 melancholy responses from Study 1, 228 grief responses from Study 2, and 204 melancholy responses from Study 2). In order to decipher any underlying trends, two independent assessors unfamiliar with the study were asked to classify the 793 responses into three categories: grief-related words or phrases, melancholy-related words or phrases, and other words or phrases. Only words or phrases that both coders agreed upon were analyzed; namely, if one coder thought a phrase was melancholic and the other coder thought that phrase was grieved, this description was not analyzed.

Descriptions of experiences from the melancholic music contained 38 melancholic responses, 20 grieving responses, and 213 other responses. Descriptions of experiences from the grieving music contained 10 melancholic responses, 49 grieving responses, and 248 other responses. A simple chi-square test revealed that there was a difference in melancholic and grieving responses across the two conditions (X2 = 26.543, p < 0.01). The results are therefore consistent with the hypothesis that listeners experienced more melancholy from the a priori deemed melancholic music and experienced more grief from the a priori deemed grieving music. The difference between experienced melancholy and grief, first reported in Study 1, was therefore replicated in
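Because the agreed-upon counts are reported in full here, this test can be reconstructed directly. The sketch below uses only the melancholy- and grief-related counts from the preceding paragraph (excluding the "other" category); with Yates' continuity correction, which scipy applies to 2 x 2 tables by default, this appears to reproduce the reported statistic.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Agreed-upon coder classifications from the text above.
# Rows: music condition (melancholic, grieving).
# Columns: response category (melancholy-related, grief-related).
table = np.array([[38, 20],
                  [10, 49]])

# For 2x2 tables, scipy applies Yates' continuity correction by default;
# on these counts this yields a chi2 of approximately 26.543 with df = 1.
chi2, p, df, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, df = {df}, p = {p:.3g}")
```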

Study 2.


The words or phrases generated by participants were also subjected to a content analysis. The purpose of this analysis was to check for any underlying dimensions of the responses that were not summarized by the terms melancholy and grief. Two new assessors were asked to independently sort the 793 responses into as many categories as they saw fit. The assessors were unaware of whether each response described a feeling evoked by melancholic or grieving music. The first assessor classified the responses into

21 categories, while the second assessor classified the responses into 36 categories (see

Appendix E, Table E.2 on page 347). The assessors then met to finalize the number and names of the categories. The result of the collaboration was 26 categories (see Appendix

E, Table E.2). After the assessors agreed on final categories, they worked together to reclassify all 793 descriptions under these 26 categories. Table E.2 provides information regarding how many descriptions were classified under each of these categories.

After all of the descriptions were classified under the final 26 categories, I examined the distribution of responses related to melancholic music versus the distribution of responses related to grieving music. The proportions of melancholic and grieving responses assigned to the 26 categories are shown in Table 8.2 (page 186). As can be seen from the table, the number of responses in each category varied between melancholic and grieving music. Grieving music was dominated by feelings of

Anticipation/Uneasy, Tension/Intensity, Crying/Distraught/Turmoil, Death/Loss, and

Epic/Dramatic/Cinematic. On the other hand, melancholic music resulted in feelings of

Sad/Melancholy/Depressed, Reflective/Nostalgic, Relaxed/Calm, and Rain/Dreary

Weather.


The most obvious difference between these profiles is the distribution of high-arousal and low-arousal terms; clearly, grieving music led to more feelings characterized by high arousal, while melancholic music resulted in more low-arousal feelings. Another observation is that listeners experienced positively-valenced emotions in response to both melancholic and grieving music. As might be expected, listeners reported feeling relaxed and calm when listening to melancholic music, but experienced transcendence in response to grief-like music (a finding also present in the principal components analysis of Study 1).

The last major finding of note is that there are more instances of crying and death/loss in response to grieving music, whereas there are more instances of reflection associated with melancholic music.

In summary, the results of Study 2 are consistent with the claim that listeners experience different emotion profiles in response to a priori deemed melancholic and grieving passages. The induced affective states are similar in Studies 1 and 2, even though these studies utilized different methodologies.


Grief                                    | Melancholy
Anticipation/Uneasy (9%)                 | Sad/Melancholy/Depressed (13%)
Tension/Intensity (8%)                   | Reflective/Nostalgic (10%)
Crying/Distraught/Turmoil (7%)           | Relaxed/Calm (8%)
Death/Loss (7%)                          | Rain/Dreary Weather (8%)
Epic/Dramatic/Cinematic (7%)             | Unclassifiable (7%)
Sad/Melancholy/Depressed (6%)            | Imagery/Other Colors (7%)
Anxiety/Stress (5%)                      | Darkness/Dark Colors (6%)
Darkness/Dark Colors (5%)                | Death/Loss (5%)
Suffering/War (4%)                       | Longing (4%)
Unclassifiable (4%)                      | Alone (4%)
Longing (4%)                             | No Emotion/Boredom (4%)
Mixed Emotions/Expressive/Emotional (3%) | Crying/Distraught/Turmoil (3%)
Transcendent (3%)                        | Anticipation/Uneasy (3%)
Grief (3%)                               | Nature (3%)
Imagery/Other Colors (3%)                | Confused/Lost (3%)
Rain/Dreary Weather (3%)                 | Grief (3%)
Anger (3%)                               | Love/Sympathy (2%)
Brave/Determined (2%)                    | Moving/Physical (2%)
Moving/Physical (2%)                     | Mixed Emotions/Expressive/Emotional (2%)
Nature (2%)                              | Epic/Dramatic/Cinematic (2%)
No Emotion/Boredom (2%)                  | Anxiety/Stress (1%)
Relaxed/Calm (2%)                        | Transcendent (1%)
Confused/Lost (1%)                       | Tension/Intensity (1%)
Reflective/Nostalgic (1%)                | Anger (1%)
Love/Sympathy (1%)                       | Suffering/War (<1%)
Alone (<1%)                              | Brave/Determined (0%)

Table 8.2. Categories of responses to melancholic and grieving stimuli from the content analysis in Study 2

Discussion

In the last ten years, research has consistently found that nominally sad music gives rise to a number of experiences, both positively- and negatively-valenced (Eerola &

Peltola, 2016; Eerola, Peltola, & Vuoskoski, 2015; Eerola, Vuoskoski, Peltola, Putkinen,

& Schäfer, 2017; Laukka, Eerola, Thingujam, Yamasaki, & Beller, 2013; Peltola &

Eerola, 2016; Quinto, Thompson, & Taylor, 2014; Taruffi & Koelsch, 2014; van den Tol,

2016). The results from Study 2 replicate these findings. The idea that music—even

negatively-valenced music—can lead to positive experiences is a major conclusion in work surrounding the Geneva Emotional Music Scale (Zentner, Grandjean, & Scherer,

2008). Of the nine emotions these researchers claim can be induced from music (wonder, transcendence, tenderness, nostalgia, peacefulness, power, joyful activation, tension, sadness), seven are positive and only two (tension and sadness) are negative. Of course, one potential reason why negatively-valenced music may lead to positive emotions is that there are no negative real-life implications from listening to negatively-valenced music—it is “just music.” This reasoning applies to both melancholic and grieving music.

The current study adds to the music and emotion literature by suggesting that the arousal level of the musical passage is also of paramount importance: high- and low- arousal music lead to different emotional experiences. Low-arousal music, like melancholic music, may result in positive and negative low-arousal emotional experiences. High-arousal music, like grieving music, may lead to positive and negative high-arousal emotional experiences. Although this finding seems intuitive, music cognition research has not traditionally examined the difference between melancholic and grieving music. While there are consistent differences cited between tender music (which exhibits low arousal) and happy music (which exhibits high arousal), this arousal-based difference has not been recognized with regard to music-related sadness.

In Chapter 7, we saw that listeners are able to distinguish between representations of melancholy and tenderness, as well as between representations of grief and happiness.

The results of Study 2, however, may not be consistent with the idea that listeners can easily distinguish experiences evoked through musical melancholy and tenderness. Future work should test this conjecture directly.


General Discussion

As mentioned earlier in the dissertation, researchers have made use of at least

2,000 musical passages that they have labeled as sad (Chapters 3 and 4). This collection of sad music includes seemingly disparate musical selections, such as Barber’s Adagio for Strings, Tori Amos’s Icicle, and ’s Summer Night. Two important questions are the following: (1) do these 2,000 passages express sadness in the same way? and (2) do all these sad stimuli result in the same emotional experiences?

The theory driving the current chapter is that there are at least two kinds of musical sadness: musical melancholy and musical grief. Combining the results of the current chapter with Chapters 5-7, nominally melancholic and grieving music have been shown to exhibit different structural characteristics, convey different emotions to listeners, and result in distinctive emotional experiences. The latter two of these findings were each replicated with separate methodological designs. Of course, these results are correlational in nature and therefore provide no support for any type of causal mechanism. In order to infer causality, some sort of manipulation would be needed.

Future work should aim to manipulate whether one views a work as melancholic or grieving. (See Chapter 11 for some suggestions of how a researcher can do this).

There are several theoretical perspectives that could explain why melancholic music can be differentiated from grieving music. The current study does not provide evidence in favor of any of the following theoretical perspectives—the conclusions of the study are limited to the fact that people seem to experience different types of emotions from melancholic and grieving music. Important directions for future work include (1) the examination of potential mechanisms of music-induced melancholy and grief and (2)

(2) the direct comparison of some of the tenets of the emotional theories discussed in Chapter 2 with regard to melancholy and grief. The next two chapters—Chapters 9 and 10—provide a first step in this process. Each chapter examines a different theoretical perspective that may explain the results found in Chapters 5-8.

Chapter 9 investigates a theory by David Huron that claims that there may be separate biological foundations for melancholy and grief (Huron, 2015). As mentioned in

Chapter 5, melancholy is an emotion associated with low physiological arousal and relaxed facial features, while grief/anguish is an emotion associated with high physiological arousal, crying, and wailing (e.g., Darwin, 1872; Vingerhoets & Cornelius,

2012). Huron posits that the biological function of grief is to solicit help from others in times of turmoil or loss, whereas the biological function of melancholy is to reflect on life station issues. In other words, grief may act as an overt, social emotion whereas melancholy may act as a covert, self-directed emotion. The results of the current study are somewhat consistent with this idea: in Study 2, melancholic music was associated with reflection and nostalgia, while grieving music was associated with death, loss, crying, and turmoil.

A second possible interpretation for the separation of melancholy and grief is presented in Chapter 10. In this chapter, I suggest that in order to better understand emotional responses to musical stimuli, we need to change the way we use emotional terminology. I explore the psychological phenomenon called emotional granularity, which is a term most often used in psychological construction theories of emotion

(see Chapter 2) and refers to the degree of precision a person uses to describe their emotional states. The objective of Chapter 10 is to advocate for a training program that

researchers and participants can use to acquire a reasonable degree of emotional granularity. As people’s emotional granularity increases, we may find that listeners differentially perceive and experience fear and anxiety or happiness and tenderness the same way that they differentially perceive and experience melancholy and grief.

One other theory, not examined in this dissertation, could explain the results that people experience different patterns of emotions in response to melancholic and grieving music. Work on emotional contagion suggests that a person may sometimes experience an emotion that is being expressed by another human being. Emotional contagion is also thought to be possible in response to non-human expressions, including music (Juslin,

2013). That is, an induced emotion could arise when a person hears musical features that emulate emotional features found in human vocal expressions (e.g., Juslin & Laukka,

2003). Indeed, we have seen that some melancholic music may mirror sad speech, whereas grieving music may mirror grieving speech (Chapter 6). In response to melancholic and grieving music, then, listeners may experience melancholic or grieving feelings through the mechanism of emotional contagion, in line with how people respond to other vocal homologues (Juslin, 2013). Since melancholic music may also exhibit similar features to tender music (e.g., Juslin & Laukka, 2003), this theory could also explain why listeners feel relaxed and calm when listening to melancholic music (as seen in Study 2). Similarly, grief passages, which tend to be faster and more energetic, might increase a listener’s arousal levels and lead to feelings of passion, , or transcendence (also seen in Study 2).


Chapter 9: Downstream Pro-Social Tendencies from Melancholic and Grieving Music: An Ethological Perspective

In 2015, David Huron published an article that drew attention to an understudied area of emotion research: ethology. While this article acknowledged the importance of popular theories of emotion induction, such as learned associations, innate auditory responses, and mirror neurons (e.g., Juslin, 2013; Molnar-Szakacs & Overy, 2006), it also proposed another mechanism, ethological signals and cues, which may additionally contribute to evoked affective states. In order to explain the difference between ethological signals and cues, Huron draws from the literature on animal behavior (e.g.,

Lorenz, 1970). When a rattlesnake feels threatened and wishes to warn another animal that it could be harmed, the snake raises its tail and creates its signature rattling sound.

The observing animal sees and hears the rattlesnake’s warning and often scampers away quickly. In this scenario, the rattlesnake is utilizing an ethological signal—an evolved behavior that is intended to change the behavior of another animal. The rattle of a rattlesnake is not a subtle action, for it uses multiple sensory systems (sight and sound) to communicate with the observer. Ultimately, the interaction between the rattlesnake (the signaler) and the spectator (the observer) benefits both parties, as the rattlesnake will cease to feel threatened and the observer will stay alive.


Huron describes the behavior of another animal, the mosquito, in order to illustrate an ethological cue. Many people will relate to the experience of hearing a mosquito approach while they are enjoying a warm summer’s night outside. People in mosquito-filled climates learn from experience to associate the buzzing mosquito sound with potential itchy bites. When they hear this sound, they learn to swat the approaching bug or move to another location. The sound association, in this case, benefits the person

(the observer), but not the hungry mosquito (the one issuing the cue). A cue, like the sound of a mosquito flapping its wings, is an unintentional and non-functional behavior that nonetheless conveys information to an observer. The mosquito does not intend to communicate its approach to its prey; rather, the cue is involuntary.

In summary, an ethological signal is an intentional act that aims to change the behavior of an observer to the benefit of both the signaler and the observer, while an ethological cue is an unintentional act that can inadvertently change the behavior of an observer, but does not benefit the animal issuing the cue. Although recognizing both signals and cues can benefit an observer, ethologists speculate that different neural mechanisms underlie these processes. Animals, including humans, are thought to be biologically hardwired to respond to ethological signals. If signals are indeed crafted by evolution to communicate a human’s (or animal’s) intention to another human (or animal), it makes sense for evolution to also create an innate motivation to act or respond to the signal on the part of the observer (Maynard Smith & Harper, 2003; Silk, Kaldor, &

Boyd, 2000). As Huron points out, human observers often experience these motivational states as emotions (Huron, 2015; Tomkins, 1980). In other words, humans may experience a biological propensity to respond to an ethological signal with a specific

emotional state. Humans may also feel an emotion in response to an ethological cue, but these emotional states may differ widely across people and experiences (Huron, 2015).

The main proposal in Huron’s 2015 article was that the difference between ethological signals and cues may also apply in music-listening experiences. Huron proposes, for example, that there are differences between music-related melancholy and music-related grief, a theory that echoes the psychological distinction between melancholy and grief (e.g., Darwin, 1872; Vingerhoets & Cornelius, 2012). In line with this distinction, I have presented a series of experiments that suggest that music-related melancholy and grief contain different structural features (Chapter 6), that listeners perceive different emotions in these two types of passages (Chapter 7), and that listening to melancholic and grieving music results in different emotional experiences (Chapter 8).

Huron’s claim is that melancholy represents an ethological cue, whereas grief acts as an ethological signal. As first discussed in Chapter 5, the biological goal of melancholy, according to some emotion theorists, is for the experiencer to ruminate about their current life station or to reflect on a failed goal (Ekman, 1992). People who experience melancholy tend to exhibit features of low arousal, such as relaxed facial expressions and a tendency to remain mute (Andrews & Thomson, 2009; Nesse, 1991).

People who feel melancholic often exhibit similar characteristics to people who are sleepy; in fact, some research suggests that the physiological characteristics of melancholy and sleepiness overlap extensively (Andrews & Thomson, 2009; Nesse,

1991). This research also suggests that cognitive differences are the only factors that distinguish feelings of melancholy and sleepiness. A person in a melancholic state, therefore, does not necessarily convey any specific information to an observer; rather, an

observer is not always able to differentiate a melancholic person from a sleepy person.

Melancholy is considered to be a covert emotion; namely, its expressions are not necessarily intended to be communicated to another person. In terms of the ethological framework, melancholic expressions act as cues, for an observer must use learned associations to recognize a person in a state of melancholy.

In contrast to melancholy, grief arises in response to experiences of loss, whether due to a significant death, loss of safety, loss of autonomy, or loss of identity (Epstein,

2019). When a person experiences grief, their expressions are often multimodal: a grieving person may breathe erratically, cry, wail, gasp, or exhibit a blotchy face

(Rosenblatt et al., 1976; Vingerhoets & Cornelius, 2012). There is no confusion about the type of emotion being expressed in these situations. Grief, therefore, is an overt emotion, or an emotion whose goal is to communicate an affective state to another person. In other words, grief may act as an ethological signal. Grieving expressions, both vocal and visual, may have been crafted by evolution to change the behavior of an observer. A person experiencing grief often needs help from another person—usually some kind of compassionate behavior. Research supports this idea: the emotional tears often associated with grieving states have been shown to contain a pheromone that solicits compassionate behaviors from observers (Gelstein et al., 2011). Displays of grief, then, should give rise to feelings of compassion or pro-social behaviors in observers.

As described in Chapters 5-8, there is support for the idea that melancholic music differs from grieving music in terms of the musical structure, perceived emotions by listeners, and experienced emotions in listeners. If Huron is correct that musical grief functions as an ethological signal and musical melancholy acts as an ethological cue, we

would expect to find certain patterns in the behavior and emotional experiences of listeners. The current chapter tests Huron’s conjecture directly by investigating whether listeners respond to grieving music, but not melancholic music, with feelings of compassion and pro-social tendencies. The first study aims to test whether listeners differ in pro-social tendencies after hearing grieving music in contrast to melancholic music.

The second study tests whether listeners experience different levels of compassion in response to melancholic and grieving music.

Study 1: Examining Pro-social Tendencies

The aim of the first study is to test whether listeners respond to musical displays of grief—but not musical displays of melancholy—with pro-social tendencies, in line with the ethological perspective. Accordingly, the hypothesis was as follows:

H. Listeners will exhibit more pro-social feelings or tendencies after listening to

grieving music than after listening to melancholic music.

Stimuli

Two passages of music were chosen for Study 1, each lasting about one minute in duration. The first passage, Arnold and Price’s Sherlock Series 2, Track 18 (0:42-1:32), has been shown to effectively represent and evoke grief in listeners (Chapters 5-8). In this orchestral work, the main melody is carried by the full first violin section, with considerable counterpoint in other voices. In general, the texture is large and orchestral. The second passage, Faure’s Après un rêve (0:00-0:52), has been used in previous work to represent and evoke melancholy (Chapters 5-8). The version of the Faure work used in this study is

195 an instrumental version that replaces the soprano from the original version with a cello.

In contrast to the Arnold and Price work, the Faure piece is a more intimate work with solo cello and piano accompaniment.

Methods

Study 1 was conducted on two separate days. On the first day, participants were asked to listen to a passage of music (either melancholic or grieving) and to respond to questions on a written survey. About one month later, participants completed the same task with the other musical condition (melancholy or grief). The tasks were conducted in four groups of roughly equal size, corresponding to the four Music Theory 1 classes at The Ohio State University. Two groups were randomly assigned to hear the grieving music on the first day, while the other two groups heard the melancholic music on the first day. The purpose of the within-subjects design was to minimize the influence of individual characteristics, such as baseline empathy and musical tastes. The second session took place about a month after the first so that participants would be unlikely to remember their responses from the first task.

In all psychological studies, it is important to try to minimize bias. One potential source of bias comes from the participants: it is natural for participants to want to please the experimenter or to appear in a positive light. In a study about pro-social behavior, then, participants could unintentionally express more pro-social tendencies than they would in a more naturalistic setting. In order to minimize this type of bias, participants were deceived about the intention of the study. Specifically, participants were told that the study’s purpose was to “examine your experience while listening to music, including things like your musical preferences and interests, musical memory, and understanding of the musical structure.”

After hearing these instructions, participants were told that they would hear a single passage of music. They were then asked to answer a few questions about their music-listening experiences. The questionnaires contained one measure of pro-social feelings (the IOS task discussed below) and lure questions. The lure questions were included in order to reduce possible demand characteristics—specifically, to prevent participants from inferring that the research concerned pro-social behavior.

First, participants were asked to indicate how much they liked the musical passage on a three-point Likert scale (do not like it at all, like it, like it very much) and whether they thought they liked the music more than their classmates (Yes/No). Second, participants were asked to rate how familiar they were with the music on a three-point scale (not familiar, familiar, very familiar) and whether they thought they were more familiar with this music than their classmates (Yes/No). As all of the participants were first-year students in music theory and aural skills classes, the remaining lure question addressed characteristics of the music itself. Specifically, participants were asked to “describe features of the music that make it interesting to you as a listener.”

After participants completed the survey, they were told the study was over and were thanked for participating. The experimenters—two of the Music Theory 1 instructors—then asked participants to respond to two questions regarding future class activities on the back of the questionnaire. These questions represented two additional measures of pro-social tendencies (described below). The questionnaires were then collected by the experimenter. The entire study lasted about five minutes.

After completing the same task on the second day, participants were asked what they believed to be the study hypothesis. Informal observation suggests that participants did not guess the true purpose of the experiment, although some individuals familiar with psychological methodologies may have figured it out. Participants were then fully debriefed about the true nature of the study. Students generally seemed surprised, again suggesting that many of them may have been effectively deceived.

Operationalizations of Pro-social Tendencies

As discussed above, pro-social feelings and tendencies were operationalized in three ways. The first measure of pro-social feelings was the amount of self-other overlap a listener felt with the music. In response to displays of distress, humans often respond with empathy or sympathy. Both of these responses are thought to arise from the caregiving nature of humans; humans prefer to help those who are in distress (physically or emotionally), often leading to acts of altruism (Silka and Housea, 2011). Helping another person in need also usually produces positive feelings in the helper (Cialdini et al., 1997; Hauser et al., 2014). One way observers’ empathic and positive feelings are measured in the psychological literature is through self-reported feelings of self-other overlap, or the amount of shared feeling the observer has with the person in distress. Arthur Aron, Elaine Aron, and Danny Smollan developed the Inclusion of Other in the Self (IOS) scale in order to measure self-other overlap (Aron, Aron, & Smollan, 1992). The IOS, depicted in Figure 9.1, is a measure of social connectedness that has been widely used in the social science literature and has also been used in the music cognition literature (e.g., Weinstein, Launay, Pearce, Dunbar, & Stewart, 2016). In Study 1, the questionnaire asked participants to complete the IOS scale by “circl[ing] the picture that best describes your current relationship with the music.”

Figure 9.1. Inclusion of Other in the Self (IOS) scale from Aron, Aron, and Smollan (1992)

The second operationalization of pro-social tendencies also employed a measure adapted from the psychological literature (Vohs et al., 2006). After participants were informed the study had concluded, they were told that an undergraduate needed help coding 200 data sheets (each of which would take 5 minutes to encode). The participants were asked, “If you are willing to help this student and encode some data sheets, please write on the back of the paper how many sheets you would be willing to encode.” Responses to this question could range from 0 to 200 data sheets. People who feel more pro-social should be willing to encode more data sheets.

The last operationalization of pro-social tendencies, also adapted from Vohs and colleagues (2006), regarded students’ desire to interact with others. After the completion of the questionnaire, one of the experimenters (both were Music Theory 1 instructors) told participants that they would be participating in a class project later in the semester. The participants were asked to indicate whether they would like to complete the project with another person or alone. Those who were feeling more pro-social should prefer the company of others.

Participants

The 68 participants were Music Theory 1 students at The Ohio State University. No other demographic information was collected. The students were tested in four group settings, where each group corresponded to a Music Theory 1 class. Each group heard the musical passages over the classroom speaker system and filled out the questionnaires on paper.

Results: Inclusion of Other in the Self Scale

The hypothesis tested was that, after listening to grieving music, one would feel more self-other overlap (represented by more intertwined circles) than after listening to melancholic music. Each participant’s response to the IOS scale was translated into a 7-point ordinal code, where 1 represented the picture with no circle overlap and 7 represented the complete overlap of circles. The mean score for the melancholy condition was 4.26 (SD = 1.72) and the mean score for the grief condition was 3.97 (SD = 1.30).

These two means were not significantly different (t = 1.62, p > 0.05).
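
Since the design was within-subjects, this comparison can be computed as a paired t-test. The following is a minimal sketch in Python (scipy), assuming hypothetical per-participant codes; the arrays below are illustrative placeholders, not the study data.

```python
# Minimal sketch of the IOS comparison as a paired t-test, assuming the
# within-subjects design described above. The arrays are hypothetical
# placeholders (one 1-7 IOS code per participant), NOT the study data.
from scipy.stats import ttest_rel

ios_melancholy = [4, 5, 3, 6, 4, 2, 5, 4, 6, 3]
ios_grief      = [4, 4, 3, 5, 4, 3, 4, 4, 5, 3]

t_stat, p_value = ttest_rel(ios_melancholy, ios_grief)
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
```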

Results: Data Sheet Encoding

The hypothesis tested was that after listening to grieving music, a student would be willing to encode more data sheets than after listening to melancholic music. On average, those who listened to melancholic music were willing to encode 1.77 sheets of data, whereas those who listened to the grieving music were willing to encode 0.96 sheets of data. This mean difference was not statistically significant (t = 1.15, p > 0.05). It is possible that the lack of difference was due to a floor effect, as most participants indicated that they were not willing to encode any data sheets.

Results: Group Project

The final hypothesis was that, after listening to grieving music, listeners would be more willing to work with another person on a group project than after listening to melancholic music. On average, 57% of those who listened to melancholic music wanted to work with another person, whereas 53% of those who listened to the grieving music wanted to work with another person. This difference was not statistically significant (χ² = 0.348, p > 0.05).
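
For readers who wish to see how such a comparison can be computed, a minimal sketch follows. The 2×2 counts are hypothetical, back-calculated from the reported percentages (roughly 57% and 53% of 68 responses per condition); they are not the actual tallies and will not reproduce the reported statistic exactly.

```python
# Sketch of the group-project comparison as a chi-square test on a 2x2
# contingency table. Counts are hypothetical reconstructions from the
# reported percentages, not the actual response tallies.
from scipy.stats import chi2_contingency

#                  [prefers partner, prefers to work alone]
melancholy_row = [39, 29]   # ~57% of 68 responses (hypothetical)
grief_row      = [36, 32]   # ~53% of 68 responses (hypothetical)

chi2, p, dof, expected = chi2_contingency([melancholy_row, grief_row])
print(f"chi-square = {chi2:.3f}, df = {dof}, p = {p:.3f}")
```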

Discussion

None of the tested measures of pro-social tendencies differed significantly between the melancholic and grieving music conditions at the 0.05 alpha level. It is possible that the study was underpowered, as there were only 68 participants; a post-hoc power analysis revealed that the power of the test was only 56%. Of course, there were also limitations created by the study design. A classroom setting may not be the ideal place to carry out such experiments, as students may feel pressured by their peers to respond in a certain way. The experiments were also conducted close to Thanksgiving and Winter Break, so students may have been busy and therefore less willing to engage in pro-social behavior. In an ideal setting, participants would be incentivized to be pro-social by participating in an online game and would listen to the music individually over headphones. Finally, only one example of melancholic music and one example of grieving music were presented to the participants; the negative results of Study 1 therefore may not generalize to other melancholic and grieving music. Future work should make use of more musical stimuli, individualized testing settings, and more naturalistic measures of pro-social behavior or tendencies.
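
The post-hoc power figure mentioned above can be approximated with standard tools. Below is a minimal sketch using statsmodels; the effect size (Cohen's d) is an assumed placeholder, since the chapter does not report the value used in the original power analysis.

```python
# Sketch of a post-hoc power analysis for a paired t-test with n = 68.
# The effect size is an assumed placeholder; the chapter does not report
# the effect size behind the 56% power estimate.
from statsmodels.stats.power import TTestPower

power = TTestPower().solve_power(effect_size=0.25,   # assumed Cohen's d
                                 nobs=68,
                                 alpha=0.05,
                                 alternative='two-sided')
print(f"estimated power = {power:.2f}")
```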

In describing the grieving music, however, there were some perceptive responses. Some of these responses included the following sentiments:

(1) It makes me feel sympathetic as the violin sounds like it’s imitating someone who is crying or sad.

(2) I imagine performing this distressing and dramatic duet with my friend.

(3) Melody with countermelodies makes me feel as if there are two people crying out about something.

In response to the melancholic excerpt, someone noted that they liked the “melancholy nature of the piece [and] the instrumental choices which add to the mournful quality.” These statements will be discussed further later in this chapter.

Nevertheless, Study 1 revealed no support for the idea that participants experienced more feelings of pro-sociality in response to grieving music, compared to melancholic music. The results of Study 1 are therefore not consistent with the idea that listeners responded to grieving music as an ethological signal and melancholic music as an ethological cue. Rather than testing pro-social tendencies, the second study examined participants’ self-reported feelings in response to melancholic and grieving music. The hypothesis for the second study was that listeners would feel more compassionate after listening to grieving music than after listening to melancholic music.

Study 2: Measures of Compassionate Feelings

Chapters 7 and 8 presented studies that examined the emotions listeners perceived and experienced in response to two types of sad musical passages. The hypotheses tested were simply that listeners would perceive (and experience) different patterns of emotions in response to grieving and melancholic music. The results of these studies were consistent with these claims. If Huron’s conjecture about ethological signals and cues were true, we might expect the data to be consistent with additional hypotheses (listed below). The current study makes use of the data from Chapters 7 and 8 in order to test these ethological conjectures.

Recall that people exhibit similar physiological and behavioral states when melancholic, sleepy, and relaxed (Andrews & Thomson, 2009; Nesse, 1991). The fact that similar expressions are displayed for these emotions has been used to support the idea that melancholy may act as an ethological cue (Huron, 2015). Ethological signals, like grief, tend to have distinct physiological and behavioral profiles.

Given this research, I proposed the following hypothesis:

H1. Listeners tend to confuse melancholic musical passages with other low-arousal emotions such as sleepiness, relaxation, or tenderness. This confusion will not happen with grieving music.

An ethological perspective also suggests that grieving music may elicit compassionate feelings in listeners. Recall that ethological signals represent evolved capacities for communication between an experiencer and an observer (Maynard Smith & Harper, 2003). Signals tend to be conspicuous and are easily detected by the observer. In turn, detection of the signal results in behavioral and motivational changes on the part of the observer—changes that benefit both the experiencer and the observer. In the case of grieving displays, the observer should feel motivated to act compassionately (Gelstein et al., 2011). Feelings of compassion should be higher in response to grieving displays, compared to melancholic displays (Huron, 2015).

Accordingly, the second hypothesis is the following:

H2. Listeners will experience more compassion in response to grieving music than in response to melancholic music.

Methods

Hypothesis 1 regards emotions that listeners perceive in melancholic and grieving music, while Hypothesis 2 regards emotions that listeners experience in response to melancholic and grieving music. Accordingly, the hypotheses were tested using different methodologies.

Chapter 7 explored the emotions listeners perceived in melancholic, grieving, happy, and tender music using two separate methodological designs. Recall that in the first study, I asked listeners to choose which emotion was best represented in a musical passage from a list of five options: melancholy, grief, tender, happy, and none. In the second study, I asked listeners to choose which emotion(s) they perceived in melancholic, grieving, happy, and tender musical excerpts from a list of fourteen options: angry, bored, disgusted, excited, fearful, grieved, happy, invigorated, melancholy, relaxed, surprised, tender, neutral/no emotion, and other emotion(s).

I also studied emotional experiences in response to melancholic, grieving, happy, and tender music using two separate methodological designs (Chapter 8). Recall that in the first study, participants were asked to listen to musical passages and select the emotion(s) they experienced from a list of terms: angry, bored, compassionate, disgusted, excited, fearful, grieved, happy, invigorated, joyful, melancholy, nostalgic, peaceful, power, relaxed, soft-hearted, surprised, sympathetic, tender, transcendent, tension, wonder, neutral/no emotion, and other emotion(s). In the second study, listeners heard grieving and melancholic musical passages and were asked to freely describe their experiences while listening to the music.

Analysis

If the data were consistent with Hypothesis 1, we would expect to find the following trends in the studies regarding perceived emotion:

(1) In the five-alternative forced-choice task, listeners will confuse melancholic and tender musical passages.

(2) In the emotion selection task, listeners will perceive low-arousal emotions (bored, melancholy, relaxed, and tender) roughly equally in the melancholic stimuli.

These two conjectures were tested using the perceived emotion data presented in Chapter 7. When presented with the melancholic stimuli in the five-alternative forced-choice task, participants chose the descriptor melancholy 70 times (40%), grief 63 times (36%), tender 35 times (20%), happy 2 times (1%), and none 6 times (3%). A chi-square test suggests that these emotional terms were not used equally to describe melancholic music (χ² = 111.9, df = 4, p < 0.01). A binomial test indicated that the numbers of melancholy and tender responses differed in a statistically significant manner (p < 0.01).
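
Both statistics can be reproduced directly from the reported counts. A minimal sketch in Python (scipy): the equal-frequency goodness-of-fit test on the five response options recovers the reported χ² = 111.9, and an exact binomial test compares the melancholy and tender response counts.

```python
# Chi-square goodness-of-fit on the reported response counts for the
# melancholic stimuli, with equal expected frequencies across the five
# options; this reproduces the chi-square value of 111.9 reported above.
from scipy.stats import binomtest, chisquare

counts = [70, 63, 35, 2, 6]   # melancholy, grief, tender, happy, none
chi2, p = chisquare(counts)
print(f"chi-square = {chi2:.1f}, df = {len(counts) - 1}, p = {p:.4f}")

# Follow-up exact binomial test: melancholy (70) vs. tender (35) responses.
result = binomtest(70, n=70 + 35, p=0.5)
print(f"binomial p = {result.pvalue:.4f}")
```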

When presented with tender stimuli, participants chose the descriptor tender 129 times (73%), melancholy 16 times (9%), grief 1 time (<1%), happy 22 times (12%), and none 9 times (5%). A chi-square test suggests that these emotional terms were not used equally to describe tender music (χ² = 316.31, df = 4, p < 0.01). A binomial test indicated that the numbers of tender and melancholy responses differed in a statistically significant manner (p < 0.01). Therefore, these results were not consistent with Hypothesis 1.

When presented with the melancholic stimuli in the emotion selection task, listeners perceived melancholy 191 times (33%), boredom 25 times (4%), relaxation 74 times (13%), and tender 96 times (17%). A chi-square test suggests that these emotional terms were not used equally to describe melancholic music (χ² = 316.31, df = 4, p < 0.01). Binomial tests indicated that the number of melancholy responses differed significantly from the number of boredom responses (p < 0.01), from the number of relaxation responses (p < 0.01), and from the number of tender responses (p < 0.01). Once again, the results are not consistent with Hypothesis 1.

If the data were consistent with Hypothesis 2, we would expect to find the following trends in the studies regarding experienced emotion:

(1) In the emotion selection task, listeners will report feeling more compassionate or sympathetic in response to the grieving music than in response to the melancholic music.

(2) In the free association task, listeners will report feeling more sympathy in response to the grieving music than in response to the melancholic music.

These two conjectures were tested using the experienced emotion data, first discussed in Chapter 8. When presented with the melancholic stimuli in the emotion selection task, listeners reported feeling compassionate 32 times (4%) and sympathetic 44 times (6%). Proportion tests comparing the proportions of compassionate and sympathetic responses to the rate at which these words would theoretically be chosen by chance (4.34%) revealed that neither term was used more often than chance (both ps > 0.05). When presented with the grieving stimuli, listeners reported feeling compassionate 34 times (4%) and sympathetic 30 times (3%); proportion tests again revealed that neither term was used more often than chance (both ps > 0.05). The amount of compassion felt in response to melancholic and grieving music was directly compared using a binomial test. The results were not consistent with the idea that people felt more compassion in response to grieving music than in response to melancholic music (p > 0.05). A second binomial test compared the amount of sympathy felt in response to melancholic and grieving music. Once again, the results were not consistent with the idea that listeners felt more sympathy in response to grieving music than in response to melancholic music (p > 0.05).
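
As an illustration of these proportion tests, a single comparison against the chance rate can be run as an exact binomial test. The total number of responses n below is a hypothetical placeholder, since only the counts and rounded percentages are reported here.

```python
# Sketch of one proportion test against the reported chance level (4.34%).
# The denominator n is a hypothetical placeholder, chosen so that the 32
# reported "compassionate" selections correspond to roughly 4% of responses.
from scipy.stats import binomtest

result = binomtest(k=32,           # reported "compassionate" count
                   n=800,          # hypothetical total number of responses
                   p=0.0434,       # chance rate reported in the chapter
                   alternative='greater')
print(f"p = {result.pvalue:.3f}")  # large p: not above chance
```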

In the free association task, there were only four responses (1%) categorized as love/sympathy in response to grieving music. A binomial test indicates that this proportion was significantly less than chance (p < 0.05). There were nine responses (2%) categorized as love/sympathy in response to melancholic music; a binomial test indicates that this proportion was not significantly different from chance (p > 0.05). The amount of love/sympathy felt in response to melancholic and grieving music was directly compared using a binomial test. The results were not consistent with the idea that people felt more love/sympathy in response to grieving music than in response to melancholic music (p > 0.05).

Discussion

The results of Study 2 were not consistent with the idea that listeners confused melancholic musical passages with other low-arousal emotions. Similarly, the results were not consistent with the idea that listeners experienced more compassion or sympathy in response to grieving music than in response to melancholic music. Therefore, the results do not seem to support the idea that musical melancholy acts as an ethological cue and musical grief functions as an ethological signal.

General Discussion

The intention of this chapter was to test Huron’s (2015) claim that music-related melancholy may operate as an ethological cue and that music-related grief may instead operate as an ethological signal. In order to test this hypothesis, I examined downstream behavioral tendencies and experienced feelings resulting from music listening. The results were not consistent with the idea that, compared to responses to musical melancholy, listeners respond to musical grief with increased pro-social tendencies or with increased compassionate feelings. Despite these negative outcomes, however, I believe that future work on melancholic and grieving music may find results consistent with the distinction between ethological cues and signals. The reasons why I believe ethological signals and cues represent a fruitful avenue for future research are the following: (1) the current studies were limited to self-reports of listeners’ conscious experiences, (2) post-hoc tests on the experiential data presented in Chapter 8 suggest some differences consistent with the ethological claims, and (3) there are multiple mechanisms that induce feelings and behaviors in listeners. I will briefly explore each of these reasons below.

Limitations of the Current Study

Both of the studies presented in this chapter relied on self-report of experienced emotions and behavioral tendencies in response to music listening. A major limitation of the current studies, then, is that only conscious emotional feelings were reported.

Research suggests that some emotional experiences remain inaccessible to consciousness (Winkielman & Berridge, 2004). Future research should test emotional experiences arising from music listening using the methodologies of implicit emotion research.

Another limitation of the current studies is that all reports of pro-social or compassionate feelings were collected after the musical passages had ended; only downstream behavioral and motivational consequences were examined. Future research should examine emotional responses and behaviors while the music is playing.

In the first study presented in this chapter, listeners heard a melancholic work by Fauré and a grieving work by Arnold and Price. As mentioned above, the version of the Fauré work used here features a solo cello with piano accompaniment, while the Arnold and Price work is written for an orchestral ensemble. It is possible that the solo instrument in the Fauré work was more likely to be heard as an individual agent than the full orchestra in the Arnold and Price work, and therefore the Fauré work could have evoked more pro-social feelings. Future research should compare melancholic and grieving works that contain similar textures.

Post-hoc Analyses of the Experiential Data from Chapter 8

Recall that the function of melancholy is to increase rumination about life-station issues or to aid in reflection on a failed goal (Ekman, 1992), while the function of grief is to respond appropriately to experiences of loss (Epstein, 2019). In the study described in Chapter 8 on experienced emotion in response to melancholic and grieving music, one of the major findings was that there were more instances of Crying/Distraught/Turmoil (30 for grieving music, 13 for melancholic music) and Death/Loss (29 for grieving music, 18 for melancholic music) in response to grieving music, whereas there were more instances of Sad/Melancholy/Depressed (51 for melancholic music, 24 for grieving music) and Reflective/Nostalgic (38 for melancholic music, 6 for grieving music) associated with melancholic music. Post-hoc binomial tests reveal significant differences between the melancholic and grieving conditions for Crying/Distraught/Turmoil, Sad/Melancholy/Depressed, and Reflective/Nostalgic (all ps < 0.05). These findings are consistent with the idea that the function of grief is to solicit help from others in times of turmoil or loss, whereas the function of melancholy is to reflect on life-station issues.
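
These post-hoc comparisons can be checked directly from the category counts reported above. A minimal sketch testing each category against an even 50/50 split across the two music types; note that Death/Loss, which the text does not list as significant, indeed does not reach the 0.05 level under this test.

```python
# Post-hoc binomial tests on the free-response category counts reported
# above, testing each category against an even split between conditions.
from scipy.stats import binomtest

counts = {                                 # (grieving, melancholic)
    "Crying/Distraught/Turmoil": (30, 13),
    "Death/Loss":                (29, 18),
    "Sad/Melancholy/Depressed":  (24, 51),
    "Reflective/Nostalgic":      (6, 38),
}

for category, (grieving_n, melancholic_n) in counts.items():
    result = binomtest(grieving_n, n=grieving_n + melancholic_n, p=0.5)
    print(f"{category}: p = {result.pvalue:.3f}")
```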

In addition, recall that some of the responses to grieving music in the current chapter’s Study 1 included feelings of sympathy for a “crying or sad” person. These statements suggest that listeners may be perceiving distress in the music and recognizing characteristics of grief. Listeners may be experiencing the emotions expressed in the music through a process like emotional contagion. It seems plausible that the biological tendency to help a person in a state of grief may be quickly overridden in music-listening contexts, as the listener realizes that “it’s just music.”

Multiple Mechanisms

Although ethological signals and cues may represent one potential mechanism by which emotions are induced in listeners, it is certainly not the only mechanism that may lead to emotional experiences. For example, the BRECVEMA model of Juslin (2013) summarizes multiple pathways for emotion induction. The first mechanism, brain stem reflexes, refers to the brain’s innate responses to events that are surprising, loud, or accelerating (Juslin, Harmat, & Eerola, 2014). Second, musical rhythms are known to influence heart rate and breathing rates so that internal body rhythms match the musical beat (rhythmic entrainment), which in turn can cause feelings of connection to other listeners (Harrer & Harrer, 1977; Demos et al., 2012). Next, Juslin points to research about how unconscious, repeated associations of a piece of music with a particular emotional state can later lead to emotions being induced by the music alone, a process called evaluative conditioning. Additionally, many researchers support the process of emotional contagion, whereby the brain automatically responds to features of the music as if they came from a human voice (Juslin & Laukka, 2003).

Music may also evoke emotions in listeners by causing visual imagery through the use of musical metaphors (Kövecses, 2000; Lakoff & Johnson, 1980), through associations with memory like nostalgia and pride (Huron, 2009; Janata, Tomic, & Rakowski, 2007), and by confirming or denying a listener’s expectations about the music (Huron, 2006; Huron, 2009; Meyer, 1956). Finally, Juslin points to a listener’s aesthetic responses to music (Juslin, 2013b; Konečni, 2008). He claims that aesthetic judgments about the music evoke aesthetic emotions (e.g., awe, wonder), which function differently from the everyday emotions of the other seven BRECVEMA mechanisms. In summary, these eight processes, along with cognitive appraisal mechanisms (Scherer, 2004), represent other potential mechanisms that may work in parallel with ethological signals and cues.

If several mechanisms work in parallel to induce emotion in listeners, it makes sense that listeners may experience multiple or mixed emotions. Listeners may experience both positive and negative emotions in response to a musical passage. Research is consistent with the idea that listeners experience multiple kinds of emotions—positive and negative valence, as well as high and low arousal—in response to melancholic and grieving music (see Chapter 8). In short, the results of the current studies provide insight into the emotional experiences induced by multiple pathways. For example, listeners may feel compassionate after listening to grieving music because of the music’s status as an ethological signal, but listeners may also feel pleasurable sadness or tenderness from melancholic music because of personal associations or memories of the work in question. These distinct emotions may be conflated in the mind of the listener, even though the two positively-valenced affects arise from different mechanisms. Determining which emotion(s) result from a specific mechanism, like ethological signals and cues, would require advanced methodologies and statistical analyses; this task may not even be possible, given the limitations of self-report.

This chapter explored one potential theory—ethological signals and cues—that might explain the melancholy and grief results from Chapters 5-8. Two behavioral studies were conducted in order to test predictions associated with the theory; the results were not consistent with these predictions. The next chapter explores an alternative emotional theory—that of psychological construction and emotional granularity. No experiments were conducted to test hypotheses derived from this theory. Rather, the chapter serves as a call to action for the field of music and emotion and proposes future methodologies that I and other researchers can use.

Chapter 10: The Importance of Utilizing Emotional Granularity in Music and Emotion Research

Throughout this dissertation, I have argued that research on music and emotion would do well to investigate a wider selection of emotions, in terms of emotion expression, emotion perception, and emotion induction. The objective of Chapters 5-8 was to examine a single category of music-related emotion—sadness—in order to see if there was evidence consistent with the idea that more nuanced terminology could help explain inconsistencies in the music and emotion literature. In this chapter, I broaden the discussion of emotional terminology from musical sadness to music-related emotional states in general. The aim of the chapter is to propose that inconsistencies among studies of expressed and evoked emotion could be a result of poorly-defined emotional labels (e.g., failing to differentiate between varieties of sadness). I furthermore suggest that in order to better understand emotional responses to musical stimuli, we need to change the way we use emotional terminology.

This chapter explores the psychological phenomenon called emotional granularity, a term most often used in psychological construction theories of emotion (see Chapter 2). Theories of emotional granularity, which were popularized by Lisa Feldman Barrett, suggest that people describe emotional states with different levels of specificity. I will show that emotional granularity can be conceptualized in two ways—first, as a dispositional trait, and second, as a teachable skill. I will then explain how emotional granularity can be used to interpret the findings from the PUMS data and from the studies regarding melancholy and grief. Finally, I will discuss the implications of utilizing emotional granularity in the field of music and emotion, and recommend a training protocol that researchers can utilize in future studies.

What is Emotional Granularity?

Chapter 2 described the four major emotion models used by contemporary psychologists and musicians. Recall that one of these theories—psychological construction—posits that we evaluate our surroundings and stimuli using information from different domains, including previous experience, social roles, learned associations, and innate behaviors (Barrett, 2013; 2017). The brain is thought to generate predictions from this information and to update them based on the current situation, giving rise to new emotional experiences. Each emotion, according to psychological constructionists, is a result of how the brain processes and weights different pieces of information. The result of this cognitive construction is a unique emotional experience. Every instance of sadness, for example, is thought to be unique: a person can experience multiple forms of sadness, depending on the current situation. Even if similar events give rise to a state of sadness—like losing a job—the subjective feelings, neural activations, and physiological responses of the emotional experiences will be unique (Barrett, 2013; 2017). It seems, therefore, that rather than representing a discrete entity, a nominal emotion category (like sadness) represents a continuum of emotions that can vary in arousal, valence, and a number of other dimensions.

Because the theory of psychological construction suggests that there are no innate “basic” emotions, a new concept has been identified to explain the variety of emotional states that humans experience: emotional granularity. Broadly speaking, emotional granularity is the ability to specify one’s own and others’ particular emotional states with precision (e.g., nostalgia, tenderness), as opposed to more general valence-dominated terminology (e.g., I feel bad, I’m upset). Emotional granularity requires the ability to identify distinct emotional terms, physiological responses, and possible eliciting conditions that cause a subjective feeling state. In other words, it requires the ability to interpret nuanced feeling states and discriminate among them.

A parallel example regarding granularity can be taken from the ancient Greek language. During the time of epic poetry, blue seas and green grass were described with a single word: kyaneos. Despite a single categorical label for “blue/green,” the ancient Greeks were still able to perceive a continuous range of colors. The language, however, did not contain enough granularity to express the various perceived shades of color. In today’s world, the Hadza people of Tanzania use only three color terms, which correspond to the English black, white, and red. However, they are still able to differentiate among a wide range of colors in the same manner as Western speakers (Lindsey, Brown, Brainard, & Apicella, 2015). In the cases of both ancient Greek and Hadza speakers, language limits the expression of a continuous phenomenon.

Along with Barrett, I believe that the same problem exists in today’s study of emotions. People may perceive a wide variety of affective states in music, but they might not have conscious access to such states or appropriate language with which to describe them. This lack of vocabulary could be part of what gives music the ineffable quality so often mentioned in the literature.

Emotional granularity refers to two separate constructs: it is both a personality characteristic (some people inherently use more granular language to describe emotional states than others) and a skill that can be learned. Both of these definitions will be discussed in turn below.

Emotional Granularity as a Dispositional Trait

Adults are largely able to discriminate among emotion categories (Barrett, 2004). However, some people do not differentiate among emotional states like fear, anger, and sadness. These people may be considered to exhibit “low” dispositional emotional granularity. A person low in emotional granularity tends to explain how they are feeling by using general terminology, such as “I feel bad,” without referring to a specific emotional category. Individuals with low emotional granularity may also report feeling anxious and sad at the same time: they explain their emotional experiences in a global sense and only communicate general information about their emotional state (e.g., pleasure or displeasure) (Barrett, 2004; 2017). Other people use affective words with specificity and differentiate among feelings like stress, anxiety, and fear—these people may exhibit “high” dispositional emotional granularity. Adults with high emotional granularity tend to report feeling one specific emotion at a time, as they communicate their affective state in a more specific sense.

The difference between high and low emotional granularity was shown in Figure 1 of Barrett’s 2004 paper (reproduced below as Figure 10.1). In this figure, there are two representations of emotional granularity. The top, circular figure represents the way people high in emotional granularity conceptualize emotional states. Each emotional word exists in its own distinct space—it has its own “region of experience” (Barrett, 2004). In this case, a person’s emotional concepts correspond with the theoretical Circumplex Model, which suggests that people’s emotional states can be mapped onto a circle (Russell, 1980). The bottom, flat figure represents the way people low in emotional granularity conceptualize emotional states. In this figure, it is clear that there are few “regions of homogeneity and correspondingly fewer domains of experience” (Barrett, 2004). In other words, although those low in emotional granularity might report feeling satisfied or relaxed, they are merely using two separate words to represent the same experiential state.


Figure 10.1. High and low emotional granularity reproduced from Barrett (2004)

Related to this idea, research suggests that some people are high in valence focus, which means they primarily differentiate emotional states by their valence (Barrett, 2004). Valenced information refers to the intrinsic “pleasantness” (i.e., positive valence) or “unpleasantness” (i.e., negative valence) of something in the environment. For example, things that bring joy or happiness tend to have a positive valence, as they are inherently attractive. Stimuli that are fearful or sad tend to have a negative valence, as they are inherently aversive. Adults high in valence focus therefore might distinguish sadness from happiness, but not melancholy from grief.

Other adults are high in arousal focus, which means they primarily differentiate emotional states by their arousal (Barrett, 2004). Physiological arousal refers to how much energy a person feels in an emotional state. If an emotion exhibits high arousal— like excitement—the experiencer might feel an increase in blood pressure or begin to breathe faster. If a person is experiencing a low-arousal emotion—like boredom—he or she may feel symptoms similar to sleepiness. Individuals who are high in arousal focus might distinguish grief from melancholy, but not melancholy from tenderness.

According to Barrett (2004), a person’s emotional lexicon also determines which emotions he or she can experience. Recall that Barrett points out that English has fewer words for “anger” than German or Mandarin (Barrett, 2017). In order for an English language speaker to experience different kinds of anger, Barrett claims that a person first needs to learn the appropriate terminology. In other words, the acquisition of language is necessary for a person to construct new emotional experiences.

Developmental Research on Emotional Granularity

Work in developmental psychology is consistent with the idea of emotional granularity. Young children have limited vocabularies, especially regarding emotional words (Ridgeway, Waters, & Kuczaj, 1985; Widen & Russell, 2008). The development of emotional vocabulary begins early, with children as young as three years old expressing the ability to talk about emotion concepts (Ridgeway et al., 1985). However, research has indicated that children’s emotion concepts may be less well-formed than initially thought. When infants are born, they attend only to valenced information—they do not differentiate among discrete emotional categories, such as anger, sadness, and happiness (Vaish, Grossmann, & Woodward, 2008). By the age of 3 years, children understand that emotions are internal feelings and are distinct from their causes and resulting behaviors (Wellman, Harris, Banerjee, & Sinclair, 1995).

Research by Sherri Widen and James Russell (2008) suggests that children develop an understanding of various emotions at different ages. Initially, young children may have a broad understanding of emotion and differentiate only between feeling good and feeling bad—they attend to the valence of the emotion, but fail to differentiate emotions within a single valence. During this time, children may use specific emotional labels even though they have not yet developed an understanding of specific emotions (Bosacki & Moore, 2004). For example, children will use the term happiness to describe both happy and surprised faces.

Throughout the preschool years, these broad designations of feeling good or feeling bad narrow to specific emotional categories (Widen & Russell, 2008) (see Figure 10.2 below, which reproduces Figure 2 of the Widen and Russell paper). The understanding of happiness is thought to arise first in young children. Then, children learn either sadness or anger, but they use only one emotional term (either sadness or anger) to refer to both sad and angry states. Eventually, children learn to differentiate sadness and anger, using the word sad to describe sad states and the word anger to describe angry states. Subsequently, children begin to use either the word fear or surprise—they will use one of these two words to refer to both scary and surprising feelings. Within a few months, the child learns to separate these experiences and use both words appropriately. This pattern of emotion category development continues throughout childhood.

As these emotion concepts (e.g., happiness, sadness, anger, fear, surprise) narrow throughout the preschool years, new emotion concepts (e.g., disgust, pride, embarrassment) begin to develop later in the preschool years (Bosacki & Moore, 2004). Once again, these emotions are initially understood in a broad sense, but then narrow as a child approaches grade school.

By grade school (i.e., around 6 or 7 years old), children mostly understand all of these emotion concepts as separate emotions (Widen, 2013). However, even adolescents between 12 and 18 years old do not have fully-developed concepts of emotion categories. For example, many adolescents respond to fearful and angry vignettes with sad words (O’Kearney & Dadds, 2004). The opposite is also true: adolescents use anger words to describe sad states up to 25% of the time (Bazhydai, Ivcevic, Brackett, & Widen, 2018). Rather than people being born with distinct abilities to discriminate or recognize emotion categories in vocal and facial expressions, then, emotion concepts are constantly evolving and are not fully mature until as late as 18 years old (Bazhydai et al., 2018).

Figure 10.2. The Differentiation Model from Widen & Russell (2008)

Emotional Granularity as a Learned Skill

An important implication of the developmental work presented above is that children become more emotionally granular with time. Just as children can learn to distinguish among emotion categories, adults may also learn to become more emotionally granular, no matter what level of dispositional emotional granularity they exhibit (Barrett, 2004). Research on emotional granularity as a teachable skill is currently in progress.

I believe that the real potential for future emotion-related work depends on how emotional granularity can be explored as a skill. If a person can learn to become more emotionally granular, it is theoretically possible to train participants on emotional distinctions before conducting an experiment. As discussed in earlier chapters, I believe that participants may be able to perceive more emotions in music than previous research has suggested (e.g., Juslin, 2013). The reason that people do not always agree on what represents an anxious musical passage, for example, could be that the term anxious is defined in different ways by the researchers and the participants. In other words, people may disagree about what “anxiety” means. It is therefore important to train researchers and participants on the definitions of certain emotion words. By implementing a standard training procedure before conducting an experiment, the limits of which emotions can be perceived and induced in music will be better understood. This idea will be discussed further later in this chapter.

Applying Emotional Granularity to Music

Recall that in the analysis of the PUMS database (Chapter 4), I found that researchers have investigated 114 emotional states related to music. However, I additionally found that happy and sad stimuli comprise over 42% of all PUMS stimuli and that 105 of the 114 emotions were each studied less than 1% of the time. In addition to the findings presented in Chapter 4, other research suggests that the study of perceived emotions tends to rely on a handful of common emotional terms, most often happiness, sadness, fear, anger, and tenderness (see Juslin, 2013 for a review).

While these findings may initially seem to be in line with discrete or basic models of emotion, I believe that the results can also be interpreted through the lens of psychological construction. I suggest that these few emotional terms (e.g., happy, sad) have been used to describe an unmanageably large space of emotion. Namely, I believe that there are vast differences among musical passages that have been labeled as happy or as sad. In the PUMS database, Albinoni’s Adagio in G Minor and Metallica’s Fade to Black are both marked as sad musical works. A cursory aural analysis of these two works shows markedly different trends: although both are in the minor mode, the different instrumentations, tempi, and inclusion of vocals might suggest different emotional states. Should both of these works be included in the same category of music?

The labeling of emotions in a broad and imprecise manner, such as simply referring to music as happy or sad, can be related to the idea of low emotional granularity. The typical participant base for studies on music and emotion is a group of college students who are unlikely to be experts in emotion. Compared to trained emotion researchers, these participants are likely to exhibit comparatively low emotional granularity. Study participants may not know the differences among joyful, elated, pleasant, peaceful, tender, and lively states. Presented with a musical sample, these listeners may simply reply that the music sounds happy. With training, however, a participant may be able to apply more specific terminology to the emotional states they are perceiving or feeling. In a similar manner, rather than simply relying on the word sadness to refer to a wide range of emotional states, participants may learn to categorize musical passages as melancholic, grieving, anxious, nostalgic, and sorrowful.

The results of Chapters 5-8 are consistent with the idea that participants can delineate more emotionally-granular affective states in response to music listening. These studies showed that participants can differentiate between melancholic and grieving emotional states, both in terms of which emotion(s) a musical passage is expressing and in terms of which emotion(s) they feel in response to the music. Namely, rather than relying on a single broad category of sad music, participants identified multiple “sad” states (i.e., melancholy and grief). The next step, of course, is to examine how many “sad” emotions there are in music. It is unlikely that melancholy and grief are the only two sad states that can be differentiated in music. By teaching participants to be more emotionally granular, future research may find that listeners can perceive five or ten different states in music that has previously been labeled as sad.

In summary, the results from the PUMS database and the studies on melancholy and grief suggest that the field of music cognition can benefit from utilizing more emotionally-granular terms when investigating music-related emotion.

Implications

There are initial studies in other domains of emotion research that have investigated wide selections of emotional categories (e.g., Cowen & Keltner, 2017). Aspects of such studies should also be applied to studies on music and emotion. In this section, I will discuss some of the implications of emotional granularity for music studies, including the study of perceived emotions, ways to control for dispositional emotional granularity, theoretical underpinnings of emotional granularity, potential impacts on reviews or meta-analyses, and the potential to apply emotional granularity to emotions other than sadness.

Which Emotions Can Music Express?

Individual differences in emotional granularity (including developmental differences among children) will affect how people perceive emotions in response to music. Children and those low in emotional granularity may be limited in which affects they can perceive in emotional music samples. Those exhibiting low emotional granularity (including children) should theoretically not be able to distinguish among certain emotion classes. For example, if these individuals are high in valence focus, they may combine all negative emotions together, such as melancholy and grief. These individuals should theoretically not be able to recognize or experience these emotions differently, because both melancholy and grief will convey the same type of information to the listener. Furthermore, people low in emotional granularity may be more likely to confuse certain musical affects, like relaxation and melancholy. In contrast, high emotional differentiators may be less likely to confuse musical affects, like melancholy and tenderness.

If an experiment using forced-choice, same/different, or free-labeling paradigms finds that listeners cannot differentiate between, say, melancholic and grieving music (or happy and peaceful music, etc.), there are two possible interpretations. The common interpretation is that the music can only convey sadness—listeners cannot differentiate melancholic and grieving emotions because they are categorizing the cues of both music types as sad. The second interpretation is that, although the music contains separable melancholic and grieving cues that are differentially perceived by those low in emotional granularity, the emotional experiences of these two music types are afforded the same label by the brain. Namely, those low in emotional granularity may understand that the melancholic and grieving musics are different, and yet interpret these differences as belonging to one type of affect (sad). If this is the case, the interpretation that people cannot differentiate cues of musical melancholy from cues of musical grief is incorrect: people may understand the differences in the musical structure, but may refer to both of these emotion categories with a single word (sad).

Relatedly, developmental studies such as Stachó and colleagues (2013) found that some children could not differentiate music-related sadness from anger. The researchers concluded that children at this age are too young to understand the differences between the two types of musical stimuli. However, it is also possible that the children did hear differences between musical sadness and musical anger: although they may understand differences in the musical structure between these two emotional classes, the children may simply use the same word to describe both sadness and anger. This interpretation is consistent with the Differentiation Model of Widen and Russell (2008), presented above.

Controlling for Differences in Participant Emotional Granularity

In music cognition research, the use of emotional labels has only recently begun to be investigated (Kantor-Martynuska & Bigand, 2013; Kantor-Martynuska & Horabik, 2015). In a series of tasks, these researchers assessed whether different groups of people would perceive (or feel) more granularity amongst musical selections designed to express (or induce) musical emotions. In both of these studies, granularity was operationalized as the number of emotional categories a person perceived (or experienced) in a set of musical stimuli, using either a free categorization paradigm or a same/different paradigm. Namely, a person who listened to 27 excerpts of music and made 8 groups of emotional music would be considered to exhibit more emotional granularity than a person who listened to the same 27 excerpts and made 3 groups. The researchers found that people with more musical training exhibited more emotional granularity than did people with less musical training. This finding suggests that musicians may have more fine-grained emotional experiences in response to music. Both of these studies examined emotional granularity as a kind of individual characteristic that can be attributed to a participant’s status as a musician or non-musician. This type of research can be followed up in the future by measuring a person’s dispositional level of emotional granularity before conducting an emotion-related experiment.

Since we might expect differences in how people describe their emotional experiences, due to differences in trait emotional granularity among other factors, a method should aim to address these potential confounds. Unfortunately, the measurement of a person’s baseline emotional granularity takes months to complete and involves complicated statistical analyses (Barrett, 2004). It may be possible, however, to employ a proxy for emotional granularity in future studies. For example, researchers could ask participants to classify a number of emotion words into as many categories as they see fit; to select (from a list) how many emotions they tend to feel over the course of a typical week; or to identify how many emotional states they tend to feel at one time. Future researchers should aim to control for baseline emotional granularity in the same way that they aim to control for other personality characteristics like Openness or trait empathy.
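
Such a proxy could be scored very simply. As a minimal sketch of the first suggestion, the function below counts the number of distinct categories a participant forms when free-sorting emotion words; the sorting shown is a hypothetical example, not data from any study.

```python
# Sketch of a simple emotional-granularity proxy: the number of distinct
# groups a participant forms when free-sorting emotion words. The example
# sorting is hypothetical, not data from any study.
def granularity_score(sorting):
    """Count the distinct group labels used in a word -> group mapping."""
    return len(set(sorting.values()))

participant_sort = {
    "melancholy": "A", "grief": "A", "nostalgia": "B",
    "tenderness": "B", "joy": "C", "elation": "C",
}
print(granularity_score(participant_sort))  # -> 3
```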

Theoretical Underpinnings

So far, the idea of emotional granularity has been discussed with regard to the theory of psychological construction, as the main advocate of emotional granularity, Lisa Feldman Barrett, is a proponent of psychological construction. The idea of emotional granularity, however, could also apply to basic emotion or appraisal theories, as well as any combination of emotional theory categories. Basic emotions scholars, like Huron and Juslin, may use concepts of emotional granularity to posit the existence of tens or hundreds of basic emotions. The distinction between melancholy and grief, for example, may correspond to two basic emotions, each with distinctive neurological patterns and physiological responses. Appraisal theorists, like Scherer, can use emotionally-granular semantic labels as a way to further describe the cognitive appraisal process. Finally, psychological construction theorists, like myself, can use emotional granularity to help understand how an emotion may be constructed (refer back to Figure 2.2 in Chapter 2 to see how I have done this).

Recall that the theory of psychological construction states that there is not a single “fingerprint” of an emotion—whether it is sadness, fear, or any other emotion. In the second part of this dissertation, I discussed the division of sadness into two parts (melancholy and grief). This division, however, is not a solution to the “fingerprint” problem. According to psychological construction theories, there are endless possibilities that can give rise to feelings of melancholy or grief. Each instance of a musical emotion, for example, is colored by our past experiences, the context in which we experience it, and our enculturation. The process of applying more emotional granularity to music-related emotions, therefore, represents only the first step toward discovering how emotions are constructed from music listening.

In response to this claim of psychological constructionists, a basic emotions scholar may counter that, although everyone may have slightly different experiences listening to a particular musical passage, these experiences may tend to cluster around certain attractors. Namely, people will have similar—but not identical—responses to particular musical passages in the same way that people can observe similar shades of color that may all be labeled as "red." In other words, the claim that each person has a unique emotional experience does not, by itself, mean that there are no viable categories of emotion.

In the future, theories of music and emotion should address the specific functions served by music, both now and during human evolution. It is certainly important to clarify the theoretical underpinnings of why we should expect there to be a wide selection of emotions expressed and induced by music. Such theories should ideally be grounded in both biology and social learning. For example, future work should further investigate the role of animacy in musical works. From an evolutionary standpoint, it is important to infer agency from animate objects in the environment because these animate agents may affect you. A person intrinsically seeks an understanding of what animate agents are feeling and thinking. It is possible that music provides animacy cues, inadvertently, to which humans respond. Earlier work suggests that music-related movement and voice-like features contribute to perceptions of animacy in music (e.g., Broze, 2013).

Second, although the results of Chapter 9 were not consistent with the idea that melancholy and grief act as ethological cues and signals, future work may want to investigate whether there are other emotions that may correspond to signals and cues. Ethology provides a potential biological explanation for various emotional states, as signals are thought to be hardwired neurologically. Finally, future research should investigate the role of social learning as related to musical emotions. Social learning may lead to different kinds of emotional states when listening to music, such as feelings of love and transcendence.

Reviews or Meta-Analyses

The particular emotional terms that researchers choose to describe musical stimuli are important for theoretical and practical reasons. It is natural to compare musical passages that represent or evoke the same affective category. Musical passages that express or elicit sadness, for example, may impact a listener using similar mechanisms. These sad stimuli may additionally share similar music-theoretic or structural characteristics. If musical passages with different characteristics are summarized using a single term, however, problems may arise when comparing statistics across stimuli of this class. A lack of emotional granularity therefore has implications for any review or meta-analysis on music and emotion. An analysis that combines, say, melancholic and grieving passages together will likely produce inconclusive results. Since melancholic music tends to be quiet, contain small interval sizes, and be low in register, while grieving music tends to be loud, contain wide leaps, and be high in register, an analysis that combines equal numbers of melancholic and grieving musical passages will summarize this body of music as containing medium dynamics, medium interval sizes, and medium registers. In essence, the melancholic and grieving effects will "cancel each other out." By applying more precise emotional terminology to the musical affective space, we can help reduce this type of problem in future research.
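A toy computation makes this cancellation concrete (the feature values below are invented for illustration, not measurements):

    # Invented feature values for two "sad" subtypes; pooling them under one
    # label yields a medium profile that matches neither subtype.
    melancholic = {"dynamic_level_dB": 55, "mean_interval_semitones": 2, "mean_register_midi": 48}
    grieving    = {"dynamic_level_dB": 80, "mean_interval_semitones": 9, "mean_register_midi": 72}

    pooled_sad = {k: (melancholic[k] + grieving[k]) / 2 for k in melancholic}
    print(pooled_sad)
    # {'dynamic_level_dB': 67.5, 'mean_interval_semitones': 5.5, 'mean_register_midi': 60.0}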

Past researchers may have taken emotionally-granular states, with distinctive arousal and valence spaces (like the circular shape in Figure 10.1, page 218), and instead described these states with less emotionally-granular terms (like the elliptical shape in Figure 10.1). Recall that in Juslin and Laukka's (2003) meta-analysis of perceived emotion in music, some of the words used to define sad music included the following list of terms: crying despair, depressed-sad, depression, despair, gloomy-tired, grief, quiet sorrow, sad, sad-depressed, sadness, sadness-grief-crying, and sorrow. Although these words could correspond to different physiological and subjective feeling states, the researchers combined data from studies analyzing these disparate terms to look for overall trends of sad music. As discussed earlier, the synthesis of these musical works may lead to misleading analytical findings. If the meta-analysis contained relatively few passages exhibiting crying despair and grief, but many passages exhibiting quiet sorrow and gloomy-tired, for example, the features that contribute to crying despair and grief may have been attributed to statistical noise and ignored.

Other Emotions

The difference I found between melancholic and grieving music may be a symptom of a broader problem; in other words, many emotional states may be conflated in music. The case of melancholy and grief is but one example of how more emotional granularity can be applied to music. I believe that music and emotion studies are currently semantically underdetermined: the words researchers are using to classify emotional concepts in music do not give a listener enough information to understand what is happening in the music on a more nuanced level. I believe that by using more nuanced emotional terms, we will come to learn more about music's structures and associated emotional states.


I have shown that music research has conflated multiple emotional states with the term sadness. A similar conflation may apply to music-related fear. Research outside of the musical context suggests that fear consists of (at least) two types of emotion: anxiety and panic (Ji & Maren, 2007; Mobbs et al., 2007). According to this research, anxiety and panic are activated by two distinctive kinds of stimuli. If you see a bear on the horizon, for example, the bear is far away from you and is not an immediate threat. This type of occurrence is thought to activate anxiety, which is processed in prefrontal areas of the cortex. If the bear appears a few feet away from you, however, you will experience sheer terror or panic. This panic is processed in the brainstem region known as the periaqueductal gray matter. Based on these definitional differences, it is possible that this distinction translates to music as well, so that nominally fear-related music actually consists of anxious and panicked music, each of which may have distinctive musical features. Anxious music may contain features like low tremolos and drones, whereas panicked music may contain dramatic crescendos, abrupt changes in tempo, new or unexpected harmonies, and sudden changes of texture. A conjecture that follows is that panicked music—but not anxious music—may result in frisson experiences (Huron, 2006). This type of research is currently being conducted by Caitlyn Trevor, who is investigating scary film music using Topic Theory, theories of ethology, behavioral studies, and neuroimaging methodologies (Trevor, 2018).

Training Participants and Researchers to Become More Emotionally Granular

The utilization of more distinct and nuanced ("emotionally granular") terms may benefit the music field as a whole. Pioneers of music cognition, such as Kate Hevner and Malcolm Rigg, found that musical expression of emotion can be broad and imprecise, such that different listeners do not always agree on more exact emotion characterizations.

Another possibility, however, is that the current definitions of music-related emotions do not provide enough information to describe exactly what is experienced subjectively or semantically. Without clarification, listeners could interpret an emotional word, like melancholy, in different ways. If some participants believe melancholy to mean desolation or despair and others believe melancholy to mean downhearted or glum, it would make sense for different listeners to expect this music to contain disparate features.

It is paramount, then, that researchers explain what they mean by a possibly nebulous term, such as melancholy. Of course, it is also possible that the music itself may be emotionally ambiguous. As a temporal art, music often changes dramatically (or subtly) from moment to moment. These musical changes may represent (or elicit) different emotions or a combination of emotional states. Within a single passage of music, there are also different textural streams, like a solo instrument and its accompaniment. As mentioned in Chapter 6, each of these textural streams may express different emotions.

Future research should directly investigate how these factors impact the perceived and evoked emotions from music listening.

An important conclusion of the dissertation is that the study of music and emotions is currently semantically underdetermined. In other words, the definitions of music-related emotions do not provide enough information to tell you exactly what is experienced subjectively. I propose that the field agrees to explicitly define more emotional terms for use with participants and across researchers. With training, a person may be able to learn to apply more specific terminology to better represent the emotional state they are perceiving or feeling. The purpose of this section is to suggest ways that future researchers can incorporate more emotionally-granular terms into their work.

As discussed above, fear and sadness can each be separated into more specific emotional terms on biological grounds. Emotional granularity, however, can also apply to emotional states for which there is less biological evidence. Emotional granularity refers to the precision of emotional language: the terms need to be specific and accurate. To apply emotional granularity to music, then, we must first acquire an emotionally-granular vocabulary in non-aesthetic contexts. Emotional granularity can be learned through the simple process of studying the definitions of many emotions (Barrett, 2004). For example, people may experience a core affect that is high in arousal and positive in valence. Their heart may be beating fast and they may be smiling broadly. Whereas many people may simply refer to this as being in a good mood, those with high emotional granularity may call this state elated or jubilant. To apply this idea to music, then, a first step would be to look up the definitions of a number of emotions. One may then interpret the level of arousal and the type of valence that each emotion might exhibit. Then, using information from the ethological and psychological literature, a researcher could investigate which types of behaviors and subjective feelings would relate to these affective states in non-aesthetic contexts. Next, the researcher could relate these behaviors and feeling states to musical correlates (e.g., dynamics, articulation). Finally, the scholar could examine musical passages that may contain these features, as prototypes of this emotion.
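This chain of steps could be sketched as a simple lookup, with the caveat that every entry below is an illustrative guess rather than a finding of this dissertation:

    # Steps 1-2: granular terms mapped to assumed arousal/valence profiles.
    granular_emotions = {
        "elated":      {"valence": "positive", "arousal": "high"},
        "jubilant":    {"valence": "positive", "arousal": "high"},
        "downhearted": {"valence": "negative", "arousal": "low"},
    }

    # Steps 3-4: assumed behaviors and feelings related to plausible musical
    # correlates for each (valence, arousal) profile.
    musical_correlates = {
        ("positive", "high"): {"tempo": "fast", "dynamics": "loud", "articulation": "detached"},
        ("negative", "low"):  {"tempo": "slow", "dynamics": "quiet", "articulation": "legato"},
    }

    def candidate_features(emotion):
        """Map an emotionally-granular term to candidate musical features."""
        profile = granular_emotions[emotion]
        return musical_correlates[(profile["valence"], profile["arousal"])]

    # Step 5 would examine passages containing these features as prototypes.
    print(candidate_features("elated"))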

This process can also work in reverse, where a person listens to a piece of music, examines the musical features involved, relates these features to everyday emotional behaviors, and then picks the appropriate emotion from a list of emotionally-granular terms.

Once the researcher has compiled a list of emotionally-granular terms, it becomes important to communicate the definitions of these words to participants. It is possible to explain emotional vocabulary without referring to any features of speech, music, or natural sounds, reducing potential confirmation bias. In Chapter 5, I defined melancholy and grief for participants by showing them facial expressions taken from the NimStim database (Tottenham et al., 2009). By defining an emotion for a participant with words, pictures, or facial expressions, it may be possible for participants to distinguish among more nuanced emotional states, rather than referring simply to "common" emotions, like happiness and sadness. Furthermore, conducting interviews with participants to examine which emotion terms they naturally use when listening to music may help researchers decide which emotional terms to investigate first.

Researchers may also use music theory analysis or participant responses to determine whether there are structural features in the music that may correspond to nuanced emotional states. For example, if participants cite some musical stimuli as exhibiting melancholy and others as exhibiting grief, researchers could investigate these musical passages to determine whether there are structural differences between the melancholic and grieving conditions (see Chapters 5 and 6 for a test of this conjecture).

The main takeaway of this section is that it is important to train researchers and participants on emotionally-granular categories before conducting a research study.

Through the process of training, researchers can ensure that the results of their studies are comparable with those of other researchers. Additionally, training reduces the likelihood that listeners will use different definitions for the same emotional word. Training, therefore, makes the comparison of responses across participants more valid. In a similar manner, training also reduces some of the unreliability associated with participants' introspection. Most importantly, with training, we may be better able to refine our understanding of which emotions can accurately be perceived and induced by music listening.

Conclusions

This chapter explained the importance of utilizing emotional granularity in music research. I explained that emotional granularity is used to define two separate constructs, as it is both a personality characteristic and a skill. I further discussed the implications of relying on a limited set of emotion words to describe music-related affect. Although the reliance of early studies on a small set of emotion words is likely due to the influence of basic emotions theory (with its focus on a small number of biologically-grounded emotions), I showed that using more emotionally-granular terms can benefit research operating from the perspectives of basic emotions, appraisal, and psychological construction theories of emotion. Whether or not musically-expressed emotions are perceived as categorical or continuous, we must make use of more precise terms in labeling emotions in the musical domain. In order to test for empirical support of discrete or dimensional theories, we must first introduce some aspects of emotional granularity into the field.

The final objective of this chapter was to advocate for a training program that researchers and participants can use to acquire a reasonable degree of emotional granularity. As people's emotional granularity increases, we may find that listeners differentially perceive and experience panic and anxiety, or happiness and tenderness, in the same way that I have shown that listeners can differentially perceive and experience melancholy and grief. A successful training program should result in agreement among participants and researchers regarding the definitions of various emotional states, as well as where these emotional states should lie on a map like the Affect Grid.

Regardless of the psychological framework one chooses to adopt, it is clear that the emotion literature writ large has turned away from the idea that there are only a few emotional states (e.g., Barrett, 2017; Cowen & Keltner, 2017). Rather, current scholars recognize the complexity of emotions, whether they are expressed and elicited through an art form, like music, or through personal situations. The time has come for the music cognition literature to follow suit and begin to demystify the emotions expressed and evoked in music. The first step to do this is to break down large categories of music into smaller categories that can be defined with differing musical structures and syntaxes. The hope of the current chapter is to inspire other music scholars to begin adopting a wider vocabulary to describe music-related emotions. Through careful analysis of the musical structure, along with perceptual and cognitive experiments, the field of music and emotion may be redefined by recognizing tens or even hundreds of emotions related to music. This chapter represents an important first step, but only that first step; the future remains bright and open for those studying music and emotion to take a closer look at how our field is defined.


Chapter 11: Conclusions

This dissertation explored the kinds of emotion that are related to music, using a tripartite approach. The first objective of the dissertation (Chapters 2-4) was to investigate how other researchers have categorized music-related emotions in the past, including the types of words they have used to describe musical passages and how they have operationalized these emotional terms. Chapter 2 presents a literature review integrating psychological and music research in order to examine what is meant by the word emotion. In this chapter, I showed that researchers do not agree on what constitutes an emotion and advocate for disparate methodologies, conclusions, and future directions of the field. Four major models of emotion—basic emotions theory, appraisal theory, psychological construction theory, and social construction theory—were examined in depth and a comparative chart enumerating philosophical differences among these four theories was presented. Additionally, I directly compared theories of music-related emotion with psychological theories of emotion. This comparison resulted in the understanding that many music researchers align with the basic emotions theory, although the theory of psychological construction is gaining traction. Finally, I proposed a preliminary model of emotion that researchers can build on in the future.

Chapters 3 and 4 also exhibited a wide theoretical focus, but the emphasis of these chapters was to explore emotion-related music. Chapter 3 describes the creation of the PUMS corpus, a publicly available database that lists the 22,417 emotional music stimuli used in 306 studies, along with summary features of each stimulus (date, emotion/mood/arousal, composer, passage name, track number, minute and second durations or measure numbers, the length of the sample, its status as an excerpt or full work, its utilization in a study of induced or perceived emotion, listener familiarity with the stimulus, and how the emotion was operationalized by the researchers). Chapter 4 uses the PUMS database to create a second literature review, this time regarding the way researchers have utilized emotion-related musical stimuli from 1928 to 2018. Descriptive statistics were presented for each coded category (e.g., sample length), as well as interactions between categories (e.g., how sample length varies across studies of induced versus perceived emotion). A longitudinal approach was also taken, revealing trends across the decades of study. The main conclusions of these two chapters were that researchers have inconsistently utilized emotional terminology and that this variable approach may account for some of the inconclusive results from previous reviews.
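For concreteness, a per-stimulus record of this kind might be represented as follows (the field names and example values are mine; the published PUMS column names may differ):

    from dataclasses import dataclass

    @dataclass
    class StimulusRecord:
        study_year: int
        emotion_label: str        # emotion / mood / arousal term used
        composer: str
        passage_name: str
        sample_length_s: float
        is_excerpt: bool          # excerpt (True) or full work (False)
        locus: str                # "induced" or "perceived"
        listener_familiarity: str
        operationalization: str   # how the researchers operationalized the emotion

    # Invented example record for illustration only.
    example = StimulusRecord(1999, "sad", "Barber", "Adagio for Strings",
                             30.0, True, "induced", "familiar", "composer intent")
    print(example.emotion_label, example.locus)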

The second objective of the dissertation (Chapters 5-8) was to provide an in-depth analysis of sad music. Through a series of studies, I tested whether nominally sad music can be better explained through the use of two terms, melancholy and grief. Specifically, I examined whether the distinction in the psychological research between melancholy and grief was relevant in the music literature. I hypothesized that musical melancholy and grief may contain different structural features and that listeners may perceive and experience different emotions in response to these passages. Chapter 5 describes the hypothesized differences between melancholic and grieving music and solicits suggestions of melancholic and grieving musical passages from participants with three levels of training (untrained participants, participants trained with facial expressions of melancholy and grief, and experts in emotional theories of psychology and music). Around 400 musical passages were collected, some of which were utilized in the ensuing chapters. Chapter 6 described an exploration of the musical structure of melancholic and grieving passages, using highly skilled musicians to rate 18 musical parameters in a collection of passages.

The results were consistent with the idea that quiet dynamics, low pitch, and narrow pitch intervals predict music that is melancholic in nature, whereas sustained tones, gliding pitches, and harsh timbres predict music that has been previously categorized as grieving.

The next two chapters described sets of behavioral studies that asked listeners which emotions they perceived and experienced from melancholic and grieving musical passages. Chapter 7 focused on how music represents melancholy and grief using two methodological designs: a five-alternative forced-choice paradigm and the selection of emotion(s) from a list provided by the experimenter. The emphasis of Chapter 8 was on exploring how listeners felt in response to melancholic and grieving stimuli. Again, two correlational studies were conducted; the first design employed an experimenter-provided emotion list from which listeners could select emotion(s), while the second design asked participants to spontaneously describe their experiences while listening to the music. The results presented in Chapters 7 and 8 are consistent with the idea that listeners both perceive and experience different emotions in response to music-related melancholy and grief.

Ultimately, the goal of the second part of the dissertation was to show that we cannot understand music's influence on emotions by simply studying a few emotional states, such as happiness and sadness; instead, we may need to make subtle distinctions previously unrecognized in the music field.

This idea was elaborated in the third part of the dissertation, which returned to a wide theoretical focus after the narrower focus of Chapters 5-8. The objective of this third part was to explore two alternative theories that could explain the results of the second part of the dissertation. The first of these theories (described in Chapter 9) was David Huron's idea that melancholy functions as an ethological cue, whereas grief functions as an ethological signal. None of the hypotheses tested in this chapter were consistent with this theory. The second theory (described in Chapter 10) was Lisa Feldman Barrett's idea of emotional granularity—a concept that is drawn from the theory of psychological construction. Although no hypotheses were formally tested, I explained how future researchers might train their participants to be more emotionally granular, which could help clarify our understanding of which emotions can be expressed and elicited by music listening.

In this dissertation, I approached music and emotion research through the lenses of music theory, music cognition and perception, psychology, and ethology. I incorporated a diverse set of methodologies, including literature reviews, a model of emotion, a database of musical stimuli, various behavioral studies, and multiple theoretical perspectives. Despite the variety of methods presented, there are several limitations to the current work that must be acknowledged. These limitations can help elucidate directions for future research.


Limitations

In order to conduct any empirical research, it is necessary to rely on assumptions and to make use of methods that have limitations. Some of the most important limitations are listed below, in order to inspire future work.

The dissertation began with a detailed discussion of emotion theory, including the distinction between basic emotions theory and the theory of psychological construction. Although I believe that the existing research is more consistent with the theory of psychological construction than with basic emotions theory, the findings of the chapters on melancholy and grief could be interpreted as supporting either of these two major theories. In other words, melancholy and grief could represent two universal, distinct emotions with different phylogenetic origins (as suggested by basic emotions theorists) or two separate points on a possible continuum of sad emotions (as suggested by psychological constructionists). In order to compare these two theories empirically, one would need to use many more musical samples, examine examples of music that contain features of both melancholy and grief, utilize machine learning techniques, and require participants to listen to the same musical passages in different contexts.

Although the PUMS corpus summarizes features of stimuli from over 300 studies, there are many more studies on music and emotion that were not included in this database. It is possible that the PUMS database is unintentionally biased or does not accurately represent the field of music and emotion writ large. An especially important point is that the PUMS database was limited to English-language studies. Future research should supplement the corpus with articles from other languages and also increase the number of stimuli from other cultures. Since the PUMS database is a publicly-available and online resource, the potential for collaboration and crowd-sourcing is promising.

Another limitation of the dissertation is the focus on sad music. Although the narrow focus on sad affective states represents an important first step in the investigation of more granular emotions, the ability to generalize beyond music-related sadness is hindered. As will be discussed further below, an important avenue for future research is to investigate whether the findings presented here generalize to other emotion families, such as happiness or fear.

A related limitation of the dissertation is that it only tests the difference between two types of sad music—melancholy and grief. As mentioned throughout the dissertation, there have been many other descriptions of sad music, including terms like crying despair, comforting sorrow, and sublime sorrow (e.g., Juslin & Laukka, 2003; Laukka, Eerola, Thingujam, Yamasaki, & Beller, 2013; Peltola & Eerola, 2016; Quinto, Thompson, & Taylor, 2014). Additionally, there are a variety of emotions that listeners experience when listening to sad music (e.g., Eerola, Vuoskoski, Peltola, Putkinen, & Schäfer, 2017; Peltola & Eerola, 2016; Taruffi & Koelsch, 2014; van den Tol, 2016). Given this body of research, it is likely that there are more kinds of music-related sadness than just melancholy and grief. The aim of the dissertation was simply to determine whether or not there is more than one type of sad music. An important next step, then, is to determine how many types of sad music might exist. It is possible that there are five or ten types of sad music. It is also possible that sad music may exist on a continuum with an infinite number of categories. This idea will be addressed in the Recommendations section.

Next, the results of the studies regarding melancholy and grief are all correlational in nature and thus cannot support claims about causal mechanisms. In order to infer causality, an experimental manipulation would be needed.

Finally, in order to compare results across chapters, a small number of melancholic and grieving stimuli were utilized. It will be important for future researchers to replicate the behavioral studies with other musical passages, in order to test whether the current findings are limited to the stimuli used here.

Recommendations for Future Research

Based on the limitations presented above, I end with several recommendations for future research practices. The first set of recommendations draws on the findings of the first part of the dissertation. Given the creation of the PUMS database and the subsequent literature review of musical stimuli, I believe that the field can benefit from the following practices.

Recommendation 1: Researchers Should Report Which Musical Stimuli They Are Using with Specificity.

The first recommendation is for researchers to provide information about which musical stimuli they use in their studies. This should include measure numbers or recording times, if possible. When constructing the PUMS database, there were many times when I could not find any reference to the stimuli that were used in a study.


Many other times, the researchers provided the name of the musical work, but failed to provide information about the specific excerpt that was used in the study. By specifying the exact passages of music that were used in the research process, it will become possible to replicate studies and examine features of specific passages in future research. Moreover, other scholars can examine the musical passages in ways not intended by the original researchers. For example, if scholars used an excerpt of Prokofiev's Russia Under the Mongolian Yoke to induce depression, future researchers can examine this passage for structural features that could lead to increased depression, such as harmonic function and melodic contour.

Recommendation 2: Do Not Rely Solely on the Stimuli of Past Studies.

The second recommendation is to avoid total reliance on the stimuli of previous studies. In analyzing the PUMS database, I discovered that 37% of the stimuli used were chosen because they had been used in previous studies and that this trend has increased across the decades of music and emotion research. Sampling theory suggests that a single randomly-chosen sample may not truly represent the “population” from which the sample was chosen. In other words, although stimuli were chosen carefully in past studies, it is possible that these stimuli are unintentionally biased in some way. The musical samples used in a single study, while well chosen, may not truly reflect the population of musical works under investigation. By relying on the same sample multiple times, it is possible that the results may preserve some of the bias from the random sample. The more random samples that are used, the more the collective results will reflect general musical trends.

One option is to utilize the PUMS database. Rather than simply using the small set of stimuli from a single study, researchers could use selections across a wider variety of studies. In other words, the PUMS database can be a guide to making selections for future stimuli. For example, the PUMS corpus contains information regarding 2,013 sad musical passages. Researchers could choose sad stimuli from different studies listed in the PUMS, hopefully reducing any potential bias that could arise due to reusing the stimuli from a single musical study.
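A sketch of this breadth-first selection strategy, assuming a simplified record layout rather than the real PUMS schema:

    import random

    # Hypothetical records of sad stimuli drawn from several studies.
    pums_sad = [
        {"study": "Study A", "passage": "passage 1"},
        {"study": "Study A", "passage": "passage 2"},
        {"study": "Study B", "passage": "passage 3"},
        {"study": "Study C", "passage": "passage 4"},
    ]

    def sample_across_studies(records, n_stimuli):
        """Prefer breadth: draw at most one stimulus per source study."""
        by_study = {}
        for record in records:
            by_study.setdefault(record["study"], []).append(record)
        picks = [random.choice(group) for group in by_study.values()]
        random.shuffle(picks)
        return picks[:n_stimuli]

    print(sample_across_studies(pums_sad, 3))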

Recommendation 3: Induced Emotion Studies Should Rely on Longer Musical Samples.

In a recent review, Tuomas Eerola and Jonna Vuoskoski (2013) recommended that studies examining induced emotion should utilize stimuli that are around 30 seconds in duration or longer. However, 50% of the musical stimuli used in the PUMS studies of induced emotion were under 30 seconds. While the remaining 50% of stimuli were longer than 30 seconds (some even longer than 10 minutes), I recommend that the field adopt the recommendation in the Eerola and Vuoskoski (2013) paper. Of course, relying on longer stimuli introduces a problem of internal validity, as the music may change substantially over the course of a minute. The reader is therefore cautioned that, with longer clips, it becomes difficult to know which musical event caused a particular affective response. It is recommended that researchers carefully select works that maintain a consistent emotion or range of emotions for 30 seconds or longer.

Recommendation 4: Pilot Test the Musical Stimuli.

I also suggest that studies rely on pilot testing. In the studies comprising the PUMS database, pilot testing occurred in only 3% of cases. From 2010-2018, less than one percent of the stimuli were pilot tested before being used in the main experiment. Although time and financial considerations will restrict the ability to conduct pilot tests in all studies, in some cases it may be prudent to administer a pilot study before the main experiment is run. A supposedly clear-cut example of musical sadness, for example, may be influenced by a single harmonic moment, the use of a joyful stimulus before the sad stimulus, or the familiarity of most participants with the passage. Knowing these confounds before the main experiment will allow the study to run more smoothly and could possibly increase the power to detect an experimental effect. Pilot testing stimuli offers the opportunity to improve the study design and has few drawbacks. In short, I urge researchers to engage in the pilot testing of stimuli before conducting an experiment.

Recommendation 5: Compare Musical Stimuli with Nonmusical Stimuli

The PUMS database contains few studies that compare emotional musical stimuli with emotional non-musical stimuli. Without such comparisons, it becomes impossible to determine whether an experimental effect is due to the music or due to valenced sounds in general. There exist databases of emotional speech samples (e.g., the Crowd-Sourced Emotional Multimodal Actors Dataset (CREMA-D); Cao et al., 2014) and emotional natural sound stimuli (e.g., the International Affective Digitized Sounds (IADS-2); Bradley & Lang, 2007). I believe that comparing emotional music stimuli to emotional speech or natural sound stimuli will benefit the field of music and emotion writ large.


The second part of the dissertation suggested that instead of just one emotional state labeled as sadness, music may be capable of expressing a richer variety of negative affects. The main conclusion was that previous research has conflated findings from melancholic and grieving emotional states and that this distinction may help to explain why researchers have been puzzled by nominally sad music. Although it is, of course, true that listeners can respond to identical music in different ways, I proposed the idea that some of the markedly different effects of sad music on listeners can also be attributed to the fact that there are multiple types of sad music. This idea should be applied to future research in the following ways.

Recommendation 6: Test for a “Sadness Spectrum”

Current psychological literature suggests that emotions like sadness may exist on a continuum: there may be an infinite number of sad emotional states (e.g., Barrett, 2017; Russell, 2003). It is possible, then, that music-related sadness can express or evoke a large—or even infinite—number of sad emotions. Music-related melancholy and grief may simply represent two kinds of sad emotional states out of, say, tens or hundreds of possible sad states. This idea would align with the emotional theory of psychological construction (e.g., Céspedes Guevara & Eerola, 2018). In other words, one could imagine a sadness spectrum that represents a possible continuum of emotional states, all of which have been previously combined under the label sadness. There has been little empirical work examining how to map various subjective feelings, behavioral characteristics, and physiological symptoms onto a continuous sadness spectrum, even in the psychological literature. In fact, the construction of such a spectrum may be impossible, as some theorists speculate that each instance of sadness may be incomparable (e.g., Barrett, 2017). It is not known how many types of sadness can be evoked or portrayed by music. It is possible that music represents and elicits only a few distinct kinds of sadness, corresponding to, say, melancholy and grief. However, I am intrigued by the possibility that a collection of musical passages can be mapped onto an entire sadness spectrum. Future research needs to address this idea directly by investigating a wide variety of music previously labeled as sad.

Recommendation 7: Investigate a Wide Variety of Musical Features

When identifying which music-related features might contribute to expressed or perceived emotion, a researcher would ideally combine methods from the Music Information Retrieval (MIR) literature, the Music Emotion Recognition (MER) field, corpus studies, and listener data. In the MIR and MER fields, there have been many studies that have examined audio features of emotional music, such as spectral centroid, spectral flatness, and roll-off frequency (e.g., Hong, Chau, & Horner, 2017; Panda, Malheiro, & Paiva, 2018). Corpus studies have examined structural features of emotional music, such as average interval size, dynamic level, and nPVI (e.g., Warrenburg & Huron, 2018). Unfortunately, corpus studies are typically limited to a few encoded corpora or require researchers to encode hundreds of musical samples. While MIR, MER, and corpus methods allow the analysis of thousands of works at once, there are additional features that may be of interest that cannot be studied by using these methods. When I asked participants with superior aural skills to explain which features they used to discriminate between melancholic and grieving music, these students listed 87 features that were not accounted for by the features proposed by Huron (see Appendix C, Table C.1 on page 333 for all of these features). Some of these responses included features like harmonic complexity, orchestration (sparse, dense, woodwind-focused, string-focused), timbral contrasts, texture (homophonic, polyphonic, monophonic), instrumental density or extensity, use of close miking (intimacy of the sound), and melodic structure (ascending, descending, parabolic). By asking listeners to classify features of a wide range of nominally sad music and then performing an unsupervised machine learning analysis, it may be possible to identify features that contribute to a number of types of sad states. The results may, at least, be consistent with the idea that there are more types of music-related sadness than just melancholy and grief.
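A minimal sketch of such an unsupervised analysis, using simulated listener ratings in place of real data:

    import numpy as np
    from sklearn.cluster import KMeans

    # Simulated ratings: rows are nominally sad passages, columns are listener-
    # rated features (e.g., harmonic complexity, orchestral density, contour).
    rng = np.random.default_rng(seed=0)
    ratings = np.vstack([
        rng.normal(loc=2.0, scale=0.5, size=(20, 5)),  # one putative subtype
        rng.normal(loc=5.0, scale=0.5, size=(20, 5)),  # another putative subtype
    ])

    # If more than one stable cluster emerges, the data are at least consistent
    # with there being multiple kinds of music-related sadness.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(ratings)
    print(np.bincount(labels))  # sizes of the recovered clusters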

Recommendation 8: Construct a Similarity Space of Music-Related Emotions

Recall that psychological research suggests that negative concepts may be more different from other negative concepts than positive concepts are from other positive concepts (Alves et al., 2016; Unkelbach, 2012; Unkelbach et al., 2008). Furthermore, linguistic work indicates that there tend to be more negative emotion words than positive emotion words and that the negative terms are more variable than the positive terms (Averill, 1980; Clore & Ortony, 1988; Russell et al., 1995; Van Goozen & Frijda, 1993). These findings could imply that a person experiences a greater number of distinctive negative emotions and a smaller number of distinctive positive emotions. In other words, negative emotional states (in both musical and non-aesthetic contexts) may be more diverse than positive emotional states.


Future research should investigate whether there is any evidence consistent with the above conjecture. I can imagine utilizing Russell and colleagues' Affect Grid (1989) in such studies. On the Affect Grid, a researcher could delineate sections that correspond to paradigmatic examples of different emotions. Fear, anger, and grief, for example, would be mapped onto the second quadrant, which implies that stereotypes of these emotional states will include feelings of negative valence and high arousal. Peacefulness and tenderness would be mapped onto the fourth quadrant, corresponding to experiences of low arousal and positive valence. In a manner similar to cluster analysis, the theoretical "distance" between two emotions could then be calculated, such as the mathematical distance between anger and panic or between anger and stress. Emotions that are closer together on the Affect Grid could be considered more similar to each other. By mapping emotions onto the Affect Grid, then, we could create a "similarity space" of emotional experiences. Namely, we could see if positively-valenced emotions are more similar to each other than negatively-valenced emotions are to other negatively-valenced emotions.
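A small sketch of such a similarity space follows; the grid coordinates are illustrative placements, not empirical estimates:

    import math

    # Hypothetical Affect Grid coordinates: (valence, arousal), each in [-1, 1].
    affect_grid = {
        "fear":         (-0.7,  0.8),
        "anger":        (-0.8,  0.7),
        "grief":        (-0.6,  0.6),   # second quadrant: negative, high arousal
        "peacefulness": ( 0.6, -0.6),   # fourth quadrant: positive, low arousal
        "tenderness":   ( 0.5, -0.5),
    }

    def emotion_distance(e1, e2):
        """Euclidean distance on the grid; smaller means more similar."""
        (v1, a1), (v2, a2) = affect_grid[e1], affect_grid[e2]
        return math.hypot(v1 - v2, a1 - a2)

    print(round(emotion_distance("anger", "grief"), 2))       # nearby states
    print(round(emotion_distance("anger", "tenderness"), 2))  # distant states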

In order to map emotions onto such a similarity space, a researcher could utilize listener reports of their felt emotions or psychophysical methods. Other researchers may want to create such a similarity space based on how listeners perceive emotions (in music or in non-aesthetic contexts).

It is also possible for musical structural elements to inform the creation of such a map. Musical excerpts that exemplify a particular emotional category (like melancholy) would contribute to this analysis. For example, it may be that Albinoni's Adagio is the most paradigmatic example of melancholy. Future researchers could use the theoretical spectrum, as well as the musical examples, to aid in their own work.

In order to use any of these methodologies, it is imperative for researchers to use emotionally-granular terms.

The last set of recommendations describes how the two theories presented in the third section of the dissertation can catalyze further investigation. In other words, researchers can design future studies to further explore the theories of ethology and psychological work on granularity.

Recommendation 9: Investigate the Ethological Framework by Testing Psychological Perspectives of Music Listening

One potential avenue for future research is to investigate whether melancholic and grieving music is experienced psychologically from a first-person (1p) or third-person perspective (3p). With a 1p perspective, music evokes a congruent emotion in the listener so that the listener feels the emotion expressed by the music. If the music is fearful, a listener will feel afraid while listening to the music. With a 3p perspective, the music is perceived as an agent expressing an emotion; listeners respond to these displays with a different emotion than the one expressed by the music. In this case, if the music is fearful, listeners may experience transcendence or awe in response to the music.

Recent work has found that these two perspectives (1p and 3p) differentially facilitate two qualitatively distinct processing styles (Libby & Eibach, 2011; Libby, Valenti, Hines, & Eibach, 2014). Specifically, 1p imagery leads a person to adopt an experiential processing style, relying on visceral responses and cognitive associations. 3p imagery leads a person to adopt a conceptual processing style, integrating information according to propositional beliefs and conceptual knowledge.

Given that melancholy-inducing music, as an ethological cue, may operate via an associative mechanism, and that 1p imagery facilitates associative processing, one might hypothesize that 1p (vs. 3p) imagery should heighten the congruent emotional experience of melancholy in response to melancholy-inducing music. If grief-inducing music operates as a biologically-based ethological signal, one could also hypothesize that 3p (vs. 1p) imagery would facilitate the complementary emotional experience of compassion in the listener. Such a finding could shed light on the psychological process by which grieving music operates, given the function of 3p imagery in facilitating conceptual processing. These findings would be consistent with the hypothesis that music induces melancholy through associative processes facilitated by 1p imagery, while grief arises through more conceptual processes facilitated by 3p imagery.

Recommendation 10: Utilize Emotional Granularity in Many Emotion “Categories”

As mentioned throughout the dissertation, the act of using emotionally-granular terms can aid in future research investigations regarding which features of music convey or evoke emotion in listeners. By refining which emotions are associated with specific musical passages, we can better analyze how listeners perceive and experience emotions from music listening.

The review of the PUMS data showed that a large percentage of all of the stimuli used by researchers were classified as happy or sad. While it is likely that the stimuli fit these broad designations, it is possible that more specificity can be used to identify happy or sad music. As mentioned in Chapter 10, using emotionally-granular terms can aid in this process. For example, instead of deeming a musical sample happy, it could also be considered joyful, elated, pleasant, peaceful, tender, or lively. A nominally sad passage of music may be better categorized as melancholic, grieving, anxious, nostalgic, or sorrowful. It is likely that nominally happy (and sad) music contains a wide variety of musical structures and can evoke different forms of emotion. Additionally, as discussed in Chapter 10, there is some evidence consistent with the idea that music-related fear can be better explained with terms like panic and anxiety.

Alan Cowen and Dacher Keltner recently published an article entitled "Self-report captures 27 distinct categories of emotion bridged by continuous gradients" (Cowen & Keltner, 2017). The emotion categories they identified included admiration, adoration, aesthetic appreciation, amusement, anger, anxiety, awe, awkwardness, boredom, calmness, confusion, craving, disgust, empathic pain, entrancement, excitement, fear, horror, interest, joy, nostalgia, relief, romance, sadness, satisfaction, sexual desire, and surprise. Each of these categories can likely be broken down into additional categories (notice, for example, that they do not differentiate between melancholy and grief, as they combined data for sadness, extreme sadness, and sympathy under a single heading of sadness). However, this list could be a good starting point for future researchers. For example, aesthetic appreciation could be broken down into awe, wonder, and transcendence. Future researchers could take each of these 27 emotion categories and investigate which broad categories—as well as more nuanced emotional states within each category—apply to music. I can imagine that it is difficult for music to relate to disgust but easy for music to relate to nostalgia, for example.

Recommendation 11: Assess Listeners’ Emotional Granularity

Recall that Lisa Feldman Barrett discusses two operationalizations of emotional granularity—it is both a dispositional characteristic and a teachable skill. Although a person’s baseline emotional granularity is difficult to measure, future work on music and emotion should either employ the methodologies proposed by Barrett (2004) or one of the proxies I presented in Chapter 10 in order to control for listeners’ baseline emotional granularity.

Recommendation 12: Directly Test Tenets of Psychological Construction

As the theory of psychological construction is relatively new, most of the work arising from psychological constructionists involves proposing theoretical ideas or disproving basic emotions and appraisal theories. It is important to also test the validity of psychological construction claims. In order to test theories of psychological construction, it is necessary to show that an emotion varies by context.

Consistent with psychological construction theories, other experiments (e.g., Céspedes Guevara, in preparation) have shown that context (e.g., an "autobiographical" summary of a composer during a good year or a bad year) can result in participants interpreting the same musical passage as nominally sad or nominally tender. In other words, Céspedes Guevara's work is consistent with the idea that priming participants can change the emotion that they perceive in a particular excerpt.


Inspired by this finding, future research could conduct more exploratory studies that would also be consistent with psychological construction claims. A simple experiment would use a within-subjects design, in which a listener hears the same musical excerpt in different circumstances (e.g., varying the level of background noise, inducing exhaustion through a separate task, asking participants to exercise). If a person always experiences the same emotion in response to the music, the basic emotions perspective would be supported. If a person experiences different emotions in response to the music, the psychological construction theory would be supported.

For example, a researcher could examine emotional responses and non-emotional responses to three one-minute passages of music: one that is melancholic, one that is grieving, and one that is melancholy/grief-ambiguous (i.e., people did not agree whether it was melancholic or grief-like). Participants could be asked to watch one minute of YouTube videos from YouTube Roulette (https://ytroulette.com/). This site shows random videos from YouTube, whether they are videos of cats, videos of soldiers being reunited with their families, or clips from movies. Using random YouTube videos allows the situation to vary by context and reduces the possibility that listeners will be biased by their own preferences. The researcher could then investigate the extent to which the emotional responses to the three types of emotional music vary across different listening experiences.

A basic emotions perspective on induced emotion would suppose that there is a one-to-one relation between a musical sample and the corresponding evoked emotion, because the features of the music should theoretically activate the same hormone or neural circuit during each listening. Once this hypothetical neural circuit or hormone is active, a person will always experience the same emotion. Results would be consistent with basic emotions or signaling theory, then, if a listener responds to all grieving music in a similar way and to all melancholic music in a similar way: it will not matter which videos the participants watched before listening to the music.

A psychological construction perspective on induced emotion, in turn, would suggest that the same musical excerpt could induce different emotions on different occasions. According to this theory, in different contexts, the emotional experience is constructed anew, even if the source of the emotion (i.e., the musical sample) remains constant. In this case, there should be few shared emotional experiences in response to stimuli within the same affective category (melancholic, ambiguous, and grieving) or among the different affective categories. In other words, a participant should be primed by the random YouTube videos. For each participant, then, a particular musical sample should evoke different emotions after watching different videos.
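The contrast between these two predictions can be sketched as a simple scoring rule over simulated responses (the labels and data are mine):

    # Simulated felt-emotion reports: one excerpt type heard after each of
    # three randomly assigned video contexts (within-subjects, one listener).
    responses = {
        "melancholic": {"context_1": "melancholy", "context_2": "melancholy",
                        "context_3": "melancholy"},
        "grieving":    {"context_1": "grief", "context_2": "nostalgia",
                        "context_3": "compassion"},
    }

    for excerpt_type, by_context in responses.items():
        invariant = len(set(by_context.values())) == 1
        # Context-invariant responses favor a basic-emotions reading;
        # context-dependent responses favor psychological construction.
        verdict = "context-invariant" if invariant else "context-dependent"
        print(excerpt_type, "->", verdict)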

Conclusions

Throughout this dissertation, I hope to have provided some insight into how (and why) listeners perceive and experience emotions from music listening. By investigating trends in past studies, conducting new studies, and proposing future studies, I hope to have contributed to the field of music and emotion. I found that the current literature relies on approximately nine emotional terms, that these emotion terms are used inconsistently across researchers, and that the types of emotions studied have changed from the 1920s through 2018. I furthermore suggested that some of the inconclusive results from previous emotion reviews could be due to the inconsistent use of emotional language throughout the music community. I posited that the variable way listeners respond to emotional music passages does not mean that people cannot perceive subtle shades of emotion, but rather indicates that researchers need to be clearer about which emotions are being studied. I examined music-related sadness as a case study and found that listeners were able to find structural differences between melancholic and grieving passages and that they perceived and experienced different emotions in response to these music types. Finally, I recommended that researchers use a new method for classifying musical emotions: emotional granularity. Emotional granularity provides researchers with the ability to re-define broad categories of emotions, such as sadness, and its use can result in future researchers finding additional subtle shades of emotion, previously unrecognized in the music and emotion literature. We are at a critical and exciting point in music and emotion research. Hopefully, future scholars can use some of the methods presented in this dissertation to be further stimulated into discovering why and how music expresses and makes us feel emotions—the reason why so many of us chose to study music in the first place.


References

Abu-Lughod, L. (1986). Veiled sentiments. Berkeley, CA: University of California Press.

Adachi, M., & Trehub, S. E. (2002). Decoding the expressive intentions in children’s songs. Music Perception, 18(2), 213-224.

Adolphs, R. (2017). How should neuroscience study emotions? By distinguishing emotion states, concepts, and experiences. Social Cognitive and Affective Neuroscience, 12(1), 24-31.

Adolphs, R. (2017). Reply to Barrett: affective neuroscience needs objective criteria for emotions. Social Cognitive and Affective Neuroscience, 12(1), 32-33.

Al’tman, Y. A., Alyanchikova, Y. O., Guzikov, B. M., & Zakharova, L. E. (2000). Estimation of short musical fragments in normal subjects and patients with chronic depression. Human Physiology, 26(5), 553-557.

Albersnagel, F. A. (1988). Velten and musical mood induction procedures: A comparison with accessibility in thought associations. Behaviour Research and Therapy, 26(1), 79-96.

Alfredson, B. B., Risberg, J., Hagberg, B., & Gustafson, L. (2004). Right temporal lobe activation when listening to emotionally significant music. Applied Neuropsychology, 11(3), 161-166.

Ali, S. O., & Peynircioglu, Z. F. (2006). Songs and emotions: Are lyrics and melodies equal partners? Psychology of Music, 34(4), 511-534.

Aljanaki, A., Wiering, F., & Veltkamp, R. (2014). Computational modeling of induced emotion using GEMS. In Proceedings of the 15th Conference of the International Society for Music Information Retrieval (ISMIR 2014) (pp. 373-378).

Aljanaki, A., Wiering, F., & Veltkamp, R. C. (2016). Studying emotion induced by music through a crowdsourcing game. Information Processing & Management, 52(1), 115-128.

Altenmüller, E., Schuermann, K., Lim, V. K., & Parlitz, D. (2002). Hits to the left, flops to the right: Different emotions during listening to music are reflected in cortical lateralization patterns. Neuropsychologia, 40(13), 2242-2256.


Alves, H., Koch, A., Unkelbach, C. (2016). My friends are all alike – the relation between liking and perceived similarity in person perception. Journal of Experimental , 62, 103-117.

Andrews, P. W., & Thomson Jr., J. A. (2009). The bright side of being blue: Depression as an adaptation for analyzing complex problems. Psychological Review, 116(3), 620-654.

Angliss, S. (2003). Soundless Music. In Arends, B. & Thackara, D. (Eds.), Experiment: Conversations in art and science (pp. 132-171). London: The Wellcome Trust.

Arnold, M. B. (1960). Emotion and personality. New York, NY, US: Columbia University Press.

Aron, A., Aron, E. N., & Smollan, D. (1992). Inclusion of Other in the Self Scale and the structure of interpersonal closeness. Journal of Personality and Social Psychology, 63(4), 596-612.

Asmus, E. P. (1985). The development of a multidimensional instrument for the measurement of affective responses to music. Psychology of Music, 13(1), 19-30.

Aucouturier, J. J., & Canonne, C. (2017). Musical friends and foes: The social cognition of affiliation and control in improvised interactions. Cognition, 161, 94-108.

Averill, J. R. (1980). A constructivist view of emotion. In Theories of emotion (pp. 305-339). Academic Press.

Averill, J. R. (2012). The future of social constructionism: introduction to a special section of emotion review. Emotion Review, 4(3), 215-220.

Bachorik, J. P., Bangert, M., Loui, P., Larke, K., Berger, J., Rowe, R., & Schlaug, G. (2009). Emotion in motion: Investigating the time-course of emotional judgments of musical stimuli. Music Perception: An Interdisciplinary Journal, 26(4), 355-364.

Baker, F. A., Gleadhill, L. M., & Dingle, G. A. (2007). Music therapy and emotional exploration: Exposing clients to the experiences of non-drug-induced emotions. The Arts in Psychotherapy, 34(4), 321-330.

Balch, W. R., Myers, D. M., & Papotto, C. (1999). Dimensions of mood in mood-dependent memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 25(1), 70-83.

Balkwill, L. L., & Thompson, W. F. (1999). A cross-cultural investigation of the perception of emotion in music: Psychophysical and cultural cues. Music Perception, 17(1), 43-64.

Balkwill, L. L., Thompson, W. F., & Matsunaga, R. I. E. (2004). Recognition of emotion in Japanese, Western, and Hindustani music by Japanese listeners. Japanese Psychological Research, 46(4), 337-349.

Baltes, F. R., & Miu, A. C. (2014). Emotions during live music performance: Links with individual differences in empathy, visual imagery, and mood. Psychomusicology: Music, Mind, and Brain, 24(1), 58-65.

Balteş, F. R., Avram, J., Miclea, M., & Miu, A. C. (2011). Emotions induced by operatic music: Psychophysiological effects of music, plot, and acting: A scientist’s tribute to Maria Callas. Brain and Cognition, 76(1), 146-157.

Baltrušaitis, T., Robinson, P., & Morency, L. P. (2016, March). Openface: an open source facial behavior analysis toolkit. In 2016 IEEE Winter Conference on Applications of Computer Vision (WACV) (pp. 1-10). IEEE.

Banse, R., & Scherer, K. R. (1996). Acoustic profiles in vocal emotion expression. Journal of Personality and Social Psychology, 70(3), 614-636.

Bard, P. (1928). A diencephalic mechanism for the expression of rage with special reference to the sympathetic nervous system. American Journal of Physiology-Legacy Content, 84(3), 490-515.

Barrett, L. F. (2004). Feelings or words? Understanding the content in self-report ratings of experienced emotion. Journal of Personality and Social Psychology, 87(2), 266-281.

Barrett, L. F. (2006). Are emotions natural kinds? Perspectives on Psychological Science, 1(1), 28-58.

Barrett, L. F. (2006). Solving the emotion paradox: Categorization and the experience of emotion. Personality and Social Psychology Review, 10(1), 20-46.

Barrett, L. F. (2013). Psychological construction: The Darwinian approach to the science of emotion. Emotion Review, 5(4), 379-389.

Barrett, L. F. (2017). How emotions are made: The secret life of the brain. Houghton Mifflin Harcourt.

Bartel, L. R. (1992). The development of the cognitive-affective response test-music. Psychomusicology, 11(1), 15-26.

Baumgartner, T., Esslen, M., & Jäncke, L. (2006). From emotion perception to emotion experience: Emotions evoked by pictures and classical music. International Journal of Psychophysiology, 60(1), 34-43.

Bazhydai, M., Ivcevic, Z., Brackett, M. A., & Widen, S. C. (2018). Breadth of emotion vocabulary in early adolescence. Imagination, Cognition and Personality: Consciousness in Theory, Research, and Clinical Practice, 0(0), 1-27.

Behrens, G. A., & Green, S. B. (1993). The ability to identify emotional content of solo improvisations performed vocally and on three different instruments. Psychology of Music, 21(1), 20-33.

Berlin, B., & Kay, P. (1969). Basic color terms: Their universality and evolution. Berkeley: University of California Press.

Bigand, E., Tillmann, B., Poulin, B., D’Adamo, D. A., & Madurell, F. (2001). The effect of harmonic context on phoneme monitoring in vocal music. Cognition, 81(1), B11-B20.

Bigand, E., Vieillard, S., Madurell, F., Marozeau, J., & Dacquet, A. (2005). Multidimensional scaling of emotional responses to music: The effect of musical expertise and of the duration of the excerpts. Cognition & Emotion, 19(8), 1113-1139.

Bishop, D. T., Karageorghis, C. I., & Kinrade, N. P. (2009). Effects of musically-induced emotions on choice reaction time performance. The Sport Psychologist, 23(1), 59-76.

Blood, A. J., & Zatorre, R. J. (2001). Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion. Proceedings of the National Academy of Sciences, 98(20), 11818-11823.

Blood, A. J., Zatorre, R. J., Bermudez, P., & Evans, A. C. (1999). Emotional responses to pleasant and unpleasant music correlate with activity in paralimbic brain regions. Nature Neuroscience, 2(4), 382-387.

Bolinger, D. (1978). Intonation across languages. Universals of Human Language, 2, 471-524.

Boone, R. T., & Cunningham, J. G. (2001). Children’s expression of emotional meaning in music through expressive body movement. Journal of Nonverbal Behavior, 25(1), 21-41.

Bosacki, S. L., & Moore, C. (2004). Preschoolers’ understanding of simple and complex emotions: Links with gender and language. Sex Roles, 50(9-10), 659-675.

Bouhuys, A. L., Bloem, G. M., & Groothuis, T. G. G. (1995). Induction of depressed and elated mood by music influences the perception of facial emotional expressions in healthy subjects. Journal of Affective Disorders, 33(4), 215-226.

Boutcher, S. H., & Trenske, M. (1990). The effects of sensory deprivation and music on perceived exertion and affect during exercise. Journal of Sport and Exercise Psychology, 12(2), 167-176.

Bower, G. H., & Mayer, J. D. (1989). In search of mood-dependent retrieval. Journal of Social Behavior and Personality, 4(2), 121-156.

Bowlby, J. (1969). Attachment. New York: Basic Books.

Bradley, M. M. & Lang, P. J. (2007). The International Affective Digitized Sounds (2nd Edition; IADS-2): Affective ratings of sounds and instruction manual. Technical Report B-3. University of Florida, Gainesville, FL.

Brattico, E., Alluri, V., Bogert, B., Jacobsen, T., Vartiainen, N., Nieminen, S. K., & Tervaniemi, M. (2011). A functional MRI study of happy and sad emotions in music with and without lyrics. Frontiers in Psychology, 2(308).

Bresin, R., & Friberg, A. (2000). Emotional coloring of computer-controlled music performance. Computer Music Journal, 24(4), 44-63.

Brittin, R. V., & Duke, R. A. (1997). Continuous versus summative evaluations of musical intensity: A comparison of two methods for measuring overall effect. Journal of Research in Music Education, 45(2), 245-258.

Brown, S., Martinez, M. J., & Parsons, L. M. (2004). Passive music listening spontaneously engages limbic and paralimbic systems. Neuroreport, 15(13), 2033-2037.

Broze, G. J. (2013). Animacy, anthropomimesis, and musical line. PhD dissertation, School of Music, The Ohio State University.

Brunswik, E. (1956). Perception and the representative design of experiments. Berkeley: University of California Press.

Burger, B., Saarikallio, S., Luck, G., Thompson, M. R., & Toiviainen, P. (2013). Relationships between perceived emotions in music and music-induced movement. Music Perception: An Interdisciplinary Journal, 30(5), 517-533.

Camras, L. A., & Witherington, D. C. (2005). Dynamical systems approaches to emotional development. Developmental Review, 25(3-4), 328-350.

Camurri, A., Mazzarino, B., Ricchetti, M., Timmers, R., & Volpe, G. (2003, April). Multimodal analysis of expressive gesture in music and dance performances. In International Gesture Workshop (pp. 20-39). Springer, Berlin, Heidelberg.

Canazza, S., & Orio, N. (1999). The communication of emotions in jazz music: A study on piano and saxophone performances. In M. O. Belardinelli & C. Fiorelli (Eds.), Musical behavior and cognition (pp. 261–276). Rome: Edizioni Scientifiche Magi.

Cannon, W.B. (1929). Bodily changes in pain, hunger, fear, and rage, 2nd Ed. New York: Appleton.

Cao, H., Cooper, D. G., Keutmann, M. K., Gur, R. C., Nenkova, A., & Verma, R. (2014). CREMA-D: Crowd-sourced emotional multimodal actors dataset. IEEE Transactions on Affective Computing, 5(4), 377-390.

Carter, F. A., Wilson, J. S., Lawson, R. H., & Bulik, C. M. (1995). Mood induction procedure: Importance of individualizing music. Behaviour Change, 12(1), 159-161.

Céspedes Guevara, J. (in preparation).

Céspedes Guevara, J., & Eerola, T. (2018). Music Communicates Affects, Not Basic Emotions–A Constructionist Account of Attribution of Emotional Meanings to Music. Frontiers in Psychology, 9(215).

Chan, D. (2009). So why ask me? Are self-report data really that bad? In C. E. Lance & R. J. Vandenberg (Eds.), Statistical and methodological myths and urban legends: Doctrine, verity and fable in the organizational and social sciences (pp. 309-336). Taylor & Francis.

Chapados, C., & Levitin, D. (2008). Cross-modal interactions in the experience of musical performances: Physiological correlates. Cognition, 108(3), 639-651.

Chapin, H., Jantzen, K., Kelso, J. S., Steinberg, F., & Large, E. (2010). Dynamic emotional and neural responses to music depend on performance expression and listener experience. PLoS One, 5(12), e13812.

Chen, J., Yuan, J., Huang, H., Chen, C., & Li, H. (2008). Music-induced mood modulates the strength of emotional negativity bias: An ERP study. Neuroscience Letters, 445(2), 135-139.

Chen, L., Zhou, S., & Bryant, J. (2007). Temporal changes in mood repair through music consumption: Effects of mood, mood salience, and individual differences. Media Psychology, 9(3), 695-713.

Cialdini, R. B., Brown, S. L., Lewis, B. P., Luce, C., & Neuberg, S. L. (1997). Reinterpreting the empathy–altruism relationship: When one into one equals oneness. Journal of Personality and Social Psychology, 73(3), 481-494.

Clark, D. M. (1983). On the induction of depressed mood in the laboratory: Evaluation and comparison of the Velten and musical procedures. Advances in Behaviour Research and Therapy, 5(1), 27-49.

Clark, D. M., & Teasdale, J. D. (1985). Constraints on the effect of mood on memory. Journal of Personality and Social Psychology, 48(6), 1595-1608.

Clark, L., Iversen, S. D., & Goodwin, G. M. (2001). The influence of positive and negative mood states on risk taking, verbal fluency, and salivary cortisol. Journal of Affective Disorders, 63(1-3), 179-187.

Clore, G. L. & Huntsinger, J. R. (2007). How emotions inform judgment and regulate thought. Trends in Cognitive Sciences, 11(9), 393-399.

Clore, G. L. & Ortony, A. (2008). Appraisal theories: How cognition shapes affect into emotion. In M. Lewis, J.M. Haviland-Jones, & L.F. Barrett (Eds.) Handbook of Emotions, 3rd Ed. (pp. 628-642). New York: Guilford Press.

Coffman, D. D., Gfeller, K., & Eckert, M. (1995). Effect of textual setting, training, and gender on emotional response to verbal and musical information. Psychomusicology: A Journal of Research in Music Cognition, 14(1-2), 117-136.

Collier, G. L. (2002). Why does music express only some emotions? A test of a theory. Empirical Studies of the Arts, 20(1), 21-31.

Collier, G. L. (2007). Beyond valence and activity in the emotional connotations of music. Psychology of Music, 35(1), 110-131.

Collier, W. G., & Hubbard, T. L. (2001). Musical scales and evaluations of happiness and awkwardness: Effects of pitch, direction, and scale mode. The American Journal of Psychology, 114(3), 355-375.

Colombetti, G. (2014). The feeling body: Affective science meets the enactive mind. MIT Press.

Cornelius, R. R. (2000). Theoretical approaches to emotion. In ISCA Tutorial and Research Workshop (ITRW) on Speech and Emotion. Newcastle, Northern Ireland, UK.

Costa-Giomi, E. (1996). Mode discrimination abilities of pre-school children. Psychology of Music, 24(2), 184-198.

Costa-Giomi, E., & Davila, Y. (2014). Infants’ discrimination of female voices. International Journal of Music Education, 32(3), 324-332.

Costa, M., Fine, P., & Bitti, P. E. R. (2004). Interval distributions, mode, and tonal strength of melodies as predictors of perceived emotion. Music Perception, 22(1), 1-14.

Coutinho, E., & Cangelosi, A. (2011). Musical emotions: Predicting second-by-second subjective feelings of emotion from low-level psychoacoustic features and physiological measurements. Emotion, 11(4), 921-937.

Coutinho, E., & Dibben, N. (2013). Psychoacoustic cues to emotion in speech prosody and music. Cognition & Emotion, 27(4), 658-684.

Cowen, A. S., & Keltner, D. (2017). Self-report captures 27 distinct categories of emotion bridged by continuous gradients. Proceedings of the National Academy of Sciences, 114(38), E7900-E7909.

Cowie, R., Douglas-Cowie, E., Savvidou, S., McMahon, E., Sawey, M., & Schröder, M. (2000). ‘FEELTRACE’: An instrument for recording perceived emotion in real time. In ISCA Tutorial and Research Workshop (ITRW) on Speech and Emotion. Newcastle, Northern Ireland, UK.

Craig, D. G. (2005). An exploratory study of physiological changes during “chills” induced by music. Musicae Scientiae, 9(2), 273-287.

Craske, M. G., Treanor, M., Conway, C. C., Zbozinek, T., & Vervliet, B. (2014). Maximizing exposure therapy: An inhibitory learning approach. Behaviour Research and Therapy, 58, 10-23.

Cunningham, J. G., & Sterling, R. S. (1988). Developmental change in the understanding of affective meaning in music. Motivation and Emotion, 12(4), 399-413.

Dahl, S., & Friberg, A. (2007). Visual perception of expressiveness in musicians’ body movements. Music Perception, 24(5), 433-454.

Dalla Bella, S., Peretz, I., Rousseau, L., & Gosselin, N. (2001). A developmental study of the affective value of tempo and mode in music. Cognition, 80(3), B1-B10.

Daly, I., Malik, A., Hwang, F., Roesch, E., Weaver, J., Kirke, A., Williams, D., Miranda, E., & Nasuto, S. J. (2014). Neural correlates of emotional responses to music: an EEG study. Neuroscience Letters, 573, 52-57.

Damasio, A. R. (1999). The feeling of what happens: Body and emotion in the making of consciousness. New York, NY: Harcourt Brace.

Damasio, A. R., Grabowski, T. J., Bechara, A., Damasio, H., Ponto, L. L., Parvizi, J., & Hichwa, R. D. (2000). Subcortical and cortical brain activity during the feeling of self-generated emotions. Nature Neuroscience, 3(10), 1049-1056.

Darrow, A. A. (2006). The role of music in deaf culture: Deaf students’ perception of emotion in music. Journal of Music Therapy, 43(1), 2-15.

Darwin, C. (1872). The expression of the emotions in man and animals. New York: Philosophical Library.

David Frego, R. J. (1999). Effects of aural and visual conditions on response to perceived artistic tension in music and dance. Journal of Research in Music Education, 47(1), 31-43.

Davies, J. B. (1978). The psychology of music. London: Hutchinson.

Davis, M. H. (1983). Measuring individual differences in empathy: Evidence for a multidimensional approach. Journal of Personality and Social Psychology, 44(1), 113-126.

De Clercq, T., & Temperley, D. (2011). A corpus analysis of rock harmony. Popular Music, 30(1), 47-70.

de Gelder, B., & Vroomen, J. (1996). Categorical perception of emotional speech. The Journal of the Acoustical Society of America, 100(4), 2818.

Dellacherie, D., Ehrlé, N., & Samson, S. (2008). Is the neutral condition relevant to study musical emotion in patients? Music Perception, 25(4), 285-294.

Demos, A. P., Chaffin, R., Begosh, K. T., Daniels, J. R., & Marsh, K. L. (2012). Rocking to the beat: Effects of music and partner’s movements on spontaneous interpersonal coordination. Journal of Experimental Psychology: General, 141(1), 49-53.

DeNora, T. (2000). Music in everyday life. Cambridge University Press.

Dibben, N. (2004). The role of peripheral feedback in emotional experience with music. Music Perception, 22(1), 79-115.

Dimberg, U., & Öhman, A. (1996). Behold the wrath: Psychophysiological responses to facial stimuli. Motivation and Emotion, 20(2), 149-182.

Dolgin, K. G., & Adelson, E. H. (1990). Age changes in the ability to interpret affect in sung and instrumentally-presented melodies. Psychology of Music, 18(1), 87-98.

Drapeau, J., Gosselin, N., Gagnon, L., Peretz, I., & Lorrain, D. (2009). Emotional recognition from face, voice, and music in dementia of the Alzheimer type. Annals of the New York Academy of Sciences, 1169(1), 342-345.

Droit-Volet, S., Bueno, L. J., & Bigand, E. (2013). Music, emotion, and time perception: the influence of subjective emotional valence and arousal? Frontiers in Psychology, 4(417).

Duinker, B., & Léveillé Gauvin, H. (2017). Changing content in flagship music theory journals, 1979-2014. Music Theory Online, 23(4).

Dutton, D. G., & Aron, A. P. (1974). Some evidence for heightened sexual attraction under conditions of high anxiety. Journal of Personality and Social Psychology, 30(4), 510-517.

Ebie, B. D. (1999). The effects of verbal, vocally modeled, kinesthetic, and audio-visual treatment conditions on male and female middle school vocal music students’ abilities to expressively sing melodies. Psychology of Music, 32(4), 405-417.

Eerola, T. (2011). Are the emotions expressed in music genre-specific? An audio-based evaluation of datasets spanning classical, film, pop and mixed genres. Journal of New Music Research, 40(4), 349-366.

Eerola, T., & Peltola, H. R. (2016). Memorable experiences with sad music—reasons, reactions and mechanisms of three types of experiences. PLoS One, 11(6), e0157444.

Eerola, T., & Vuoskoski, J. K. (2011). A comparison of the discrete and dimensional models of emotion in music. Psychology of Music, 39(1), 18-49.

Eerola, T., & Vuoskoski, J. K. (2013). A review of music and emotion studies: Approaches, emotion models, and stimuli. Music Perception, 30(3), 307-340.

Eerola, T., Lartillot, O., & Toiviainen, P. (2009, October). Prediction of multidimensional emotional ratings in music from audio using multivariate regression models. In ISMIR (pp. 621-626).

Eerola, T., Peltola, H. R., & Vuoskoski, J. K. (2015). Attitudes toward sad music are related to both preferential and contextual strategies. Psychomusicology: Music, Mind, and Brain, 25(2), 116-123.

Eerola, T., Vuoskoski, J. K., & Kautiainen, H. (2016). Being moved by unfamiliar sad music is associated with high empathy. Frontiers in Psychology, 7(1176).

Eerola, T., Vuoskoski, J. K., Peltola, H. R., Putkinen, V., & Schäfer, K. (2018). An integrative review of the enjoyment of sadness associated with music. Physics of Life Reviews, 25, 100-121.

Efimova, I. V., & Budyka, E. V. (2006). The use of emotional evaluations of short musical fragments for analyzing the functional states of students. Human Physiology, 32(3), 278-281.

Egermann, H., & McAdams, S. (2013). Empathy and emotional contagion as a link between recognized and felt emotions in music listening. Music Perception: An Interdisciplinary Journal, 31(2), 139-156.

Egermann, H., Fernando, N., Chuen, L., & McAdams, S. (2015). Music induces universal emotion-related psychophysiological responses: Comparing Canadian listeners to Congolese Pygmies. Frontiers in Psychology, 5(1341).

Egermann, H., Grewe, O., Kopiez, R., & Altenmüller, E. (2009). Social feedback influences musically induced emotions. Annals of the New York Academy of Sciences, 1169(1), 346-350.

Egermann, H., Nagel, F., Altenmüller, E., & Kopiez, R. (2009). Continuous measurement of musically-induced emotion: A web experiment. International Journal of Internet Science, 4(1), 4-20.

Egermann, H., Sutherland, M. E., Grewe, O., Nagel, F., Kopiez, R., & Altenmüller, E. (2011). Does music listening in a social context alter experience? A physiological and psychological perspective on emotion. Musicae Scientiae, 15(3), 307-323.

Eibl-Eibesfeldt, I. (1989). Human ethology. New York: Aldine de Gruyter.

Eich, J. E., & Metcalfe, J. (1989). Mood-dependent memory for internal versus external events. Journal of Experimental Psychology: Learning, Memory, & Cognition, 15(3), 433-455.

Ekman, P. (1971). Universals and cultural differences in facial expressions of emotion. In Nebraska symposium on motivation. University of Nebraska Press.

Ekman, P. (1977). Biological and cultural contributions to body and facial movement. In J. Blacking (Ed.), Anthropology of the body (pp. 34–84). London: Academic Press.

Ekman, P. (1992). An argument for basic emotions. Cognition & Emotion, 6(3-4), 169-200.

Ekman, P. (2003). Darwin, deception, and facial expression. Annals of the New York Academy of Sciences, 1000(1), 205-221.

Elias, N. (1978). The history of manners: The civilizing process, Vol. 1. New York: Pantheon.

Ellsworth, P. C. (2013). Appraisal theory: Old and new questions. Emotion Review, 5(2), 125-131.

Ellsworth, P. C., & Scherer, K. R. (2003). Appraisal processes in emotion. In R. J. Davidson, K. R. Scherer, & H. H. Goldsmith (Eds.), Handbook of affective sciences (pp. 572-595). New York: Oxford University Press.

Emerson, G., & Egermann, H. (2018). Gesture-sound causality from the audience’s perspective: Investigating the aesthetic experience of performances with digital musical instruments. Psychology of Aesthetics, Creativity, and the Arts, 12(1), 96-109.

Epstein, S. (2019, April 17). Four types of grief nobody told you about. Retrieved from https://www.psychologytoday.com/us/blog/between-the-generations/201904/four-types-grief-nobody-told-you-about

Eschrich, S., Münte, T., & Altenmüller, E. (2008). Unforgettable film music: The role of emotion in episodic long-term memory for music. BMC Neuroscience, 9(48).

Etzel, J. A., Johnsen, E. L., Dickerson, J., Tranel, D., & Adolphs, R. (2006). Cardiovascular and respiratory responses during musical mood induction. International Journal of Psychophysiology, 61(1), 57-69.

Evans, P., & Schubert, E. (2008). Relationships between expressed and felt emotions in music. Musicae Scientiae, 12(1), 75-99.

Faith, M., & Thayer, J. F. (2001). A dynamical systems interpretation of a dimensional model of emotion. Scandinavian Journal of Psychology, 42(2), 121-133.

Farinelli, M., Panksepp, J., Gestieri, L., Maffei, M., Agati, R., Cevolani, D., Predone, V., & Northoff, G. (2015). Do brain lesions in stroke affect basic emotions and attachment? Journal of Clinical and Experimental Neuropsychology, 37(6), 595-613.

Farnsworth, P. R. (1954). A study of the Hevner adjective list. The Journal of Aesthetics and Art Criticism, 13(1), 97-103.

Fletcher, H., & Munson, W. A. (1933). Loudness, its definition, measurement and calculation. Bell System Technical Journal, 12(4), 377-430.

Flom, R., Gentile, D., & Pick, A. (2008). Infants’ discrimination of happy and sad music. Infant Behavior and Development, 31(4), 716-728.

Flores-Gutiérrez, E. O., Díaz, J. L., Barrios, F. A., Favila-Humara, R., Guevara, M. Á., del Río-Portilla, Y., & Corsi-Cabrera, M. (2007). Metabolic and electric brain patterns during pleasant and unpleasant emotions induced by music masterpieces. International Journal of Psychophysiology, 65(1), 69-84.

Fogel, A., & Thelen, E. (1987). Development of early expressive and communicative action: Reinterpreting the evidence from a dynamic systems perspective. Developmental Psychology, 23(6), 747-761.

Frank, R. H. (1988). Passions within reason: The strategic role of the emotions. New York: W W Norton & Co.

Fredrickson, W. E. (1995). A comparison of perceived musical tension and aesthetic response. Psychology of Music, 23(1), 81-87.

Frick, R. W. (1985). Communicating emotion: The role of prosodic features. Psychological Bulletin, 97(3), 412-429.

Frijda, N. H. (1986). The emotions. New York, NY: Cambridge University Press.

Frijda, N. H., & Mesquita, B. (1994). The social roles and functions of emotions. In S. Kitayama & H. R. Markus (Eds.), Emotion and culture: Empirical studies of mutual influence (pp. 51-87). Washington, DC, US: American Psychological Association.

Frijda, N. H., & Sundararajan, L. (2007). Emotion refinement: A theory inspired by Chinese poetics. Perspectives on Psychological Science, 2(3), 227-241.

Frijda, N. H., Kuipers, P., & Ter Schure, E. (1989). Relations among emotion, appraisal, and emotional action readiness. Journal of Personality and Social Psychology, 57(2), 212-228.

Fritz, T., Jentschke, S., Gosselin, N., Sammler, D., Peretz, I., Turner, R., Friederici, A.D., & Koelsch, S. (2009). Universal recognition of three basic emotions in music. Current Biology, 19(7), 573-576.

Gabrielsson, A. (2001). Emotions in strong experiences with music. In P. N. Juslin & J. A. Sloboda (Eds.), Series in affective science. Music and emotion: Theory and research (pp. 431-449). New York, NY, US: Oxford University Press.

Gabrielsson, A. (2011). Strong experiences with music: Music is much more than just music. Oxford University Press.

Gabrielsson, A., & Juslin, P. N. (1996). Emotional expression in music performance: Between the performer’s intention and the listener’s experience. Psychology of Music, 24(1), 68-91.

Gabrielsson, A., & Lindström, E. (1995). Emotional expression in synthesizer and sentograph performance. Psychomusicology, 14(1-2), 94-116.

Gabrielsson, A., & Lindström, E. (2010). The role of structure in the musical expression of emotions. In P. N. Juslin & J. A. Sloboda (Eds.), Series in affective science. Handbook of music and emotion: Theory, research, applications (pp. 367-400). New York: Oxford University Press.

Gagnon, L., & Peretz, I. (2000). Laterality effects in processing tonal and atonal melodies with affective and nonaffective task instructions. Brain and Cognition, 43(1-3), 206-210.

Gagnon, L., & Peretz, I. (2003). Mode and tempo relative contributions to “happy-sad” judgments in equitone melodies. Cognition and Emotion, 17(1), 25-40.

Garrido, S. (2014). A systematic review of the studies measuring mood and emotion in response to music. Psychomusicology: Music, Mind, and Brain, 24(4), 316-327.

Garrido, S., & Schubert, E. (2011). Individual differences in the enjoyment of negative emotion in music: A literature review and experiment. Music Perception: An Interdisciplinary Journal, 28(3), 279-296.

Garrido, S., & Schubert, E. (2011). Negative emotion in music: What is the attraction? A qualitative study. Empirical Musicology Review, 6(4), 214-230.

Gelstein, S., Yeshurun, Y., Rozenkrantz, L., Shushan, S., Frumin, I., Roth, Y., & Sobel, N. (2011). Human tears contain a chemosignal. Science, 331(6014), 226-230.

Gerardi, G. M., & Gerken, L. (1995). The development of affective responses to modality and melodic contour. Music Perception, 12(3), 279-290.

Geringer, J.M., Cassidy, J. W., & Byo, J. L. (1996). Effects of music with video on responses of nonmusic majors: An exploratory study. Journal of Research in Music Education, 44(3), 240-251.

Gerling C. C., & dos Santos, R. A. T. (2007). Intended versus perceived emotion. In International Symposium on Performance Science: Theories, methods and applications in Music (Vol. 1, pp. 233-238).

Gfeller, K., & Coffman, D. D. (1991). An investigation of emotional response of trained musicians to verbal and music information. Psychomusicology: A Journal of Research in Music Cognition, 10(1), 31-48.

Gfeller, K., Asmus, E. P., & Eckert, M. (1991). An investigation of emotional response to music and text. Psychology of Music, 19(2), 128-141.

Giomo, C. J. (1993). An experimental study of children’s sensitivity to mood in music. Psychology of Music, 21(2), 141-162.

Giovannelli, F., Banfi, C., Borgheresi, A., Fiori, E., Innocenti, I., Rossi, S., Zaccara, G., Viggiano, M. P., & Cincotta, M. (2013). The effect of music on corticospinal excitability is related to the perceived emotion: A transcranial magnetic stimulation study. Cortex, 49(3), 702-710.

Goffman, E. (1967). On face-work. In E. Goffman (Ed.), Interaction ritual (pp. 5-45). New York: Doubleday Anchor.

Goldblatt, D., & Williams, D. (1986). “I An Sniling!” Möbius’ Syndrome Inside and Out. Journal of Child Neurology, 1(1), 71-78.

Gomez, P., & Danuser, B. (2004). Affective and physiological responses to environmental noises and music. International Journal of Psychophysiology, 53(2), 91-103.

Gomez, P., & Danuser, B. (2007). Relationships between musical structure and psychophysiological measures of emotion. Emotion, 7(2), 377-387.

Gorn, G., Pham, M. T., & Sin, L. Y. (2001). When arousal influences ad evaluation and valence does not (and vice versa). Journal of Consumer Psychology, 11(1), 43-55.

Gosselin, N., Peretz, I., Johnsen, E., & Adolphs, R. (2007). Amygdala damage impairs emotion recognition from music. Neuropsychologia, 45(2), 236-244.

Gosselin, N., Peretz, I., Noulhiane, M., Hasboun, D., Beckett, C., Baulac, M., & Samson, S. (2005). Impaired recognition of scary music following unilateral temporal lobe excision. Brain, 128(3), 628-640.

Gosselin, N., Samson, S., Adolphs, R., Noulhiane, M., Roy, M., Hasboun, D., Baulac, M., & Peretz, I. (2006). Emotional responses to unpleasant music correlates with damage to the parahippocampal cortex. Brain, 129(10), 2585-2592.

Goydke, K. N., Altenmüller, E., Möller, J., & Münte, T. F. (2004). Changes in emotional tone and instrumental timbre are reflected by the mismatch negativity. Cognitive Brain Research, 21(3), 351-359.

Green, A., Bærentsen, K., Stødkilde-Jørgensen, H., Wallentin, M., Roepstorff, A., & Vuust, P. (2008). Music in minor activates limbic structures: A relationship with dissonance? Neuroreport, 19(7), 711-715.

Gregory, A. H., & Varney, N. (1996). Cross-cultural comparisons in the affective response to music. Psychology of Music, 24(1), 47-52.

Grewe, O., Kopiez, R., & Altenmüller, E. (2009). The chill parameter: Goose bumps and shivers as promising measures in emotion research. Music Perception: An Interdisciplinary Journal, 27(1), 61-74.

Grewe, O., Nagel, F., Kopiez, R., & Altenmüller, E. (2005). How does music arouse “Chills”? Annals of the New York Academy of Sciences, 1060(1), 446-449.

Grewe, O., Nagel, F., Kopiez, R., & Altenmüller, E. (2007). Emotions over time: Synchronicity and development of subjective, physiological, and facial affective reactions to music. Emotion, 7(4), 774-788.

Grewe, O., Nagel, F., Kopiez, R., & Altenmüller, E. (2007). Listening to music as a re-creative process: Physiological, psychological, and psychoacoustical correlates of chills and strong emotions. Music Perception, 24(3), 297-314.

Gross, J. J., & Feldman Barrett, L. (2011). Emotion generation and emotion regulation: One or two depends on your point of view. Emotion Review, 3(1), 8-16.

Gupta, U., & Gupta, B. (2005). Psychophysiological responsivity to Indian instrumental music. Psychology of Music, 33(4), 363-372.

Hailstone, J. C., Omar, R., Henley, S. M., Frost, C., Kenward, M. G., & Warren, J. D. (2009). It’s not what you play, it’s how you play it: Timbre affects perception of emotion in music. The Quarterly Journal of Experimental Psychology, 62(11), 2141-2155.

Hair, H. I. (1981). Verbal identification of music concepts. Journal of Research in Music Education, 29(1), 11-21.

Hannon, E. E., & Trehub, S. E. (2005). Tuning in to musical rhythms: Infants learn more readily than adults. Proceedings of the National Academy of Sciences, 102(35), 12639-12643.

Harmon-Jones, C., Bastian, B., & Harmon-Jones, E. (2016). The discrete emotions questionnaire: A new tool for measuring state self-reported emotions. PLoS One, 11(8), e0159915.

Harmon-Jones, E., Harmon-Jones, C., Abramson, L., & Peterson, C. K. (2009). PANAS positive activation is associated with anger. Emotion, 9(2), 183-196.

Harmon-Jones, C., Schmeichel, B. J., Mennitt, E., & Harmon-Jones, E. (2011). The expression of determination: Similarities between anger and approach-related positive affect. Journal of Personality and Social Psychology, 100(1), 172-181.

Harré, R. (1986). The social constructionist viewpoint. In R. Harré (Ed.), The social construction of emotions (pp. 2–14). Oxford, UK: Blackwell.

Harrer, G., & Harrer, H. (1977). Music, emotion, and autonomic function. In M. Critchley & R.A. Henson (Eds.), Music and the brain: Studies in the neurology of music (pp. 202-216). London: William Heinemann Medical Books.

Hauser, D. J., Preston, S. D., & Stansfield, R. B. (2014). Altruism in the wild: When affiliative motives to help positive people overtake empathic motives to help the distressed. Journal of Experimental Psychology: General, 143(3), 1295-1305.

Heatherton, T. F., Striepe, M., & Wittenberg, L. (1998). Emotional distress and disinhibited eating: The role of the self. Personality and Social Psychology Bulletin, 24(3), 301-313.

Herz, R. S. (1998). An examination of objective and subjective measures of experience associated to odors, music and paintings. Empirical Studies of the Arts, 16(2), 137-152.

Hess, U. (2014). Anger is a positive emotion. In Gerrod Parrott (Ed.) The Positive Side of Negative Emotions. Guilford Press, pp. 55-75.

Hevner, K. (1935). The affective character of the major and minor modes in music. The American Journal of Psychology, 47(1), 103-118.

Hevner, K. (1936). Experimental studies of the elements of expression in music. The American Journal of Psychology, 48(2), 248-268.

Hevner, K. (1937). The affective value of pitch and tempo in music. The American Journal of Psychology, 49(4), 621-630.

Holbrook, M. B., & Anand, P. (1992). The effects of situation, sequence, and features on perceptual and affective responses to product designs: The case of aesthetic consumption. Empirical Studies of the Arts, 10(1), 19-31.

Hong, Y., Chau, C. J., & Horner, A. (2017). An analysis of low-arousal piano music ratings to uncover what makes calm and sad music so difficult to distinguish in music emotion recognition. Journal of the Audio Engineering Society, 65(4), 304-320.

Horn, K., & Huron, D. (2015). On the changing use of the major and minor modes 1750-1900. Music Theory Online, 21(1).

Hoshino, E. (1996). The feeling of musical mode and its emotional character in a melody. Psychology of Music, 24(1), 29-46.

Hunter, P. G., Schellenberg, E. G., & Griffith, A. T. (2011). Misery loves company: Mood-congruent emotional responding to music. Emotion, 11(5), 1068-1072.

Hunter, P. G., Schellenberg, E. G., & Schimmack, U. (2010). Feelings and perceptions of happiness and sadness induced by music: Similarities, differences, and mixed emotions. Psychology of Aesthetics, Creativity, and the Arts, 4(1), 47-56.

Hunter, P. G., Schellenberg, G. E., & Schimmack, U. (2008). Mixed affective responses to music with conflicting cues. Cognition & Emotion, 22(2), 327-352.

Huq, A., Bello, J. P., & Rowe, R. (2010). Automated music emotion recognition: A systematic evaluation. Journal of New Music Research, 39(3), 227-244.

Huron, D. (2002). A six-component theory of auditory-evoked emotion. In Proceedings of the 7th International Conference on Music Perception and Cognition (pp. 673-676). Sydney, Australia.

Huron, D. (2006). Sweet anticipation: Music and the psychology of expectation. Cambridge, MA: MIT Press.

Huron, D. (2008). A comparison of average pitch height and interval size in major- and minor-key themes: Evidence consistent with affect-related pitch prosody. Empirical Musicology Review, 3(2), 59-63.

Huron, D. (2009). Aesthetics. In S. Hallam, I. Cross, & M. Thaut (Eds.) Oxford handbook of music psychology (pp. 151-159). New York: Oxford University Press.

Huron, D. (2015). Affect induction through musical sounds: An ethological perspective. Philosophical Transactions of the Royal Society, B. 370(20140098).

Huron, D. (2015). Cues and signals: An ethological approach to music-related emotion. Signata: Annales des sémiotiques/Annals of Semiotics, 6, 331-351.

Huron, D. (in preparation). The Science of Sad Sounds.

Huron, D., Anderson, N., & Shanahan, D. (2014). You can't play sad music on a banjo: Acoustic factors in the judgment of instrument capacity to convey sadness. Empirical Musicology Review, 9(1), 29-41.

Huron, D., & Margulis, E. H. (2010). Musical expectancy and thrills. In P. N. Juslin & J. A. Sloboda (Eds.), Series in affective science. Handbook of music and emotion: Theory, research, applications (pp. 575-604). New York, NY, US: Oxford University Press.

Huron, D., & Shanahan, D. (2013). Eyebrow movements and vocal pitch height: Evidence consistent with an ethological signal. Journal of the Acoustical Society of America, 133(5), 2947-2952.

Huron, D., & Warrenburg, L.A. (2017). Sadness versus grief: Has research on musical ‘sadness’ conflated two different affective states? Paper presented at the biannual national conference of the Society for Music Perception and Cognition, San Diego, CA.

Huron, D., Dahl, S., & Johnson, R. (2009). Facial expression and vocal pitch height: Evidence of an intermodal association. Empirical Musicology Review, 4(3), 93-100.

Huron, D., Kinney, D., & Precoda, K. (2006). Influence of pitch height on the perception of submissiveness and threat in musical passages. Empirical Musicology Review, 1(3), 170-177.

Husain, G., Thompson, W. F., & Schellenberg, E. G. (2002). Effects of musical tempo and mode on arousal, mood, and spatial abilities. Music Perception, 20(2), 151-171.

Ilie, G., & Thompson, W. F. (2006). A comparison of acoustic cues in music and speech for three dimensions of affect. Music Perception, 23(4), 319-329.

Iwaki, T., Hayashi, M., & Hori, T. (1997). Changes in alpha band EEG activity in the frontal area after stimulation with music of different affective content. Perceptual and Motor Skills, 84(2), 515-526.

Iwanaga, M., & Tsukamoto, M. (1997). Effects of excitative and sedative music on subjective and physiological relaxation. Perceptual and Motor Skills, 85(1), 287-296.

Iwanaga, M., & Tsukamoto, M. (1998). Preference for musical tempo involving systematic variations of presented tempi for known and unknown musical excerpts. Perceptual and Motor Skills, 86(1), 31-41.

Iwanaga, M., Kobayashi, A., & Kawasaki, C. (2005). Heart rate variability with repetitive exposure to music. Biological Psychology, 70(1), 61-66.

Izard, C. E. (1992). Basic emotions, relations among emotions, and emotion-cognition relations. Psychological Review, 99(3), 561-565.

Izard, C. E. (1993). Four systems for emotion activation: Cognitive and noncognitive processes. Psychological Review, 100(1), 68–90.

Izard, C. E. (2007). Basic emotions, natural kinds, emotion schemas, and a new paradigm. Perspectives on Psychological Science, 2(3), 260-280.

James, W. (1884). What is an emotion? Mind, 9(34), 188-205.

Janata, P., Tomic, S. T., & Haberman, J. M. (2012). Sensorimotor coupling in music and the psychology of the groove. Journal of Experimental Psychology: General, 141(1), 54-75.

Janata, P., Tomic, S. T., & Rakowski, S. K. (2007). Characterisation of music-evoked autobiographical memories. Memory, 15(8), 845-860.

Jansens, S., Bloothooft, G., & de Krom, G. (1997). Perception and of emotions in singing. In Proceedings of the Fifth European Conference on Speech Communication and Technology: Vol. IV. Eurospeech 97 (pp. 2155–2158). Rhodes, Greece: European Speech Communication Association.

Järvinen, A., Dering, B., Neumann, D., Ng, R., Crivelli, D., Grichanik, M., Korenberg, J. R., & Bellugi, U. (2012). Sensitivity of the autonomic nervous system to visual and auditory affect across social and non-social domains in Williams syndrome. Frontiers in Psychology, 3, 343.

Järvinen, A., Ng, R., Crivelli, D., Neumann, D., Arnold, A. J., Woo‐VonHoogenstyn, N., Lai, P., Trauner, D., & Bellugi, U. (2016). Social functioning and autonomic nervous system sensitivity across vocal and musical emotion in Williams syndrome and autism spectrum disorder. Developmental Psychobiology, 58(1), 17-26.

Ji, J., & Maren, S. (2007). Hippocampal involvement in contextual modulation of fear extinction. Hippocampus, 17(9), 749-758.

John, O. P., & Srivastava, S. (1999). The Big-Five trait taxonomy: History, measurement, and theoretical perspectives. In L. A. Pervin & O. P. John (Eds.), Handbook of personality: Theory and research, Vol.2 (pp. 102–138). New York: Guilford Press.

Johnsen, E. L., Tranel, D., Lutgendorf, S., & Adolphs, R. (2009). A neuroanatomical dissociation for emotion induced by music. International Journal of Psychophysiology, 72(1), 24-33.

Juslin, P. N. (1997). Can results from studies of perceived expression in musical performance be generalized across response formats? Psychomusicology, 16(1-2), 77-101.

Juslin, P. N. (1997). Emotional communication in music performance: A functionalist perspective and some data. Music Perception, 14(4), 383-418.

Juslin, P. N. (1997). Perceived emotional expression in synthesized performances of a short melody: Capturing the listener’s judgment policy. Musicae Scientiae, 1(2), 225-256.

Juslin, P. N. (2000). Cue utilization in communication of emotion in music performance: Relating performance to perception. Journal of Experimental Psychology: Human Perception and Performance, 26(6), 1797–1813.

Juslin, P. N. (2013). From everyday emotions to aesthetic emotions: Towards a unified theory of musical emotions. Physics of Life Reviews, 10(3), 235-266.

Juslin, P. N. (2013). What does music express? Basic emotions and beyond. Frontiers in Psychology, 4, 596.

Juslin, P. N., & Laukka, P. (2000). Improving emotional communication in music performance through cognitive feedback. Musicae Scientiae, 4(2), 151-183.

Juslin, P. N., & Laukka, P. (2003). Communication of emotions in vocal expression and music performance: Different channels, same code? Psychological Bulletin, 129(5), 770-814.

Juslin, P. N., & Laukka, P. (2004). Expression, perception, and induction of musical emotions: A review and a questionnaire study of everyday listening. Journal of New Music Research, 33(3), 217-238.

Juslin, P. N., & Madison, G. (1999). The role of timing patterns in recognition of emotional expression from musical performance. Music Perception, 17(2), 197-221.

Juslin, P. N., & Sloboda, J. (Eds.). (2011). Handbook of music and emotion: Theory, research, applications. Oxford University Press.

Juslin, P. N., Friberg, A., & Bresin, R. (2002). Toward a computational model of expression in music performance: The GERM model. Musicae Scientiae, Special Issue 2001–2002, 63-122.

Juslin, P. N., Harmat, L., & Eerola, T. (2014). What makes music emotionally significant? Exploring the underlying mechanisms. Psychology of Music, 42(4), 599-623.

Juslin, P. N., Liljeström, S., Västfjäll, D., Barradas, G., & Silva, A. (2008). An experience sampling study of emotional reactions to music: Listener, music, and situation. Emotion, 8(5), 668-683.

Kabuto, M., Kageyama, T., & Nitta, H. (1993). EEG power spectrum changes due to listening to pleasant musics and their relation to relaxation effects. Nippon Eiseigaku Zasshi (Japanese Journal of Hygiene), 48(4), 807-818.

Kallinen, K. (2005). Emotional ratings of music excerpts in the western art music repertoire and their self-organization in the Kohonen neural network. Psychology of Music, 33(4), 373-393.

Kallinen, K., & Ravaja, N. (2004). Emotion-related effects of speech rate and rising vs. falling melody during audio news: The moderating influence of personality. Personality and Individual Differences, 37(2), 275-288.

Kallinen, K., & Ravaja, N. (2006). Emotion perceived and emotion felt: Same and different. Musicae Scientiae, 10(2), 191-213.

Kamenetsky, S. B., Hill, D. S., & Trehub, S. E. (1997). Effect of tempo and dynamics on the perception of emotion in music. Psychology of Music, 25(2), 149-160.

Kaminska, Z., & Woolf, J. (2000). Melodic line and emotion: Cooke’s theory revisited. Psychology of Music, 28(2), 133-153.

Kantor-Martynuska, J., & Bigand, E. (2013). Individual differences in granularity of the affective responses to music. Polish Psychological Bulletin, 44(4), 399-408.

Kantor-Martynuska, J., & Horabik, J. (2015). Granularity of emotional responses to music: The effect of musical expertise. Psychology of Aesthetics, Creativity, and the Arts, 9(3), 235-247.

Kastner, M. P., & Crowder, R. G. (1990). Perception of the major/minor distinction: IV. Emotional connotations in young children. Music Perception, 8(2), 189-202.

Kawakami, A., & Katahira, K. (2015). Influence of trait empathy on the emotion evoked by sad music and on the preference for it. Frontiers in Psychology, 6, 1541-1550.

Kawakami, A., Furukawa, K., Katahira, K., & Okanoya, K. (2013). Sad music induces pleasant emotion. Frontiers in Psychology, 4, 311-326.

Keltner, D., & Haidt, J. (1999). Social functions of emotions at four levels of analysis. Cognition & Emotion, 13(5), 505-521.

Kenealy, P. (1988). Validation of a music mood induction procedure: Some preliminary findings. Cognition & Emotion, 2(1), 41-48.

Khalfa, S., Delbe, C., Bigand, E., Reynaud, E., Chauvel, P., & Liégeois-Chauvel, C. (2008). Positive and negative music recognition reveals a specialization of mesio-temporal structures in epileptic patients. Music Perception, 25(4), 295-302.

Khalfa, S., Roy, M., Rainville, P., Dalla Bella, S., & Peretz, I. (2008). Role of tempo entrainment in psychophysiological differentiation of happy and sad music? International Journal of Psychophysiology, 68(1), 17-26.

Khalfa, S., Schon, D., Anton, J., & Liégeois-Chauvel, C. (2005). Brain regions involved in the recognition of happiness and sadness in music. Neuroreport, 16(18), 1981-1984.

King, S. C., & Meiselman, H. L. (2010). Development of a method to measure consumer emotions associated with foods. Food Quality and Preference, 21(2), 168-177.

Kinsella, G., Prior, M., & Jones, V. (1990). Judgement of mood in music following right hemisphere damage. Archives of Clinical Neuropsychology, 5(4), 359-371.

Kivy, P. (1990). Music alone: Philosophical reflections on the purely musical experience. Ithaca: Cornell University Press.

Kleinsmith, A. L., Friedman, R. S., & Neill, W. T. (2016). Exploring the impact of final ritardandi on evaluative responses to cadential closure. Psychomusicology: Music, Mind, and Brain, 26(4), 346-357.

Koelsch, S., Fritz, T., Cramon, D. Y., Müller, K., & Friederici, A. D. (2006). Investigating emotion with music: An fMRI study. Human Brain Mapping, 27(3), 239-250.

Koelsch, S., Jacobs, A. M., Menninghaus, W., Liebal, K., Klann-Delius, G., von Scheve, C., & Gebauer, G. (2015). The quartet theory of human emotions: An integrative and neurofunctional model. Physics of Life Reviews, 13, 1-27.

Koelsch, S., Kilches, S., Steinbeis, N., & Schelinski, S. (2008). Effects of unexpected chords and of performer’s expression on brain responses and electrodermal activity. PLoS One, 3(7), e2631.

Koelstra, S., Mühl, C., Soleymani, M., Lee, J. S., Yazdani, A., Ebrahimi, T., Pun, T., Nijholt, A., & Patras, I. (2012). DEAP: A database for emotion analysis; using physiological signals. IEEE Transactions on Affective Computing, 3(1), 18-31.

Koelstra, S., Yazdani, A., Soleymani, M., Mühl, C., Lee, J. S., Nijholt, A., Pun, T., Ebrahimi, T., & Patras, I. (2010, August). Single trial classification of EEG and peripheral physiological signals for recognition of emotions induced by music videos. In International Conference on Brain Informatics (pp. 89-100). Springer, Berlin, Heidelberg.

Kohn, D., & Eitan, Z. (2016). Moving music: Correspondences of musical parameters and movement dimensions in children’s motion and verbal responses. Music Perception: An Interdisciplinary Journal, 34(1), 40-55.

Konečni, V. J. (2008). Does music induce emotion? A theoretical and methodological analysis. Psychology of Aesthetics, Creativity, and the Arts, 2(2), 115-129.

Konečni, V. J., Brown, A., & Wanic, R. A. (2008). Comparative effects of music and recalled life-events on emotional state. Psychology of Music, 36(3), 289-308.

Konečni, V. J., Wanic, R. A., & Brown, A. (2007). Emotional and aesthetic antecedents and consequences of music-induced thrills. The American Journal of Psychology, 120(4), 619-643.

Kornreich, C., Brevers, D., Canivet, D., Ermer, E., Naranjo, C., Constant, E., Verbanck, P., Campanella, S., & Noël, X. (2013). Impaired processing of emotion in music, faces and voices supports a generalized emotional decoding deficit in alcoholism. Addiction, 108(1), 80-88.

Korsakova-Kreyn, M., & Dowling, W. J. (2014). Emotional processing in music: Study in affective responses to tonal modulation in controlled harmonic progressions and real music. Psychomusicology: Music, Mind, and Brain, 24(1), 4-20.

Kottler, J. A. (1996). The language of tears. Jossey-Bass.

Kövecses, Z. (2000). Metaphor and emotion: Language, culture, and body in human feeling. Cambridge: Cambridge University Press.

Kövecses, Z. (2000). The scope of metaphor. In A. Barcelona (Ed.), Metaphor and metonymy at the crossroads: A cognitive perspective (pp. 79-92), Walter de Gruyter.

Kraepelin, E. (1921). Manic depressive insanity and paranoia. The Journal of Nervous and Mental Disease, 53(4), 345-346.

Kreutz, G., Bongard, S., Rohrmann, S., Hodapp, V., & Grebe, D. (2004). Effects of singing or listening on secretory immunoglobulin A, cortisol, and emotional state. Journal of Behavioral Medicine, 27(6), 623-635.

Kreutz, G., Ott, U., Teichmann, D., Osawa, P., & Vaitl, D. (2008). Using music to induce emotions: Influences of musical preference and absorption. Psychology of Music, 36(1), 101-126.

Krueger, J., & Szanto, T. (2016). Extended emotions. Philosophy Compass, 11(12), 863-878.

Krumhansl, C. L. (1997). An exploratory study of musical emotions and psychophysiology. Canadian Journal of Experimental Psychology/Revue canadienne de psychologie expérimentale, 51(4), 336-352.

Krumhansl, C. L. (1998). Topic in music: An empirical study of memorability, openness, and emotion in Mozart’s String Quintet in C Major and Beethoven’s String Quartet in A Minor. Music Perception, 16(1), 119-134.

Kuo, F. F., Chiang, M. F., Shan, M. K., & Lee, S. Y. (2005, November). Emotion-based music recommendation by association discovery from film music. In Proceedings of the 13th annual ACM International Conference on Multimedia (pp. 507-510).

Ladinig, O., & Schellenberg, E. G. (2012). Liking unfamiliar music: Effects of felt emotion and individual differences. Psychology of Aesthetics, Creativity, and the Arts, 6(2), 146-154.

Lakoff, G., & Johnson, M. (1980). Metaphors we live by. Chicago: University of Chicago Press.

Lakoff, G., & Johnson, M. (1980). The metaphorical structure of the human conceptual system. Cognitive Science, 4(2), 195-208.

Lange, C. G. (1885). The mechanism of the emotions. The Classical Psychologists, 672-684.

Larsen, J. T., & McGraw, A. P. (2011). Further evidence for mixed emotions. Journal of Personality and Social Psychology, 100(6), 1095-1110.

Lartillot, O., & Toiviainen, P. (2007, September). A Matlab toolbox for musical feature extraction from audio. In Proceedings of the 10th International Conference on Digital Audio Effects (DAFx-07) (pp. 237-244), Bordeaux, .

Laukka, P. (2005). Categorical perception of vocal emotion expressions. Emotion, 5(3), 277-295.

Laukka, P., & Gabrielsson, A. (2000). Emotional expression in drumming performance. Psychology of Music, 28(2), 181-189.

Laukka, P., Eerola, T., Thingujam, N. S., Yamasaki, T., & Beller, G. (2013). Universal and culture-specific factors in the recognition and performance of musical affect expressions. Emotion, 13(3), 434-449.

Lazarus, R. S. (1966). Psychological stress and the coping process. New York, NY: McGraw-Hill.

Lazarus, R. S. (1991). Emotion and adaptation. New York, NY: Oxford University Press.

LeDoux, J. E., & Brown, R. (2017). A higher-order theory of emotional consciousness. Proceedings of the National Academy of Sciences, 114(10), E2016-E2025.

LeDoux, J. E., & Hofmann, S. G. (2018). The subjective experience of emotion: A fearful view. Current Opinion in Behavioral Sciences, 19, 67-72.

Leman, M., & Maes, P. J. (2014). Music perception and embodied music cognition. In L. Shapiro (Ed.), The Routledge handbook of embodied cognition (pp. 81-89). Routledge.

Lenton, S. R., & Martin, P. R. (1991). The contribution of music vs. instructions in the musical mood induction procedure. Behaviour Research and Therapy, 29(6), 623-625.

Lerner, J. S., & Keltner, D. (2000). Beyond valence: Toward a model of emotion-specific influences on judgement and choice. Cognition & Emotion, 14(4), 473-493.

Levenson, R. W. (1994). Human emotions: A functional view. In P. Ekman & R. J. Davidson (Eds.), The nature of emotion: Fundamental questions (pp. 123–126). New York, NY: Oxford University Press.

Levenson, R. W. (2011). Basic emotion questions. Emotion Review, 3(4), 379-386.

Leventhal, H. (1984). A perceptual-motor theory of emotion. Advances in Experimental Social Psychology, 17, 117–182.

Leveque, Y., Teyssier, P., Bouchet, P., Bigand, E., Caclin, A., & Tillmann, B. (2018). Musical emotions in congenital amusia: Impaired recognition, but preserved emotional intensity. Neuropsychology, 32(7), 880-894.

Levinson, J. (2014). Suffering art gladly: The paradox of negative emotion in art. Springer.

Levitin, D. J., & Menon, V. (2003). Musical structure is processed in “language” areas of the brain: A possible role for Brodmann Area 47 in temporal coherence. Neuroimage, 20(4), 2142-2152.

Libby, L. K., & Eibach, R. P. (2011). Visual perspective in mental imagery: An integrative model explaining its function in judgment, emotion, and self-insight. In M. P. Zanna and J. M. Olson (Eds.), Advances in experimental social psychology, Vol. 44 (pp. 185-245). San Diego: Academic Press.

Libby, L. K., Valenti, G., Hines, K. A., & Eibach, R. P. (2014). Using imagery perspective to access two distinct forms of self-knowledge: Associative evaluations versus propositional self-beliefs. Journal of Experimental Psychology: General, 143(2), 492-497.

Lin, Y. P., Duann, J. R., Chen, J. H., & Jung, T. P. (2010). Electroencephalographic dynamics of musical emotion perception revealed by independent spectral components. Neuroreport, 21(6), 410-415.

Lin, Y. P., Wang, C. H., Jung, T. P., Wu, T. L., Jeng, S. K., Duann, J. R., & Chen, J. H. (2010). EEG-based emotion recognition in music listening. IEEE Transactions on Biomedical Engineering, 57(7), 1798-1806.

Lin, Y. P., Wang, C. H., Wu, T. L., Jeng, S. K., & Chen, J. H. (2007, October). Multilayer perceptron for EEG signal classification during listening to emotional music. In TENCON 2007-2007 IEEE Region 10 Conference (pp. 1-3). IEEE.

Lin, Y. P., Wang, C. H., Wu, T. L., Jeng, S. K., & Chen, J. H. (2008, October). Support vector machine for EEG signal classification during listening to emotional music. In IEEE 10th workshop on Multimedia Signal Processing (pp. 127-130).

Lindström, E., Juslin, P. N., Bresin, R., & Williamon, A. (2003). “Expressivity comes from within your soul”: A questionnaire study of music students’ perspectives on expressivity. Research Studies in Music Education, 20(1), 23-47.

Liu, C. C., Yang, Y. H., Wu, P. H., & Chen, H. H. (2006, October). Detecting and Classifying Emotion in Popular Music. In 9th joint international conference on information sciences (JCIS-06). Atlantis Press.

Livingstone, S. R., & Brown, A. R. (2005, November). Dynamic response: Real-time adaptation for music emotion. In Proceedings of the second Australasian Conference on Interactive Entertainment (pp. 105-111). Creativity & Cognition Studios Press.

Livingstone, S. R., Muhlberger, R., Brown, A. R., & Thompson, W. F. (2010). Changing musical emotion: A computational rule system for modifying score and performance. Computer Music Journal, 34(1), 41-64.

Logeswaran, N., & Bhattacharya, J. (2009). Crossmodal transfer of emotion by music. Neuroscience Letters, 455(2), 129-133.

Lorenz, K. (1970). Studies in animal and human behaviour, Vol. 1. London, UK: Methuen.

Luck, G., Saarikallio, S., Burger, B., Thompson, M. R., & Toiviainen, P. (2010). Effects of the Big Five and musical genre on music-induced movement. Journal of Research in Personality, 44(6), 714-720.

Luck, G., Toiviainen, P., Erkkilä, J., Lartillot, O., Riikkilä, K., Mäkelä, A., Pyhäluoto, K., Raine, H., Varkila, L., & Värri, J. (2008). Modelling the relationships between emotional responses to, and musical content of, music therapy improvisations. Psychology of Music, 36(1), 25-45.

Lundqvist, L. O., Carlsson, F., Hilmersson, P., & Juslin, P. N. (2009). Emotional responses to music: Experience, expression, and physiology. Psychology of Music, 37(1), 61-90.

Lutz, C., & White, G. M. (1986). The anthropology of emotions. Annual Review of Anthropology, 15(1), 405-436.

Ma, W., & Thompson, W. F. (2015). Human emotions track changes in the acoustic environment. Proceedings of the National Academy of Sciences, 112(47), 14563-14568.

MacDorman, K. F., Ough, S., & Ho, C. C. (2007). Automatic emotion prediction of song excerpts: Index construction, algorithm design, and empirical comparison. Journal of New Music Research, 36(4), 281-299.

Madsen, C. K. (1998). Emotion versus tension in Haydn’s Symphony no. 104 as measured by the two-dimensional continuous response digital interface. Journal of Research in Music Education, 46(4), 546-554.

Marin, M. M., Gingras, B., & Bhattacharya, J. (2012). Crossmodal transfer of arousal, but not pleasantness, from the musical to the visual domain. Emotion, 12(3), 618-631.

Martin, M. (1990). On the induction of mood. Clinical Psychology Review, 10(6), 669-697.

Martin, M. A., & Metha, A. (1997). Recall of early childhood memories through musical mood induction. The Arts in Psychotherapy, 24(5), 447-454.

Mas-Herrero, E., Zatorre, R. J., Rodriguez-Fornells, A., & Marco-Pallarés, J. (2014). Dissociation between musical and monetary reward responses in specific musical anhedonia. Current Biology, 24(6), 699-704.

Mathews, A., & Bradley, B. (1983). Mood and the self-reference bias in recall. Behaviour Research and Therapy, 21(3), 233-239.

Mattila, A. S., & Wirtz, J. (2001). Congruency of scent and music as a driver of in-store evaluations and behavior. Journal of Retailing, 77(2), 273-289.

Mayer, J. D., Gayle, M., Meehan, M. E., & Haarman, A. K. (1990). Towards better specification of the mood-congruency effect in recall. Journal of Experimental Social Psychology, 26(6), 465-480.

Maynard Smith, J., & Harper, D. (2003). Animal signals. Oxford, UK: Oxford University Press.

Mazo, M. (1994). Lament made visible: A study of paramusical elements in Russian lament. In B. Yung & J. Lam (Eds.), Themes and variations: Writings on music in honor of Rulan Chao Pian (pp. 164-211). Harvard University Press.

McAdams, S., Vines, B. W., Vieillard, S., Smith, B. K., & Reynolds, R. (2004). Influences of large-scale form on continuous ratings in response to a contemporary piece in a live concert setting. Music Perception: An Interdisciplinary Journal, 22(2), 297-350.

McFarland, R. A. (1985). Relationship of skin temperature changes to the emotions accompanying music. Biofeedback and Self-regulation, 10(3), 255-267.

Menon, V., & Levitin, D. J. (2005). The rewards of music listening: Response and physiological connectivity of the mesolimbic system. Neuroimage, 28(1), 175-184

Mesquita, B. (2010). Emoting: A contextualized process. In B. Mesquita, L. F. Barrett & E. Smith (Eds.), The mind in context (pp. 83–104). New York, NY: Guilford.

Mesquita, B., Boiger, M., & De Leersnyder, J. (2016). The cultural construction of emotions. Current Opinion in Psychology, 8, 31-36.

Meyer, L. B. (1956). Emotion and meaning in music. University of Chicago Press.

Meyer, R. K., Palmer, C., & Mazo, M. (1998). Affective and coherence responses to Russian laments. Music Perception, 16(1), 135-150.

Miller, W. I. (1997). The anatomy of disgust. Cambridge, MA: Harvard University Press.

Mitterschiffthaler, M. T., Fu, C. H., Dalton, J. A., Andrew, C. M., & Williams, S. C. (2007). A functional MRI study of happy and sad affective states induced by classical music. Human Brain Mapping, 28(11), 1150-1162.

Miu, A. C., & Balteş, F. R. (2012). Empathy manipulation impacts music-induced emotions: A psychophysiological study on opera. PloS One, 7(1), e30618.

Mobbs, D., Petrovic, P., Marchant, J. L., Hassabis, D., Weiskopf, N., Seymour, B., Dolan, R. J., & Frith, C. D. (2007). When fear is near: Threat imminence elicits prefrontal-periaqueductal gray shifts in humans. Science, 317(5841), 1079-1083.

Moll, J., de Oliveira-Souza, R., Eslinger, P. J., Bramati, I. E., Mourão-Miranda, J., Andreiuolo, P. A., & Pessoa, L. (2002). The neural correlates of moral sensitivity: A functional magnetic resonance imaging investigation of basic and moral emotions. Journal of Neuroscience, 22(7), 2730-2736.

Molnar-Szakacs, I., & Overy, K. (2006). Music and mirror neurons: From motion to ‘e’motion. Social Cognitive and Affective Neuroscience, 1(3), 235-241.

Moors, A., Ellsworth, P. C., Scherer, K., & Frijda, N. (2013). Appraisal theories of emotion: State of the art and future developments. Emotion Review, 5(2), 119-124.

Mori, K., & Iwanaga, M. (2014). Pleasure generated by sadness: Effect of sad lyrics on the emotions induced by happy music. Psychology of Music, 42(5), 643-652.

Morrow, J., & Nolen-Hoeksema, S. (1990). Effects of responses to depression on the remediation of depressive affect. Journal of Personality and Social Psychology, 58(3), 519-527.

Morton, E. S. (1977). On the occurrence and significance of motivation-structural rules in some bird and mammal sounds. The American Naturalist, 111(981), 855-869.

Morton, E. S. (1994). Sound and its role in non-human vertebrate communication. In L. Hinton, J. Nicholls, & J. J. Ohala (Eds.), Sound symbolism (pp. 348–365). Cambridge: Cambridge University Press.

Morton, J. B., & Trehub, S. E. (2007). Children’s expression of emotion in song. Psychology of Music, 35(4), 629-639.

Nagel, F., Kopiez, R., Grewe, O., & Altenmüller, E. (2007). EMuJoy: Software for continuous measurement of perceived emotions in music. Behavior Research Methods, 39(2), 283-290.

Nagel, F., Kopiez, R., Grewe, O., & Altenmüller, E. (2008). Psychoacoustical correlates of musically induced chills. Musicae Scientiae, 12(1), 101-113.

Naranjo, C., Kornreich, C., Campanella, S., Noël, X., Vandriette, Y., Gillain, B., de Longueville, X., Delatte, B., Verbanck, P., & Constant, E. (2011). Major depression is associated with impaired processing of emotion in music as well as in facial and vocal stimuli. Journal of Affective Disorders, 128(3), 243-251.

Nater, U. M., Abbruzzese, E., Krebs, M., & Ehlert, U. (2006). Sex differences in emotional and psychophysiological responses to musical stimuli. International Journal of Psychophysiology, 62(2), 300–308.

Nawrot, E. (2003). The perception of emotional expression in music: Evidence from infants, children and adults. Psychology of Music, 31(1), 75-92.

Nesse, R. M. (1991). What good is feeling bad? The evolutionary benefits of psychic pain. The Sciences, 31(6), 30-37.

Nettl, B. (1983). The study of ethnomusicology. Chicago, IL: University of Illinois Press.

Nisbett, R. E., & Schachter, S. (1966). Cognitive manipulation of pain. Journal of Experimental Social Psychology, 2(3), 227-236.

O’Kearney, R., & Dadds, M. (2004). Developmental and gender differences in the language for emotions across the adolescent years. Cognition & Emotion, 18(7), 913-938.

Ochsner, K. N., & Gross, J. J. (2005). The cognitive control of emotion. Trends in Cognitive Sciences, 9(5), 242-249.

Ollen, J. E. (2006). A criterion-related validity test of selected indicators of musical sophistication using expert ratings (Doctoral dissertation). The Ohio State University, Columbus, OH.

Olsen, K. N., Dean, R. T., Stevens, C. J., & Bailes, F. (2015). Both acoustic intensity and loudness contribute to time-series models of perceived affect in response to music. Psychomusicology: Music, Mind, and Brain, 25(2), 124-137.

Omar, R., Hailstone, J. C., Warren, J. E., Crutch, S. J., & Warren, J. D. (2010). The cognitive organisation of music knowledge: A clinical analysis. Brain, 133(4), 1200-1213.

Omar, R., Henley, S. M. D., Bartlett, J. W., Hailstone, J. C., Gordon, E., Sauter, D. A., Frost, C., Scott, S. K., & Warren, J. D. (2011). The structural neuroanatomy of music emotion recognition: Evidence from frontotemporal lobar degeneration. Neuroimage, 56(3), 1814-1821.

Orini, M., Bailon, R., Enk, R., Koelsch, S., Mainardi, L., & Laguna, P. (2010). A method for continuously assessing the autonomic response to music-induced emotions through HRV analysis. Medical & Biological Engineering & Computing, 48(5), 423-433.

Orio, N., & Canazza, S. (1998). How are expressive deviations related to musical instruments? Analysis of sax and piano performances of “How High the Moon” theme. In Proceedings of the XII Colloquium on Musical Informatics, Udine: Associazione di Informatica Musicale Italiana (pp. 75-78).

Paddison, M. (1996). Adorno, modernism, and mass culture: Essays on critical theory and music. London: Kahn & Averill.

Panda, R., Malheiro, R. M., & Paiva, R. P. (2018). Novel audio features for music emotion recognition. IEEE Transactions on Affective Computing.

Panksepp, J. (1995). The emotional sources of “chills” induced by music. Music Perception: An Interdisciplinary Journal, 13(2), 171-207.

Panksepp, J. (1998). Affective neuroscience: The foundations of human and animal emotions. New York, NY: Oxford University Press.

Panksepp, J. (2016). The psycho-neurology of cross-species affective/social neuroscience: Understanding animal affective states as a guide to development of novel psychiatric treatments. In M. Wöhr & S. Krach (Eds.), Social Behavior from Rodents to Humans (pp. 109-125). Springer.

Panksepp, J., & Watt, D. (2011). What is basic about basic emotions? Lasting lessons from affective neuroscience. Emotion Review, 3(4), 387-396.

Parke, R., Chew, E., & Kyriakakis, C. (2007). Quantitative and visual analysis of the impact of music on perceived emotion of film. Computers in Entertainment (CIE), 5(3), 1-60.

Parrott, A. C. (1982). Effect of paintings and music, both alone and in combination, on emotional judgments. Perceptual and Motor Skills, 54(2), 635-641.

Parrott, W. G. (1991). Mood induction and instructions to sustain moods: A test of the subject compliance hypothesis of mood congruent memory. Cognition & Emotion, 5(1), 41-52.

Parrott, W. G., & Sabini, J. (1990). Mood and memory under natural conditions: Evidence for mood incongruent recall. Journal of Personality and Social Psychology, 59(2), 321-336.

Paul, B., & Huron, D. (2010). An association between breaking voice and grief-related lyrics in country music. Empirical Musicology Review, 5, 27-35.

Pearce, M. T., & Halpern, A. R. (2015). Age-related patterns in emotions evoked by music. Psychology of Aesthetics, Creativity, and the Arts, 9(3), 248-253.

Pecher, C., Lemercier, C., & Cellier, J. M. (2009). Emotions drive attention: Effects on driver’s behaviour. Safety Science, 47(9), 1254-1259.

Peltola, H. R. (2017). Sharing experienced sadness: Negotiating meanings of self-defined sad music within a group interview session. Psychology of Music, 45(1), 82-98.

Peltola, H. R., & Eerola, T. (2016). Fifty shades of blue: Classification of music-evoked sadness. Musicae Scientiae, 20(1), 84-102.

Peretz, I., & Gagnon, L. (1999). Dissociation between recognition and emotional judgements for melodies. Neurocase, 5(1), 21-30.

Peretz, I., Blood, A. J., Penhune, V., & Zatorre, R. (2001). Cortical deafness to dissonance. Brain, 124(5), 928-940.

Peretz, I., Gagnon, L., & Bouchard, B. (1998). Music and emotion: Perceptual determinants, immediacy, and isolation after brain damage. Cognition, 68(2), 111-141.

Pignatiello, M. F., Camp, C. J., & Rasar, L. A. (1986). Musical mood induction: An alternative to the Velten technique. Journal of Abnormal Psychology, 95(3), 295-297.

Post, O., & Huron, D. (2009). Western classical music in the minor mode is slower (except in the Romantic period). Empirical Musicology Review 4(1), 2-10.

Punkanen, M., Eerola, T., & Erkkilä, J. (2011). Biased emotional recognition in depression: Perception of emotions in music by depressed patients. Journal of Affective Disorders, 130(1-2), 118-126.

Quinto, L., Thompson, W. F., & Taylor, A. (2014). The contributions of compositional structure and performance expression to the communication of emotion in music. Psychology of Music, 42(4), 503-524.

Ramos, D., Bueno, J. L. O., & Bigand, E. (2011). Manipulating Greek musical modes and tempo affects perceived musical emotion in musicians and nonmusicians. Brazilian Journal of Medical and Biological Research, 44(2), 165-172.

Rentfrow, P. J., & Gosling, S. D. (2003). The do re mi’s of everyday life: The structure and personality correlates of music preferences. Journal of Personality and Social Psychology, 84(6), 1236-1254.

Rentfrow, P. J., Goldberg, L. R., & Levitin, D. J. (2011). The structure of musical preferences: A five-factor model. Journal of Personality and Social Psychology, 100(6), 1139-1157.

Resnicow, J. E., Salovey, P., & Repp, B. H. (2004). Is recognition of emotion in music performance an aspect of emotional intelligence? Music Perception, 22(1), 145-158.

Rickard, N. S. (2004). Intense emotional responses to music: A test of the physiological arousal hypothesis. Psychology of Music, 32(4), 371-388.

Ridgeway, D., Waters, E., & Kuczaj, S. A. (1985). Acquisition of emotion-descriptive language: Receptive and productive vocabulary norms for ages 18 months to 6 years. Developmental Psychology, 21(5), 901-908.

Ritossa, D. A., & Rickard, N. S. (2004). The relative utility of ‘pleasantness’ and ‘liking’ dimensions in predicting the emotions expressed by music. Psychology of Music, 32(1), 5-22.

Robazza, C., Macaluso, C., & D’urso, V. (1994). Emotional reactions to music by gender, age, and expertise. Perceptual and Motor Skills, 79(2), 939-944.

Rodger, H., Vizioli, L., Ouyang, X., & Caldara, R. (2015). Mapping the development of facial expression recognition. Developmental Science, 18(6), 926-939.

Rosenblatt, P. C., Walsh, R. P., & Jackson, D. A. (1976). Grief and mourning in cross-cultural perspective. New Haven: Yale University, Human Relations Area Files.

Roy, M., Mailhot, J. P., Gosselin, N., Paquette, S., & Peretz, I. (2009). Modulation of the startle reflex by pleasant and unpleasant music. International Journal of Psychophysiology, 71(1), 37-42.

Roy, M., Peretz, I., & Rainville, P. (2008). Emotional valence contributes to music-induced analgesia. Pain, 134(1-2), 140-147.

Rozin, A., Rozin, P., & Goldberg, E. (2004). The feeling of music past: How listeners remember musical affect. Music Perception, 22(1), 15-39.

Russell, J. A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39(6), 1161-1178.

Russell, J. A. (2003). Core affect and the psychological construction of emotion. Psychological Review, 110(1), 145-172.

Russell, J. A., Fernandez-Dols, J. M., Manstead, A. S. R., & Wellenkamp, J. C. (Eds.). (1995). Everyday conceptions of emotion: An introduction to the psychology, anthropology, and linguistics of emotion. Dordrecht, The Netherlands: Kluwer Academic.

Russell, J. A., Weiss, A., & Mendelsohn, G. A. (1989). The Affect Grid: A single-item scale of pleasure and arousal. Journal of Personality and Social Psychology, 57(3), 493-502.

Saarikallio, S., & Erkkilä, J. (2007). The role of music in adolescents’ mood regulation. Psychology of Music, 35(1), 88-109.

Saarimäki, H., Gotsopoulos, A., Jääskeläinen, I. P., Lampinen, J., Vuilleumier, P., Hari, R., Sams, M., & Nummenmaa, L. (2015). Discrete neural signatures of basic emotions. Cerebral Cortex, 26(6), 2563-2573.

Salice, A., Høffding, S., & Gallagher, S. (2017). Putting plural self-awareness into practice: The phenomenology of expert musicianship. Topoi, 38(1), 197-209.

Salimpoor, V. N., Benovoy, M., Larcher, K., Dagher, A., & Zatorre, R. J. (2011). Anatomically distinct dopamine release during anticipation and experience of peak emotion to music. Nature Neuroscience, 14(2), 257-262.

Salimpoor, V. N., Benovoy, M., Longo, G., Cooperstock, J. R., & Zatorre, R. J. (2009). The rewarding aspects of music listening are related to degree of emotional arousal. PloS One, 4(10), e7487.

Sammler, D., Grigutsch, M., Fritz, T., & Koelsch, S. (2007). Music and emotion: Electrophysiological correlates of the processing of pleasant and unpleasant music. Psychophysiology, 44(2), 293-304.

Samson, S., Dellacherie, D., & Platel, H. (2009). Emotional power of music in patients with memory disorders. Annals of the New York Academy of Sciences, 1169(1), 245-255.

Sandstrom, G. M., & Russo, F. A. (2013). Absorption in music: Development of a scale to identify individuals with strong emotional responses to music. Psychology of Music, 41(2), 216-228.

Särkämö, T., Ripolles, P., Vepsäläinen, H., Autti, T., Silvennoinen, H. M., Salli, E., Laitinen, S., Forsblom, A., Soinila, S., & Rodríguez-Fornells, A. (2014). Structural changes induced by daily music listening in the recovering brain after middle cerebral artery stroke: A voxel-based morphometry study. Frontiers in Human Neuroscience, 8, 245.

Sauve, S. A., Sayed, A., Dean, R. T., & Pearce, M. T. (2018). Effects of pitch and timing expectancy on musical emotion. Psychomusicology: Music, Mind, and Brain, 28(1), 17-39.

Saver, J. L., & Rabin, J. (1997). The neural substrates of religious experience. Journal of Neuropsychiatry and Clinical Neurosciences, 9, 498-510.

Schachter, S., & Singer, J. (1962). Cognitive, social, and physiological determinants of emotional state. Psychological Review, 69(5), 379-399.

Schäfer, T., Sedlmeier, P., Städtler, C., & Huron, D. (2013). The psychological functions of music listening. Frontiers in Psychology, 4, 511.

Schellenberg, E. G., Bigand, E., Poulin‐Charronnat, B., Garnier, C., & Stevens, C. (2005). Children’s implicit knowledge of harmony in Western music. Developmental Science, 8(6), 551-566.

Schellenberg, E. G., Krysciak, A. M., & Campbell, R. J. (2000). Perceiving emotion in melody: Interactive effects of pitch and rhythm. Music Perception, 18(2), 155-171.

Schellenberg, E. G., Nakata, T., Hunter, P. G., & Tamoto, S. (2007). Exposure to music and cognitive performance: Tests of children and adults. Psychology of Music, 35(1), 5-19.

Schellenberg, E. G., Peretz, I., & Vieillard, S. (2008). Liking for happy- and sad-sounding music: Effects of exposure. Cognition & Emotion, 22(2), 218-237.

Scherer, K. R. (1984). On the nature and function of emotion: A component process approach. In K. R. Scherer & P. E. Ekman (Eds.), Approaches to emotion (pp. 89-126). Dordrecht, The Netherlands: Martinus Nijhoff.

Scherer, K. R. (1995). Expression of emotion in voice and music. Journal of Voice, 9(3), 235-248.

Scherer, K. R. (2004). Feelings integrate the central representation of appraisal-driven response organization in emotion. In A. S. R. Manstead, N. Frijda, & A. Fischer (Eds.), Feelings and emotions: The Amsterdam symposium (pp. 136-157). Cambridge University Press.

Scherer, K. R. (2004). Which emotions can be induced by music? What are the underlying mechanisms? And how can we measure them? Journal of New Music Research, 33(3), 239-251.

Scherer, K. R., & Coutinho, E. (2013). How music creates emotion: A multifactorial process approach. In T. Cochrane, B. Fantini, & K. R. Scherer (Eds.), The emotional power of music: Multidisciplinary perspectives on musical arousal, expression, and social control (pp. 121-146). Oxford, UK: Oxford University Press.

Scherer, K. R., & Oshinsky, J. S. (1977). Cue utilization in emotion attribution from auditory stimuli. Motivation and Emotion, 1(4), 331–346.

Scherer, K. R., & Zentner, M. R. (2001). Emotional effects of music: Production rules. In P. N. Juslin & J. A. Sloboda (Eds.), Music and emotion: Theory and research (pp. 361-392). Oxford, UK: Oxford University Press.

Scherer, K. R., Zentner, M. R., & Schacht, A. (2002). Emotional states generated by music: An exploratory study of music experts. Musicae Scientiae, 5(1_suppl), 149-172.

Schiavio, A., van der Schyff, D., Céspedes Guevara, J., & Reybrouck, M. (2016). Enacting musical emotions: Enaction, dynamic systems and the embodied mind. Phenomenology and the Cognitive Sciences, 16(5), 785-809.

Schimmack, U., & Grob, A. (2000). Dimensional models of core affect: A quantitative comparison by means of structural equation modeling. European Journal of Personality, 14(4), 325-345.

Schmidt, L. A., & Trainor, L. J. (2001). Frontal brain electrical activity (EEG) distinguishes valence and intensity of musical emotions. Cognition & Emotion, 15(4), 487-500.

Schubert, E. (2001). Continuous measurement of self-report emotional response to music. In P. N. Juslin & J. A. Sloboda (Eds.), Music and emotion: Theory and research (pp. 393-414). New York, NY: Oxford University Press.

Schubert, E. (2003). Update of Hevner’s adjective checklist. Perceptual and Motor Skills, 96(3_suppl), 1117-1122.

Schubert, E. (2004). Modeling perceived emotion with continuous musical features. Music Perception: An Interdisciplinary Journal, 21(4), 561-585.

Schubert, E. (2007). Locus of emotion: The effect of task order and age on emotion perceived and emotion felt in response to music. Journal of Music Therapy, 44(4), 344-368.

Schubert, E. (2010). Affective, evaluative, and collative responses to hated and loved music. Psychology of Aesthetics, Creativity, and the Arts, 4(1), 36-46.

Schubert, E. (2013). Emotion felt by the listener and expressed by the music: Literature review and theoretical perspectives. Frontiers in Psychology, 4, 837.

Schuller, B., Dorfner, J., & Rigoll, G. (2010). Determination of nonprototypical valence and arousal in popular music: Features and performances. EURASIP Journal on Audio, Speech, and Music Processing, 2010, 1–20.

Schutz, M., Huron, D., Keeton, K., & Loewer, G. (2008). The happy xylophone: Acoustics affordances restrict an emotional palate. Empirical Musicology Review, 3(3), 126-135.

Senju, M., & Ohgushi, K. (1987). How are the player’s ideas conveyed to the audience? Music Perception: An Interdisciplinary Journal, 4(4), 311-323.

Shanahan, D., & Huron, D. (2014). Heroes and villains: The relationship between pitch tessitura and sociability of operatic characters. Empirical Musicology Review, 9(2), 141-153.

Shapiro, K. L., & Lim, A. (1989). The impact of anxiety on visual attention to central and peripheral events. Behaviour Research and Therapy, 27(4), 345-351.

Shatin, L. (1970). Alteration of mood via music: A study of the vectoring effect. The Journal of Psychology, 75(1), 81-86.

Sherman, M. (1928). Emotional character of the singing voice. Journal of Experimental Psychology, 11(6), 495-497.

Shute, N. (2014, March 6). Strange but true: Music doesn’t make some people happy. Retrieved from https://www.npr.org/sections/health-shots/2014/03/06/286786987/for-some-people-music-truly-doesnt-make-them-happy

Siegel, E. H., Sands, M. K., Van den Noortgate, W., Condon, P., Chang, Y., Dy, J., Quigley, K. S., & Barrett, L. F. (2018). Emotion fingerprints or emotion populations? A meta-analytic investigation of autonomic features of emotion categories. Psychological Bulletin, 144(4), 343-393.

Siegwart, H., & Scherer, K. (1995). Acoustic concomitants of emotional expression in operatic singing: The case of Lucia in Ardi gli incensi. Journal of Voice, 9(3), 249-260.

Silk, J. B., & House, B. R. (2011). Evolutionary foundations of human prosocial sentiments. Proceedings of the National Academy of Sciences, 108(Supplement 2), 10910-10917.

Silk, J. B., Kaldor, E., & Boyd, R. (2000). Cheap talk when interests conflict. Animal Behaviour, 59(2), 423-432.

Silvia, P. J., & Abele, A. E. (2002). Can positive affect induce self-focused attention? Methodological and measurement issues. Cognition & Emotion, 16(6), 845-853.

Sloboda, J. A. (1991). Music structure and emotional response: Some empirical findings. Psychology of Music, 19(2), 110-120.

Sloboda, J. A., & Lehmann, A. C. (2001). Tracking performance correlates of changes in perceived intensity of emotion during different interpretations of a Chopin piano prelude. Music Perception: An Interdisciplinary Journal, 19(1), 87-120.

Smallman, R., Becker, B., & Roese, N. J. (2014). Preferences for expressing preferences: People prefer finer evaluative distinctions for liked than disliked objects. Journal of Experimental Social Psychology, 52, 25-31.

Solomon, R. C. (2003). Not passion’s slave: Emotions and choice. New York, NY: Oxford University Press.

Song, Y., Dixon, S., & Pearce, M. (2012, October). Evaluation of musical features for emotion classification. In Proceedings of the 13th International Conference on Music Information Retrieval, ISMIR (pp. 523-528). Porto, Portugal.

Song, Y., Dixon, S., Pearce, M. T., & Halpern, A. R. (2016). Perceived and induced emotion responses to popular music: Categorical and dimensional models. Music Perception: An Interdisciplinary Journal, 33(4), 472-492.

Spackman, M. P., Fujiki, M., Brinton, B., Nelson, D., & Allen, J. (2005). The ability of children with language impairment to recognize emotion conveyed by facial expression and music. Communication Disorders Quarterly, 26(3), 131-143.

Speck, J. A., Schmidt, E. M., Morton, B. G., & Kim, Y. E. (2011). A comparative study of collaborative vs. traditional musical mood annotation. In Proceedings of the 12th International Conference on Music Information Retrieval, ISMIR (pp. 549-554). Miami, Florida.

Stachó, L., Saarikallio, S., Van Zijl, A., Huotilainen, M., & Toiviainen, P. (2013). Perception of emotional content in musical performances by 3-7-year-old children. Musicae Scientiae, 17(4), 495-512.

Steinbeis, N., Koelsch, S., & Sloboda, J. A. (2006). The role of harmonic expectancy violations in musical emotions: Evidence from subjective, physiological, and neural responses. Journal of Cognitive Neuroscience, 18(8), 1380-1393.

Stephens, C. L., Christie, I. C., & Friedman, B. H. (2010). Autonomic specificity of basic emotions: Evidence from pattern classification and cluster analysis. Biological Psychology, 84(3), 463-473.

Stöber, J. (1997). Trait anxiety and pessimistic appraisal of risk and chance. Personality and Individual Differences, 22(4), 465-476.

Stratton, V. N., & Zalanowski, A. H. (1989). The effects of music and paintings on mood. Journal of Music Therapy, 26(1), 30-41.

Stratton, V. N., & Zalanowski, A. H. (1991). The effects of music and cognition on mood. Psychology of Music, 19(2), 121-127.

Stratton, V. N., & Zalanowski, A. H. (1994). Affective impact of music vs. lyrics. Empirical Studies of the Arts, 12(2), 173-184.

Susino, M., & Schubert, E. (2017). Cross-cultural anger communication in music: Towards a stereotype theory of emotion in music. Musicae Scientiae, 21(1), 60-74.

Sutcliffe, R., Rendell, P. G., Henry, J. D., Bailey, P. E., & Ruffman, T. (2017). Music to my ears: Age-related decline in musical and facial emotion recognition. Psychology and Aging, 32(8), 698-709.

Sweeney, J. C., & Wyber, F. (2002). The role of cognitions and emotions in the music-approach-avoidance behavior relationship. Journal of Services Marketing, 16(1), 51-69.

Tan, S. L., Spackman, M. P., & Bezdek, M. A. (2007). Viewers’ interpretations of film characters’ emotions: Effects of presenting film music before or after a character is shown. Music Perception, 25(2), 135-152.

Taruffi, L., & Koelsch, S. (2014). The paradox of music-evoked sadness: An online survey. PLoS One, 9(10), e110490.

Thayer, R. E. (1989). The biology of mood and arousal. New York, NY: Oxford University Press.

Thoma, M. V., La Marca, R., Brönnimann, R., Finkel, L., Ehlert, U., & Nater, U. M. (2013). The effect of music on the human stress response. PloS One, 8(8), e70156.

Thoma, M., Ryf, S., Ehlert, U., & Nater, U. (2006). Regulation of emotions by listening to music in emotional situations. In M. Baroni, A. R. Addessi, R. Caterina, & M. Costa (Eds.), Proceedings of the 9th International Conference on Music Perception and Cognition (pp. 1088-1093). Bologna, Italy.

Thompson, W. F., & Balkwill, L. L. (2006). Decoding speech prosody in five languages. Semiotica, 2006(158), 407-424.

Thompson, W. F., & Balkwill, L. L. (2008). Cross-cultural similarities and differences. In P. N. Juslin & J. Sloboda (Eds.), Handbook of music and emotion: Theory, research, applications. Oxford University Press.

Thompson, W. F., & Robitaille, B. (1992). Can composers express emotions through music? Empirical Studies of the Arts, 10(1), 79-89.

Thompson, W. F., Schellenberg, E. G., & Husain, G. (2001). Arousal, mood, and the Mozart effect. Psychological Science, 12(3), 248-251.

Timmers, R., Marolt, M., Camurri, A., & Volpe, G. (2006). Listeners’ emotional engagement with performances of a Scriabin étude: An explorative case study. Psychology of Music, 34(4), 481-510.

Tomkins, S. (1962). Affect imagery consciousness, Volume I: The positive affects. Springer.

Tomkins, S. S. (1980). Affect as amplification: Some modifications in theory. In R. Plutchik & H. Kellerman (Eds.), Theories of emotion (pp. 141-164). New York, NY: Academic Press.

Tottenham, N., Tanaka, J. W., Leon, A. C., McCarry, T., Nurse, M., Hare, T. A., Marcus, D. J., Westerlund, A., Casey, B. J., & Nelson, C. (2009). The NimStim set of facial expressions: Judgments from untrained research participants. Psychiatry Research, 168(3), 242-249.

Tracy, J. L., & Randles, D. (2011). Four models of basic emotions: A review of Ekman and Cordaro, Izard, Levenson, and Panksepp and Watt. Emotion Review, 3(4), 397-405.

Trevor, C. (2018). Screaming strings and looming drones: Ethological perspectives of music for terror and suspense in films. Paper presented at the Music and the Moving Image Conference, New York, NY.

Trochidis, K., & Bigand, E. (2013). Investigation of the effect of mode and tempo on emotional responses to music using EEG power asymmetry. Journal of Psychophysiology, 27(3), 142–147.

Tsang, C. D., Trainor, L. J., Santesso, D. L., Tasker, S. L., & Schmidt, L. A. (2001). Frontal EEG responses as a function of affective musical features. Annals of the New York Academy of Sciences, 930(1), 439-442.

Turner, B., & Huron, D. (2008). A comparison of dynamics in major- and minor-key works. Empirical Musicology Review, 3(2), 64-68.

Tuuri, K., & Eerola, T. (2012). Formulating a revised taxonomy for modes of listening. Journal of New Music Research, 41(2), 137-152.

Unkelbach, C. (2012). Positivity advantages in social information processing. Social and Personality Psychology Compass, 6(1), 83-94.

Unkelbach, C., Fiedler, K., Bayer, M., Stegmüller, M., & Danner, D. (2008). Why positive information is processed faster: The density hypothesis. Journal of Personality and Social Psychology, 95(1), 36-49.

Urban, G. (1988). Ritual wailing in Amerindian Brazil. American Anthropologist, 90(2), 385-400.

Vaish, A., Grossmann, T., & Woodward, A. (2008). Not all emotions are created equal: The negativity bias in social-emotional development. Psychological Bulletin, 134(3), 383-403.

van den Tol, A. J. M. (2016). The appeal of sad music: A brief overview of current directions in research on motivations for listening to sad music. The Arts in Psychotherapy, 49, 44-49.

van der Schyff, D. (2013). Emotion, embodied mind, and the therapeutic aspects of musical experience in everyday life. Approaches: Music Therapy and Special Music Education, 5(1), 50-58.

Van Goozen, S., & Frijda, N. H. (1993). Emotion words used in six European countries. European Journal of Social Psychology, 23(1), 89-95.

Västfjäll, D. (2002). Emotion induction through music: A review of the musical mood induction procedure. Musicae Scientiae, 5(1_suppl), 173-211.

Velten Jr., E. (1968). A laboratory task for induction of mood states. Behaviour Research and Therapy, 6(4), 473-482.

Vempala, N. N., & Russo, F. A. (2012, June). Predicting emotion from music audio features using neural networks. In Proceedings of the 9th International Symposium on Computer Music Modeling and Retrieval (CMMR) (pp. 336-343). London, UK.

Vieillard, S., Peretz, I., Gosselin, N., Khalfa, S., Gagnon, L., & Bouchard, B. (2008). Happy, sad, scary and peaceful musical excerpts for research on emotions. Cognition & Emotion, 22(4), 720-752.

Vines, B. W., Krumhansl, C. L., Wanderley, M. M., & Levitin, D. J. (2006). Cross-modal interactions in the perception of musical performance. Cognition, 101(1), 80-113.

Vines, B. W., Krumhansl, C. L., Wanderley, M. M., Dalca, I. M., & Levitin, D. J. (2005). Dimensions of emotion in expressive musical performance. Annals of the New York Academy of Sciences, 1060(1), 462-466.

Vines, B. W., Krumhansl, C. L., Wanderley, M. M., Dalca, I. M., & Levitin, D. J. (2011). Music to my eyes: Cross-modal interactions in the perception of emotions in musical performance. Cognition, 118(2), 157-170.

Vingerhoets, A. J., & Cornelius, R. R. (Eds.). (2012). Adult crying: A biopsychosocial approach. Routledge.

Vohs, K. D., Mead, N. L., & Goode, M. R. (2006). The psychological consequences of money. Science, 314(5802), 1154-1156.

Vuoskoski, J. K., & Eerola, T. (2011). Measuring music-induced emotion: A comparison of emotion models, personality biases, and intensity of experiences. Musicae Scientiae, 15(2), 159-173.

Vuoskoski, J. K., & Eerola, T. (2011). The role of mood and personality in the perception of emotions represented by music. Cortex, 47(9), 1099-1106.

Vuoskoski, J. K., & Eerola, T. (2012). Can sad music really make you sad? Indirect measures of affective states induced by music and autobiographical memories. Psychology of Aesthetics, Creativity, and the Arts, 6(3), 204-213.

Vuoskoski, J. K., & Eerola, T. (2015). Extramusical information contributes to emotions induced by music. Psychology of Music, 43(2), 262-274.

Vuoskoski, J. K., & Eerola, T. (2017). The pleasure evoked by sad music is mediated by feelings of being moved. Frontiers in Psychology, 8, 439.

Vuoskoski, J. K., Gatti, E., Spence, C., & Clarke, E. F. (2016). Do visual cues intensify the emotional responses evoked by musical performance? A psychophysiological investigation. Psychomusicology: Music, Mind, and Brain, 26(2), 179-188.

Vuoskoski, J. K., Thompson, W. F., McIlwain, D., & Eerola, T. (2012). Who enjoys listening to sad music and why? Music Perception: An Interdisciplinary Journal, 29(3), 311-317.

Wanderley, M. M. (2001, April). Quantitative analysis of non-obvious performer gestures. In International Gesture Workshop (pp. 241-253). Berlin, Heidelberg: Springer.

Wang, J. C., Yang, Y. H., Wang, H. M., & Jeng, S. K., (2012, October). The acoustic emotion Gaussians model for emotion-based music annotation and retrieval. In Proceedings of the 20th ACM International Conference on Multimedia (pp. 89-98). ACM.

Warrenburg, L. A. (2016). Examining contrasting expressive content in first and second musical themes (Master’s thesis). The Ohio State University, Columbus, OH.

Warrenburg, L. A. (accepted). Comparing musical emotion models and psychological emotion models. Psychomusicology: Music, Mind, and Brain.

Warrenburg, L. A. (in preparation). Mapping a similarity space of musical affects.

Warrenburg, L. A. (submitted). Assembling melancholic and grieving musical passages. DOI 10.17605/OSF.IO/VMR7J

Warrenburg, L. A. (submitted). Choosing the right tune: A review of music stimuli used in emotion research. DOI 10.17605/OSF.IO/ZP6WA

Warrenburg, L. A. (submitted). Corpus of Previously-Used Musical Stimuli (PUMS).

Warrenburg, L. A. (submitted). Melancholy and grief music express different affective states. DOI 10.17605/OSF.IO/AGBZT

Warrenburg, L. A. (submitted). People experience different emotions from melancholic and grieving music. DOI 10.17605/OSF.IO/3ET9W

Warrenburg, L. A. (submitted). Redefining sad music: Music’s structure suggests at least two sad states. DOI 10.17605/OSF.IO/DPYWT

Warrenburg, L. A. (submitted). The PUMS database: A corpus of previously-used musical stimuli in 306 studies of music and emotion. DOI 10.17605/OSF.IO/NVQE8

Warrenburg, L. A., & Huron, D. (2019). Tests of contrasting expressive content between first and second musical themes. Journal of New Music Research, 48(1), 21-35.

Warrenburg, L. A., & Way, B. (2018). Acetaminophen blunts emotional responses to music. In Proceedings of the 15th International Conference for Music Perception and Cognition (pp. 483-488). Montréal, Canada.

Watt, R. J., & Ash, R. L. (1998). A psychological investigation of meaning in music. Musicae Scientiae, 2(1), 33-53.

Wedin, L. (1972). A multidimensional study of perceptual‐emotional qualities in music. Scandinavian Journal of Psychology, 13(1), 241-257.

Wehrle, T., Kaiser, S., Schmidt, S., & Scherer, K. R. (2000). Studying the dynamics of emotional expression using synthesized facial muscle movements. Journal of Personality and Social Psychology, 78(1), 105-119.

Weinstein, D., Launay, J., Pearce, E., Dunbar, R. I., & Stewart, L. (2016). Group music performance causes elevated pain thresholds and social bonding in small and large groups of singers. Evolution and Human Behavior: Official Journal of the Human Behavior and Evolution Society, 37(2), 152-158.

Wellman, H. M., Harris, P. L., Banerjee, M., & Sinclair, A. (1995). Early understanding of emotion: Evidence from natural language. Cognition & Emotion, 9(2-3), 117–149.

Wells, A., & Hakanen, E. (1991). The emotional use of popular music by adolescents. Journalism Quarterly, 68(3), 445-454.

Weninger, F., Eyben, F., Schuller, B. W., Mortillaro, M., & Scherer, K. R. (2013). On the acoustics of emotion in audio: What speech, music, and sound have in common. Frontiers in Psychology, 4, 292.

Wenzlaff, R. M., Wegner, D. M., & Klein, S. B. (1991). The role of thought suppression in the bonding of thought and mood. Journal of Personality and Social Psychology, 60(4), 500-508.

Widen, S. C. (2013). Children’s interpretation of facial expressions: The long path from valence-based to specific discrete categories. Emotion Review, 5(1), 72-77.

Widen, S. C., & Russell, J. A. (2008). Children acquire emotion categories gradually. Cognitive Development, 23(2), 291-312.

Willner, P., Benton, D., Brown, E., Cheeta, S., Davies, G., Morgan, J., & Morgan, M. (1998). “Depression” increases “craving” for sweet rewards in animal and human models of depression and craving. Psychopharmacology, 136(3), 272-283.

Winkielman, P., & Berridge, K. C. (2004). Unconscious emotion. Current Directions in Psychological Science, 13(3), 120-123.

Witvliet, C. V., & Vrana, S. R. (2007). Play it again Sam: Repeated exposure to emotionally evocative music polarises liking and smiling responses, and influences other affective reports, facial EMG, and heart rate. Cognition & Emotion, 21(1), 3-25.

Wood, J. V., Saltzberg, J. A., & Goldsamt, L. A. (1990). Does affect induce self-focused attention? Journal of Personality and Social Psychology, 58(5), 899-908.

Wu, T. L., & Jeng, S. K. (2008, January). Probabilistic estimation of a novel music emotion model. In International Conference on Multimedia Modeling (pp. 487-497). Berlin, Heidelberg: Springer.

Wundt, W. M. (1907). Outlines of psychology. Leipzig, Germany: W. Engelmann.

Yalch, R., & Spangenberg, E. (1990). Effects of store music on shopping behavior. Journal of Consumer Marketing, 7(2), 55-63.

Yang, Y. H., & Chen, H. H. (2011). Prediction of the distribution of perceived music emotions using discrete samples. IEEE Transactions on Audio, Speech, and Language Processing, 19(7), 2184-2196.

Yang, Y. H., & Chen, H. H. (2011). Ranking-based emotion recognition for music organization and retrieval. IEEE Transactions on Audio, Speech, and Language Processing, 19(4), 762-774.

Yang, Y. H., Lin, Y. C., Su, Y. F., & Chen, H. H. (2007, July). Music emotion classification: A regression approach. In 2007 IEEE International Conference on Multimedia and Expo (pp. 208-211). IEEE.

Yang, Y. H., Liu, C. C., & Chen, H. H. (2006, October). Music emotion classification: A fuzzy approach. In Proceedings of the 14th ACM International Conference on Multimedia (pp. 81-84). ACM.

Yang, Y. H., Su, Y. F., Lin, Y. C., & Chen, H. H. (2007, September). Music emotion recognition: The role of individuality. In Proceedings of the International Workshop on Human-centered Multimedia (pp. 13-22). ACM.

Yim, G., Huron, D., & Chordia, P. (in preparation). The effect of “lower than normal” pitch collections on sadness judgments of unconventionally-tuned melodies.

Zentner, M., Grandjean, D., & Scherer, K. R. (2008). Emotions evoked by the sound of music: Differentiation, classification, and measurement. Emotion, 8(4), 494-521.

Appendix A: Additional Information about Chapter 4

The purpose of this Appendix is to provide additional information about the review of emotional stimuli used between 1928 and 2018. First, the Appendix provides a table detailing all emotional labels, as documented in the PUMS database, that researchers have given to musical stimuli. Additionally, a table is presented that catalogues the past studies from which other researchers gathered their emotional music stimuli. Next, the musical styles and genres that have been examined by music researchers are described. A table regarding the musical works used in studies utilizing the Musical Mood Induction Procedure (MMIP) is offered next. Subsequently, a comparison of the durations of stimuli in studies of induced and perceived emotion is presented. The next table describes which emotions were studied in perceived and induced methodologies. Finally, the biplot created by the correspondence analysis is shown, summarizing how different features of the music are related to the various emotion categories.

Term Number Term Number Term Number Comforting, Sad 2013 Scary 12 Relaxing 2 Happy 1614 Spiritual 12 Exaggerated 2 Anger 869 Pleasant, Joyful 10 Active; Agitated 1 Relaxed 775 Serene 10 Active; Alert 1 Chills, Pleasure 731 Calm 9 Agitated; Anger 1 Agitated; Fear 332 Content 7 Restless 1 Disappointment, , Chills 318 Frustration 7 Anger; Hate 1 Peace 172 Distress, Sad 7 Anger; Joy; Sad 1 Fear, Startle, Groove 148 Nervous 7 Arousal 1 Arousal; Negative , Negative Valence 119 Satisfaction 7 Valence 1 Positive Valence 116 Gratitude 7 Bored; Listless 1 Negative Valence, Low Bored; Arousal 100 Hate, Disgust 7 Unstimulating 1 Hope, Joy, Happy, Gloating, Neutral 94 Surprise, Excited 7 Chills; Happy 1 Positive Valence, High Arousal 93 Lonely 7 Chills; Sad 1 Negative Valence, High Arousal 90 Love 7 Depressed; Sad 1 Positive Valence, Low Arousal 89 7 Energizing 1 Tender 86 Pleasant; Trivial 7 Exalted 1

Fear; Pride, Threatening 56 Admiration 7 Excitative 1 Happy; Sad 53 Relief 7 Excited; Festive 1 , Joy 49 Anger 7 Expressive 1 Happy; Pleasant 46 Shame, Reproach 7 Angry/Agitated 1 Arousal; Potency; Valence 45 Surprise 6 Happy; Fear; Sad 1 Low Arousal 30 Exuberant 5 Irritation 1 Negative/Positive Valence, Negative Low/High Tension 30 Arousal 5 Joy; Pleasant 1 Negative/Positive Positive Valence, Low Energy 30 Active 4 Arousal 1 Positive Tension 30 Joy; Lively 4 Normal 1 Negative Valence, Low/High Depressed 27 Arousal 4 Nostalgia 1 Sad, Pleasant; Trivial; Beautiful 27 Unpleasant 4 Pain 1 Positive Valence, Arousal; Low/High Valence 21 Arousal 4 Peace; Relaxing 1 Pleasant; Lively 21 Sedated 4 Unpleasant 1 Solemn 20 Tension 4 Restrained 1 Lump in Throat; Tears 19 Agitated 3 Serene; Tranquil 1 Solemn; Anxiety 16 Anger; Fear 3 Unpleasant 1 Elated 15 Disgust 3 Sorrow 1 Unpleasant 14 Excited 3 Standard 1 Affection 12 Neutral; Sad 3 Stimulating 1 Humor 12 Amusement 2 Tranquil 1 Longing 12 Arousing 2 Unsettling 1

Table A.1. Complete list of emotion terms in the PUMS database

Source: Number of Stimuli

Albersnagel, 1998: 6
Altenmüller et al., 2002: 160
Bachorik et al., 2009: 288
Bigand et al., 2001: 24
Bigand et al., 2005: 89
Year End Charts: 8
Blood et al., 1999: 1
Bower & Mayer, 1989: 2
Darling, 1982: 9
Clark, 1983: 9
Clark & Teasdale, 1985: 7
Costa-Giomi et al., 2008: 2
Dalla Bella et al., 2001: 18
Dibben, 2004: 8
Dolgin & Adelson, 1990: 1
Eerola & Vuoskoski, 2011: 338
Essen Collection: 12
Filipic et al., 2010: 40
Flores-Gutiérrez, 2001: 2
Gentile, 1998: 10
Gosselin et al., 2005: 168
Gosselin et al., 2006: 18
Gomez & Danuser, 2004: 16
Grewe et al., 2007: 10
Hevner, 1935: 10
Hevner, 1946: 10
Hunter et al., 2008: 60
Iwanaga et al., 1996: 2
Iwanaga & Tsukamoto, 1997: 2
Juslin, 1997: 10
Kawakami et al., 2013: 2
Koelsch et al., 2006: 28
Kreutz et al., 2003: 10
Krumhansl, 1997: 3
Last.FM: 3024
Levitin & Menon, 2003: 16
Liégeois-Chauvel et al., 2014: 40
Magnatune.com: 800
Martin, 1990: 10
McFarland, 1985: 2
Nagel et al., 2007: 7
Nater et al., 2006: 3
Nyklicek et al., 1997: 25
Omar et al., 2010: 40
Panksepp, 1995: 1
Peretz et al., 1998: 248
Peretz et al., 2001: 42
Peretz et al., 2002: 21
Ramos et al., 1996: 1
Roy et al., 2008: 30
Schellenberg, 1996: 2
Schuller et al., 2010: 5296
Spackman et al., 2005: 4
Speck et al., 2011: 240
Smithsonian Collection of Classic Country Music, 1981: 21
Song et al., 2012: 80
Song et al., 2013: 80
Stratton & Zalanowski, 1991: 10
Thomson Music Index Demo: 108
Vieillard et al., 2008: 198
Vines et al., 2006: 1
Vuoskoski & Eerola, 2012: 3
Wanderley, 2002: 3
Wu & Jeng, 2006: 76
Wu & Jeng, 2008: 37
Yang & Chen, 2011: 60
Yang et al., 2006: 390
Yang et al., 2007: 60

Table A.2. List of “past studies” where experimenters gathered their stimuli. Note that not all studies labeled which experiment they used to choose their stimuli. The sources in italics were labeled as “experimenter/expert chosen.”

Style: Number of Stimuli

Popular Music: 10705
Western Art Music: 1910
Film Music: 1397
Chinese Music: 1241
MIDI: 766
Keyboard Instruments: 461
Electronic: 363
Rock: 289
Jazz: 193
Folk Music: 151
Natural Sounds: 136
Voice: 131
Guitar: 88
Rock & Pop: 69
Indian Music: 59
Soul: 58
Drum Beat: 48
Ambient: 42
Mafa: 40
/Soul: 36
Japanese Music: 34
Russian Music: 32
Hip-Hop: 28
Violin: 27
Country: 25
Flute: 24
World: 24
Bassoon: 20
[?]: 20
Saxophone: 20
Latin: 18
Children’s Music: 16
TV Programs: 16
Music: 15
Atonal: 13
Religious: 13
Heavy Metal: 12
Hymn: 12
[?]: 11
Rap & Hip-Hop: 11
Alternative: 10
Sentograph: 10
Pygmy Music: 8
R&B: 7
Traditional Music: 6
Gospel: 5
[?]: 5
Death Metal: 4
Foreign: 4
Indie Pop/Rock: 4
[?]: 3
Trumpet: 3
African Music: 2
Background Music: 2
Dance Music: 2
Minimal Music: 2
Post-Rock: 2
Randomly Generated: 2
Krygystani Music: 2
Navaho Indian Music: 2
Avant-garde: 1
Bossa Nova: 1
Computer Game Music: 1
Rock: 1
Gothic Rock: 1
House: 1
Industrial Rock: 1
Metal: 1
Modern Tonal: 1
Psychedelic Trance: 1
Romantic: 1
Single Tones: 1
Trance: 1

Table A.3. List of musical styles or genres in the PUMS database

Albersnagel, 1998 (Test if MMIP works; 7 minutes)
  Anxiety: Stravinsky, Rite of Spring
  Depressed: Sibelius, Swan of Tuonela
  Depressed: Dvorák, Ninth Symphony
  Elated: Delibes, Coppelia
  Neutral: Debussy, Prelude, L’Après-midi d’un faune

Clark & Teasdale, 1985 (Effect of mood on memory recall; 7 minutes)
  Depressed: Prokofiev, Russia Under the Mongolian Yoke, played half speed
  Elated: Delibes, Coppelia

Bouhuys et al., 1995 (Effect of mood on judgment of ambiguous facial expressions; 7 minutes)
  Depressed: Sibelius, Swan of Tuonela
  Elated: Delibes, Coppelia

Shapiro & Lim, 1989 (Attentional bias to affectively-neutral visual stimuli)
  Anxiety: Stravinsky, Rite of Spring
  Non-anxiety: Fauré, Ballad for Piano and Orchestra

Mathews & Bradley, 1983 (Vulnerability to depression and negative recall bias)
  Depressed: Prokofiev, Russia Under the Mongolian Yoke, played half speed

Mayer et al., 1990 (Mood congruency and instruction manipulations)
  Happy: Delibes, Coppelia
  Happy: Weisberg, The Good Life
  Happy: J.S. Bach, Brandenburg Concerto 2
  Happy: Mozart, Toy Symphony
  Sad: Prokofiev, Russia Under the Mongolian Yoke, played half speed
  Sad: Colombier, Emmanuel

Lenton & Martin, 1991 (Instructions and MMIP)
  Depressed: Prokofiev, Russia Under the Mongolian Yoke, played half speed
  Elated: Delibes, Coppelia

Clark et al., 2001 (Mood and risk taking and salivary cortisol; 5 minutes)
  Depressed: Prokofiev, Russia Under the Mongolian Yoke, played half speed
  Elated: Delibes, Coppelia

Martin & Metha, 1997 (MMIP and memory recall; 7 minutes)
  Sad: Albinoni, Adagio
  Sad: Beethoven, Symphony 3, Allegro Vivace
  Sad: Beethoven, Symphony 4, Adagio-Allegro Vivace
  Sad: Tchaikovsky, Fantasy Overture, Romeo and Juliet
  Neutral: Reich, Variations for Winds, Strings, and Keyboards
  Neutral: Debussy, La Mer, Dawn Until Noon on the Sea
  Happy: Vivaldi, Concerto No. 3, Allegro
  Happy: Tchaikovsky, Mazurka, Swan Lake
  Happy: Mozart, Eine Kleine Nachtmusik, Allegro
  Happy: Mozart, Flute Concerto in D Major, Allegro

Stöber, 1997 (Anxiety and risk)
  Anxiety: Stravinsky, Rite of Spring
  Neutral: Fauré, Ballad for Piano and Orchestra

Willner et al., 1998 (Depression and chocolate craving; 3 minutes)
  Depressed: Prokofiev, Russia Under the Mongolian Yoke, played half speed
  Elated: Delibes, Coppelia

Stratton & Zalanowski, 1989 (Ability of music and paintings to change mood; 3 minutes)
  Depressed: Beethoven, Symphony 3, Mvt 2
  Positive Affect: Copland, Appalachian Spring
  Neutral: Mozart, Symphony 50, Mvt 2

Wenzlaff et al., 1991 (Mood congruency and thought suppression; 9 minutes)
  Upbeat: Byrne, Beleza Tropical
  Upbeat: Laws, Brandenburg Concerto 3
  Somber: Prokofiev, Russia Under the Mongolian Yoke
  Somber: Jarrett, Spheres, Mvts 6 and 7
  Neutral: Adams, Common Tones in Simple Time

Parrott & Sabini, 1990 (Mood incongruent recall; 8 minutes or 15 minutes)
  Sad: Prokofiev, Russia Under the Mongolian Yoke, played half speed
  Happy: Delibes, Coppelia

Balch et al., 1999 (How valence/arousal affect mood-dependent memory; 6 minutes)
  High Pleasantness/High Arousal: Mozart, Symphony 41, Mvt 4
  High Pleasantness/High Arousal: Tchaikovsky, Mazurka, Swan Lake
  High Pleasantness/Low Arousal: J.S. Bach, Jesu, Joy of Man’s Desiring
  High Pleasantness/Low Arousal: Gluck, Orpheus and Eurydice
  Low Pleasantness/High Arousal: Mussorgsky, Night on Bald Mountain
  Low Pleasantness/High Arousal: Grieg, In the Hall of the Mountain King
  Low Pleasantness/Low Arousal: Beethoven, Symphony 3, Mvt 2
  Low Pleasantness/Low Arousal: Marcello, Oboe Concerto in D Minor, Adagio

Morrow & Nolen-Hoeksema, 1990 (Mood regulation for depression)
  Depressed: Barber, Adagio for Strings

Wood et al., 1990 (Affect and self-focused attention; 10 minutes)
  Sad: Prokofiev, Russia Under the Mongolian Yoke, played half speed
  Happy: Laws, Brandenburg Concerto 3
  Neutral: Chopin, Waltz 11 in G-Flat
  Neutral: Chopin, Waltz 12 in F Minor
  Neutral: Hedges, Aerial Boundaries

Eich & Metcalfe, 1989 (Mood-dependent memory for internal and external events; 45 minutes)
  Happy: Mozart, Eine Kleine Nachtmusik
  Happy: Mozart, Divertimento 136
  Sad: Albinoni, Adagio in G Minor
  Sad: Barber, Adagio pour Cordes

Heatherton et al., 1998 (Emotional distress and disinhibited eating; 8 minutes)
  Neutral: Adams, Chairman Dances, Common Tones in Simple Time
  Sad: Prokofiev, Russia Under the Mongolian Yoke, played half speed

Parrott, 1991 (Mood-congruent bias and subject compliance; 8 minutes)
  Happy: Delibes, Mazurka, Act 1, Coppelia
  Sad: Prokofiev, Russia Under the Mongolian Yoke, played half speed
  Control: Delibes, Coppelia, played half speed

Carter et al., 1995 (Music-induced mood in bulimic and healthy women; 7 minutes)
  Low Mood: Winston, Winter into Spring
  Low Mood: Bach, Concerto for Two Violins in D Minor
  Low Mood: Fogelberg, Same Old Lang Syne
  Low Mood: O’Connor, Nothing Compares 2 U
  Low Mood: Carpenter, Rainy Days and Mondays
  Low Mood: Dvorák, Symphony No. 9, Mvt 2; Largo
  Low Mood: Sibelius, Swan of Tuonela

Table A.4. Examples of stimuli used in Musical Mood Induction Procedure (MMIP) studies

Time Length | Both | Induced | Perceived
< 30 sec | 38 | 776 | 3178
30 sec - 1 min | 193 | 538 | 1579
1 - 2 min | 30 | 639 | 65
2 - 3 min | 1 | 102 | 38
3 - 4 min | 7 | 694 | 2
4 - 5 min | 4 | 5 | 2
5 - 6 min | 2 | 14 | 1
6 - 7 min | 1 | 41 | 1
7 - 8 min | 27 | 32 | 0
8 - 9 min | 0 | 11 | 0
9 - 10 min | 0 | 8 | 1
10 min + | 0 | 28 | 1

Table A.5. Comparison of induced-perceived stimuli with their duration in the PUMS database

Induced
  Chills, Pleasure: 731
  Sad: 666
  Happy: 347
  Chills: 311
  Negative Valence: 89
  Positive Valence: 86
  Negative Valence, Low Arousal: 80
  Positive Valence, High Arousal: 71
  Negative Valence, High Arousal: 69
  Positive Valence, Low Arousal: 69
  Neutral: 60
  Fear: 37
  Anger: 30
  Happy; Sad: 27
  Tender: 24
  Depressed: 22
  Pleasant: 21
  Lump in Throat; Tears: 19
  Arousal; Valence: 18
  Elated: 15
  Unpleasant: 14
  Scary: 12
  Calm: 9
  Joy: 9
  Peace: 9

Perceived
  Sad: 524
  Happy: 467
  Fear: 270
  Anger: 174
  Peace: 153
  Tender: 62
  Fear; Threatening: 56
  Arousal; Potency; Valence: 45
  Neutral: 34
  Low Arousal: 30
  Negative Tension: 30
  Negative Valence: 30
  Positive Energy: 30
  Positive Tension: 30
  Positive Valence: 30
  Joy: 27
  Sad, Beautiful: 27
  Lively: 21
  Solemn: 20
  Positive Valence, High Arousal: 17
  Negative Valence, High Arousal: 16
  Positive Valence, Low Arousal: 16
  Negative Valence, Low Arousal: 15
  Affection: 12
  Humor: 12

Both
  Sad: 36
  Happy: 33
  Anger: 23
  Relaxed: 20
  Fear: 12
  Happy; Sad: 8
  Peace: 7
  Negative Valence, High Arousal: 5
  Negative Valence, Low Arousal: 5
  Positive Valence, High Arousal: 5
  Positive Valence, Low Arousal: 4
  Excited: 3
  Joy: 3
  Sedated: 3
  Negative Valence, Low/High Arousal: 1
  Positive Valence, Low/High Arousal: 1

Table A.6. Top 25 emotions by condition (induced, perceived, both) in the PUMS database

[Correspondence analysis biplot. Axes: Dim1 (48.3%), horizontal; Dim2 (35.2%), vertical. Emotion categories plotted: Anger, Fear, Happy, Sad, Tender, Negative Valence, Positive Valence. Classification types plotted: Induced, Perceived, Other Operationalization, Experimenter/expert chosen, Previous studies, Popular Music, Western Art Music, Film Music, < 30 sec, > 30 sec, 2000s and Before, 2010s.]

Figure A.1. Biplot of the correspondence analysis summarizing seven emotion types (represented with circles) and twelve classification types
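For readers interested in how a biplot of this kind is computed, the following is a minimal sketch of a correspondence analysis written in Python. The contingency table, its counts, and all variable names below are illustrative assumptions for demonstration only; they are not the actual PUMS values used to produce Figure A.1.

    import numpy as np

    # Hypothetical contingency table: rows are emotion categories,
    # columns are classification types (invented counts, not PUMS data).
    N = np.array([
        [120.0,  40.0,  15.0],
        [ 35.0, 150.0,  20.0],
        [ 10.0,  25.0,  90.0],
    ])

    P = N / N.sum()        # correspondence matrix
    r = P.sum(axis=1)      # row masses
    c = P.sum(axis=0)      # column masses

    # Standardized residuals: D_r^(-1/2) (P - r c^T) D_c^(-1/2)
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))

    # The singular value decomposition yields the principal axes
    U, sigma, Vt = np.linalg.svd(S, full_matrices=False)

    # Principal coordinates of rows (emotions) and columns (classifications)
    row_coords = (U * sigma) / np.sqrt(r)[:, np.newaxis]
    col_coords = (Vt.T * sigma) / np.sqrt(c)[:, np.newaxis]

    # Proportion of total inertia captured by each dimension, analogous
    # to the percentages shown on the Dim1 and Dim2 axes of Figure A.1
    explained = sigma**2 / np.sum(sigma**2)
    print(explained)

Plotting the first two columns of row_coords and col_coords on a shared pair of axes produces the biplot; dedicated routines (for example, the correspondence analysis functions in R’s FactoMineR and factoextra packages, which appear to be the source of the figure’s “CA − Biplot” title) perform the same computation.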

Appendix B: Additional Information about Chapter 5

Recall that Chapter 5 described the ways that melancholic and grieving musical passages were collected. Three different methods were used to assemble the musical stimuli used in the ensuing chapters. First, untrained participants provided musical examples that they believed to represent or evoke crying (grief) or sadness (melancholy).

A separate group of participants was trained regarding facial expressions of melancholy/sadness and grief/crying. Once again, participants freely selected musical examples that they believed represented these emotions. Finally, a group of trained experts identified specific sections of musical works as melancholic or grieving. The results of each of these processes are described below. Every work that a participant associated with these “sad” emotions is listed in the ensuing tables. The labels given to specific passages (as melancholy, grief, neither, or both) by the three experts are also presented.

Song | Artist | Emotion Category
1-800-273-8255 | Logic, Alessia Cara, Khalid | Crying
A Face to Call Home | John Mayer | Crying
Adagio for Strings | Barber | Crying
Alfie | Dionne Warwick | Crying
Almost Lover | A Fine Frenzy | Crying
An American Elegy | Frank Ticheli | Crying
Angel | Sarah McLachlan | Crying
Annie’s Song | John Denver | Crying
Ave Maria | J.S. Bach | Crying
Carissa | Sun Kil Moon | Crying
Caruso | Pavarotti | Crying
Casimir Pulaski Day | Sufjan Stevens | Crying
Coffins | MisterWives | Crying
Come, Sweet Death | J.S. Bach | Crying
Come What May, Moulin Rouge | Moulin Rouge | Crying
Crewe and the Soldier | Patrick Doyle | Crying
Death with Dignity | Sufjan Stevens | Crying
Dust in the Wind | Kansas | Crying
Elegy for a Young America | Ronald LoPresti | Crying
Fix You | Coldplay | Crying
Flow, my tears | John Dowland | Crying
Flute Sonata, II. Sehr langsam | Hindemith | Crying
For Crying Out Loud | Meat Loaf | Crying
F**kin’ Perfect | P!nk | Crying
Gabriel’s Oboe, The Mission | Ennio Morricone | Crying
Georgia | Vance Joy | Crying
Ghost | House of Heroes | Crying
Gorgeous | X Ambassadors | Crying
Here Comes the Sun | The Beatles | Crying
Hero of War | Rise Against | Crying
His Daughter | Molly Kate Kestner | Crying
How to Disappear Completely | Radiohead | Crying
I Dreamed a Dream | Les Misérables | Crying
I’m Not that Girl | Wicked | Crying
In Christ Alone | Natalie Grant | Crying
In the Real Early Morning | Jacob Collier | Crying
Irish Tune from County Derry | Percy Grainger | Crying
King | Lauren Aquilina | Crying
Lacrimosa, Requiem | Mozart | Crying
Lament | J.J. Johnson | Crying
Life and Death | Paul Cardall | Crying
Love More | Daniel Hart | Crying
The Consul, Lullaby | Gian Carlo Menotti | Crying
Lush Life | Zara Larsson | Crying
Largo, Symphony No. 9 in E Minor, Op. 95 | Dvorak | Crying
Nimrod, Enigma Variations | Elgar | Crying
Numb | - | Crying
Maps | Yeah Yeah Yeahs | Crying
No Shade in the Shadow of The Cross | Sufjan Stevens | Crying
October | Eric Whitacre | Crying
Old Black Joe | Stephen Foster | Crying
On My Own | Les Misérables | Crying
Papa Was a | - | Crying
Parting of Friends, Air | Eileen Ivers & Immigrant Soul | Crying
Pretty Funny | Lindsay Mendez | Crying
Radioactive | Imagine Dragons | Crying
Remember Everything | Five Finger Death Punch | Crying
Sad | Maroon 5 | Crying
Sad 2 | Frankie Cosmos | Crying
Say Something | A Great Big World | Crying
Schindler’s List, Theme | John Williams | Crying
Sleep | Eric Whitacre | Crying
So Sick | Ne-Yo | Crying
Someone Like You | Adele | Crying
Sometimes | Goldmund | Crying
Sonata No. 8 in C Minor, Op. 13, Adagio cantabile | Beethoven | Crying
Soprano Saxophone Concerto: III, Metal | John Mackey | Crying
Still Hurting | Jason Robert Brown | Crying
Stop and Stare | OneRepublic | Crying
Strange Fruit | Billie Holiday | Crying
Symphony No. 3, II. Lento e Largo—Tranquillisimo | Henryk Górecki | Crying
Symphony in D Minor, M. 48, II, Allegretto | César Franck | Crying
Symphony No. 2 in E Minor, Op. 27, II, Allegro molto | Rachmaninoff | Crying
Symphony No. 2, Kopernikowski, Mvt I | Henryk Górecki | Crying
Symphony No. 6 in B Minor, Op. 74, I. Adagio—Allegro non troppo | Tchaikovsky | Crying
Symphony No. 6 in B Minor, Op. 74, IV. Finale—Adagio lamentoso | Tchaikovsky | Crying
Taladh Na Beinne Guirme | MacGregor, Brechin, & O Headhra | Crying
Terrible Things | Mayday Parade | Crying
The Breaking of the Fellowship | Howard Shore | Crying
The Carnival of the Animals, Swan | Saint-Saëns | Crying
The Funeral | Band of Horses | Crying
The Longest Time | Billy Joel | Crying
The Nightingale | Tchaikovsky | Crying
The River | Bruce Springsteen | Crying
The Scientist | Coldplay | Crying
The Trenches | Patrick Doyle | Crying
To Build a Home | The Cinematic Orchestra | Crying
Trauermusik, Götterdämmerung | Wagner | Crying
Trouble Found Me | Hop Along | Crying
Vesti la giubba, Pagliacci, Act 1 | Ruggero Leoncavallo | Crying
Wandering Jane | Dario Marianelli | Crying
What I’ve Done | Linkin Park | Crying
When I Was Your Man | Bruno Mars | Crying
When It’s Cold I’d Like to Die | Moby | Crying
Who’s Crying Now | Journey | Crying
Whom Shall I Fear [God of Angel Armies] | Chris Tomlin | Crying
Wipe Your Eyes | Maroon 5 | Crying
11 Blocks | Wrabel | Sad
21 Guns | Green Day | Sad
715-CREEKS | Bon Iver | Sad
A Movement for Rosa | Mark Camphouse | Sad
Adagio for Strings | Barber | Sad
Adagio in G Minor | Albinoni | Sad
Adore You | Miley Cyrus | Sad
After the Love is Gone | Earth, Wind, & Fire | Sad
All Is Well | Austin Basham | Sad
All of Me | John Legend | Sad
Alone | Heart | Sad
American Pie | Don McLean | Sad
An American Elegy | Frank Ticheli | Sad
Après un rêve, Op. 7, No. 1 | Fauré | Sad
Ashes of Eden | Breaking Benjamin | Sad
Baal Shem, II. Nigun | Ernest Bloch | Sad
Ballade, Op. 46 | Barber | Sad
Ben’s My Friend | Sun Kil Moon | Sad
Boulevard of Broken Dreams | Green Day | Sad
Busted and Blue | Gorillaz | Sad
Can’t Be Friends | Trey Songz | Sad
Car Radio | Twenty One Pilots | Sad
Concerto in D Minor after Alessandro Marcello BWV 974, Mvt 2—Adagio | J.S. Bach | Sad
Death with Dignity | Sufjan Stevens | Sad
Demons | Imagine Dragons | Sad
Der Erlkönig D328 | Schubert | Sad
Do Not Cast Me Off in My Old Age, Op. 40, No. 5 | Pavel Chesnokov | Sad
Dust in the Wind | Kansas | Sad
Earth Song | Michael Jackson | Sad
Fantasia in C Minor for Piano, K475, Adagio | Mozart | Sad
Fantasia in D Minor K397 | Mozart | Sad
Fantasia on a Theme by Thomas Tallis | Vaughan Williams | Sad
Fire | Brian Crain | Sad
Fix You | Coldplay | Sad
Good Riddance (Time of Your Life) | Green Day | Sad
Good Thing | Sam Smith | Sad
Hallelujah | Rufus Wainwright | Sad
Happiness | The Fray | Sad
Haunt You Every Day | Weezer | Sad
Hello | Adele | Sad
Hello | - | Sad
Hold Onto Me | Mayday Parade | Sad
Hotel California | Eagles | Sad
How Can You Mend a Broken Heart | Al Green | Sad
I Can’t Tell You Why | Eagles | Sad
I Miss You | blink-182 | Sad
I Think You’re Really Beautiful | Starry Cat | Sad
I Will Not Bow | Breaking Benjamin | Sad
I’m Already There | Lonestar | Sad
If You Never Come To Me | Frank Sinatra | Sad
Imagine | John Lennon | Sad
In Dreams, Lord of the Rings, The Fellowship of the Ring | Howard Shore | Sad
Irreplaceable | Beyoncé | Sad
It’s So Hard To Say Goodbye to Yesterday | Jason Mraz | Sad
Jar of Hearts | Christina Perri | Sad
Landslide | Fleetwood Mac | Sad
Let Her Go | Passenger | Sad
Let’s Hurt Tonight | OneRepublic | Sad
Lone Wolf and Cub | Thundercat | Sad
Lost | Michael Bublé | Sad
Love Story | Taylor Swift | Sad
Mad World (feat. Gary Jules) | Michael Andrews | Sad
Madeline | Peter Cincotti | Sad
Man of Words | Booker Little | Sad
Many Mothers | Junkie XL | Sad
Miserable At Best | Mayday Parade | Sad
Must the Winter Come so Soon? | Barber | Sad
Nessun dorma, Turandot, Act 3 | Puccini | Sad
Nimrod, Enigma Variations | Elgar | Sad
No Air | Jordin Sparks | Sad
Nocturne in E-Flat Major, Op. 9 No. 2 | Chopin | Sad
Nocturne Op. 72, No. 1 in E Minor | Chopin | Sad
Not the Same | Ben Folds | Sad
O Magnum Mysterium | Lauridsen | Sad
October | U2 | Sad
On My Own | Les Misérables | Sad
Piano Concerto No. 1 in C Minor, Op. 35, II, Lento | Shostakovich | Sad
Piano Sonata No. 14 in C-Sharp Minor, Op. 27, No.2, I. Adagio sostenuto | Beethoven | Sad
Pretty Things | Big Thief | Sad
Redemption | Junkie XL | Sad
Requiem, Lacrimosa | Mozart | Sad
River Flows in You | Yiruma, Martin Jacoby | Sad
Running Low | Shawn Mendes | Sad
Sad Song | Christina Perri | Sad
Sam Stone | John Prine | Sad
Say My Name | Destiny’s Child | Sad
See You Again (feat. Charlie Puth) | Wiz Khalifa | Sad
Shadow of the Day | Linkin Park | Sad
Somebody Else | The 1975 | Sad
String Quartet Op. 11, II. Molto adagio | Barber | Sad
Summertime | Vince Staples | Sad
Sunset Soon Forgotten | Iron & Wine | Sad
Symphony No. 6 in B Minor, Op. 74, IV. Finale, Adagio lamentoso | Tchaikovsky | Sad
Symphony No. 7 in A Major, Op. 92, II. Allegretto | Beethoven | Sad
Talking Bird | Death Cab for Cutie | Sad
Tears in Heaven | Eric Clapton | Sad
The Funeral | Band of Horses | Sad
The Lark Ascending | Vaughan Williams | Sad
The One That Got Away | Katy Perry | Sad
The Scientist | Coldplay | Sad
Tiger Mountain Peasant Song | Fleet Foxes | Sad
Time | Hans Zimmer | Sad
Tony Story | Meek Mill | Sad
U | - | Sad
Under the Bridge | Red Hot Chili Peppers | Sad
Veridis Quo | Daft Punk | Sad
Viola Concerto, I. Andante comodo | William Walton | Sad
We Think We Know You | Bo Burnham | Sad
What It’s Like | Everlast | Sad
What’s Going On | Marvin Gaye | Sad
When I Was Your Man | Bruno Mars | Sad
When She Loved Me | Sarah McLachlan | Sad
Yesterday | The Beatles | Sad
Your Hand in Mine | Explosions in the Sky | Sad
Youth | Daughter | Sad

Table B.1. Crying and sad songs selected by untrained participants

326

Song | Artist | Emotion Category
1-800-273-8255 | Logic | Grief/crying
Casimir Pulaski Day | Sufjan Stevens | Grief/crying
Come What May | Moulin Rouge | Grief/crying
Funeral | Band of Horses | Grief/crying
Hero | NA | Grief/crying
Home | NA | Grief/crying
How to Disappear Completely | Radiohead | Grief/crying
Maps | Yeah Yeah Yeahs | Grief/crying
Numb | Linkin Park | Grief/crying
Radioactive | Imagine Dragons | Grief/crying
Sad 2 | Frankie Cosmos | Grief/crying
Someone Like You | Adele | Grief/crying
Stop and Stare | OneRepublic | Grief/crying
The Breaking of the Fellowship | Howard Shore | Grief/crying
The Longest Time | Billy Joel | Grief/crying
The River | Bruce Springsteen | Grief/crying
Trouble Found Me | Hop Along | Grief/crying
What I’ve Done | Linkin Park | Grief/crying
When I Was Your Man | Bruno Mars | Grief/crying
Wipe Your Eyes | Maroon 5 | Grief/crying
11 Blocks | Wrabel | Melancholy/sadness
21 Guns | Green Day | Melancholy/sadness
Adagio for Strings (3) | Samuel Barber | Melancholy/sadness
Adagio in G Minor | Albinoni | Melancholy/sadness
Adore You | Miley Cyrus | Melancholy/sadness
All of Me | John Legend | Melancholy/sadness
Alone | Heart | Melancholy/sadness
American Pie | Don McLean | Melancholy/sadness
Ballade for piano | Samuel Barber | Melancholy/sadness
Boulevard of Broken Dreams (2) | Green Day | Melancholy/sadness
Busted and Blue | Gorillaz | Melancholy/sadness
Creeks | Bon Iver | Melancholy/sadness
Demons | Imagine Dragons | Melancholy/sadness
Earth Song | Michael Jackson | Melancholy/sadness
Fix You | Coldplay | Melancholy/sadness
Flute Sonata, Mvt 2 | Hindemith | Melancholy/sadness
Hallelujah | Rufus Wainwright | Melancholy/sadness
Hello | Adele | Melancholy/sadness
Hotel California | Eagles | Melancholy/sadness
I Miss You | blink-182 | Melancholy/sadness
I Think You’re Really Beautiful | Starry Cat | Melancholy/sadness
I Will Not Bow | Breaking Benjamin | Melancholy/sadness
In Dreams | Howard Shore | Melancholy/sadness
Irreplaceable | Beyoncé | Melancholy/sadness
It’s So Hard to Say Goodbye to Yesterday | Jason Mraz | Melancholy/sadness
Jar of Hearts | Christina Perri | Melancholy/sadness
Let’s Hurt Tonight | OneRepublic | Melancholy/sadness
Love Story | Taylor Swift | Melancholy/sadness
Man of Words | Booker Little | Melancholy/sadness
No Air | Jordin Sparks | Melancholy/sadness
On My Own | Les Misérables | Melancholy/sadness
Pretty Things | Big Thief | Melancholy/sadness
Say My Name | Destiny’s Child | Melancholy/sadness
See You Again (3) | Wiz Khalifa | Melancholy/sadness
Somebody Else | The 1975 | Melancholy/sadness
Symphony No. 6, Mvt 4 | Tchaikovsky | Melancholy/sadness
Talking Bird | Death Cab for Cutie | Melancholy/sadness
The One That Got Away | Katy Perry | Melancholy/sadness
Time of Your Life | Green Day | Melancholy/sadness
Under the Bridge | Red Hot Chili Peppers | Melancholy/sadness
Youth | Daughter | Melancholy/sadness
24K Magic | Bruno Mars | Joyous/jubilant
All About That Bass | Meghan Trainor | Joyous/jubilant
Black Spiderman | Logic | Joyous/jubilant
Blank Space | Taylor Swift | Joyous/jubilant
Boom Boom Pow | Black Eyed Peas | Joyous/jubilant
Buckeye Battle Cry | NA | Joyous/jubilant
Call Me Maybe | Carly Rae Jepsen | Joyous/jubilant
Close Your Eyes and Count to Ten | Grouplove | Joyous/jubilant
Cousins | Vampire Weekend | Joyous/jubilant
 | Beyoncé | Joyous/jubilant
Don’t Bring Me Down | ELO | Joyous/jubilant
Dynamite (4) | Taio Cruz | Joyous/jubilant
Empty Room | Arcade Fire | Joyous/jubilant
Every Morning | Sugar Ray | Joyous/jubilant
Festive Overture | Shostakovich | Joyous/jubilant
Happy (7) | Pharrell Williams | Joyous/jubilant
Harder Better Faster | Daft Punk | Joyous/jubilant
Harlem Shake | Baauer | Joyous/jubilant
Hooked on a Feeling | Blue Swede | Joyous/jubilant
I Don’t Wanna Dance | Hey Monday | Joyous/jubilant
I Wanna Love You | Akon | Joyous/jubilant
In Da Club (2) | 50 Cent | Joyous/jubilant
It Don’t Mean a Thing | | Joyous/jubilant
Lean Back | Terror Squad | Joyous/jubilant
Livin’ on a Prayer | Bon Jovi | Joyous/jubilant
Love Someone | Jason Mraz | Joyous/jubilant
Love Story | Taylor Swift | Joyous/jubilant
Mr. Brightside | The Killers | Joyous/jubilant
My Country | tUnE-yArDs | Joyous/jubilant
Nancy Mulligan | Ed Sheeran | Joyous/jubilant
Never Gonna Give You Up | Rick Astley | Joyous/jubilant
Paint It Black | The Rolling Stones | Joyous/jubilant
Rhapsody in Blue | Gershwin | Joyous/jubilant
Side to Side | Ariana Grande | Joyous/jubilant
Sir Duke (2) | Stevie Wonder | Joyous/jubilant
September | Earth, Wind & Fire | Joyous/jubilant
Sorry Not Sorry | Demi Lovato | Joyous/jubilant
Shake It Off | Taylor Swift | Joyous/jubilant
Spiderhead | Cage the Elephant | Joyous/jubilant
Starships | Nicki Minaj | Joyous/jubilant
Sweet Child o’ Mine | Guns N’ Roses | Joyous/jubilant
Take On Me | a-ha | Joyous/jubilant
Thunderstruck | AC/DC | Joyous/jubilant
Tik Tok | Kesha | Joyous/jubilant
Treasure | Bruno Mars | Joyous/jubilant
Uptown Funk | Bruno Mars | Joyous/jubilant
Victorious | Panic! at the Disco | Joyous/jubilant
You Make My Dreams Come True | Hall and Oates | Joyous/jubilant
All of Me | John Legend | Peaceful/contented
Appalachian Spring | Copland | Peaceful/contented
Bubble Toes | Jack Johnson | Peaceful/contented
California | Silent Pilot | Peaceful/contented
Carry On | Young Rising Sons | Peaceful/contented
Carry On My Wayward Son | Kansas | Peaceful/contented
Chateau Lobby #4 | Father John Misty | Peaceful/contented
Clair de Lune | Debussy | Peaceful/contented
Don’t Stop | | Peaceful/contented
Drops in the River | Fleet Foxes | Peaceful/contented
Feel Good Inc. | Gorillaz | Peaceful/contented
Float On | Modest Mouse | Peaceful/contented
Glassworks | Philip Glass | Peaceful/contented
Happy (3) | Pharrell Williams | Peaceful/contented
I Want You | Third Eye Blind | Peaceful/contented
I Went to the Store One Day | Father John Misty | Peaceful/contented
In a Sentimental Mood | Coltrane | Peaceful/contented
Irreplaceable | Beyoncé | Peaceful/contented
Little Wanderer | Death Cab for Cutie | Peaceful/contented
Love Story (2) | Taylor Swift | Peaceful/contented
Morning Mood | Grieg | Peaceful/contented
Motorcycle Drive By | Third Eye Blind | Peaceful/contented
Outside With the Cuties | Frankie Cosmos | Peaceful/contented
Piano Man | Billy Joel | Peaceful/contented
Re: Stacks | Bon Iver | Peaceful/contented
Santa Monica Dream | Angus and Julia Stone | Peaceful/contented
See You Again | Akon | Peaceful/contented
Smooth | Santana | Peaceful/contented
Starships | Nicki Minaj | Peaceful/contented
Straight, No Chaser | Miles Davis Quintet | Peaceful/contented
Symphony No. 2, Mvt IV | Mahler | Peaceful/contented
Symphony No. 3, Posthorn Solo | Mahler | Peaceful/contented
Symphony No. 5, Mvt IV (end) | Mahler | Peaceful/contented
The Best is Yet to Come | Sheppard | Peaceful/contented
The 4 Seasons, Spring | Vivaldi | Peaceful/contented
The Power of Love | Celine Dion | Peaceful/contented
This Town | Niall Horan | Peaceful/contented
Truly Madly Deeply | Savage Garden | Peaceful/contented
Vision of Love | Mariah Carey | Peaceful/contented
Viva La Vida Loca | Ricky Martin | Peaceful/contented
We Belong Together | Mariah Carey | Peaceful/contented
We Didn’t Start the Fire | Billy Joel | Peaceful/contented
Your Body is a Wonderland | John Mayer | Peaceful/contented

Table B.2. Trained student-selected works for four emotion categories

Composer | Title | Passage | Duration | Rater 1 | Rater 2 | Rater 3
Barber | Adagio for Strings | 0:00-1:10 | Long | M | M | G
Barber | Adagio for Strings | 4:04-4:57 | Long | M | M | M
Bach | Come, Sweet Death | 2:30-3:34 | Long | M | A | G
Berz | Elegy for a Young American | 2:18-2:50 | Long | M | M | M
Franck | Symphony in D Minor, Mvt 2 | 0:38-1:11 | Long | M | M | M
Tchaikovsky | Symphony 6, Mvt 4 | 0:15-1:00 | Long | M | A | G
Tchaikovsky | Symphony 6, Mvt 4 | 7:29-8:35 | Long | M | G | G
Fauré | Après un rêve | 0:00-0:52 | Long | M | M | M
Mozart | Fantasia in D Minor | 0:50-1:29 | Long | M | M | M
Junkie XL | Many Mothers | 0:00-0:37 | Long | M | M | G
Chopin | Nocturne in E Minor | 2:20-2:54 | Long | M | N | M
Junkie XL | Redemption | 0:00-0:59 | Long | M | M | M
Beethoven | Moonlight Sonata, Mvt 1 | 0:05-1:13 | Long | M | M | M
Beethoven | Symphony 7, Mvt 2 | 0:44-1:24 | Long | M | M | G
Beethoven | Symphony 7, Mvt 2 | 2:04-2:50 | Long | M | A | G
Albinoni | Adagio in G Minor | 0:00-1:00 | Long | M | M | M
Bach | St. Matthew Passion, Mvt 1 | 0:03-1:01 | Long | M | M | G
Desplat | Lily’s Theme | 0:02-0:57 | Long | M | M | G
Desplat | Lily’s Theme | 1:02-2:03 | Long | M | M | G
Beethoven | Symphony 3, Mvt 2 | 0:00-1:00 | Long | M | M | M
Górecki | Symphony 3, Mvt 2 | 1:37-2:33 | Long | M | M | G
Barber | Adagio for Strings | 0:05-0:20 | Short | M | M | G
Barber | Adagio for Strings | 4:06-4:22 | Short | M | M | M
Bach | Come, Sweet Death | 1:22-1:40 | Short | M | M | M
Bach | Come, Sweet Death | 2:30-2:40 | Short | M | M | G
Berz | Elegy for a Young American | 2:18-2:33 | Short | M | M | M
Berz | Elegy for a Young American | 1:30-1:48 | Short | M | M | M
Dvorak | Symphony 9, Mvt 2 | 7:07-7:30 | Short | M | M | M
Franck | Symphony in D Minor, Mvt 2 | 0:37-0:55 | Short | M | M | M
Doyle | The Trenches | 0:00-0:17 | Short | M | M | M
Tchaikovsky | Symphony 6, Mvt 4 | 0:15-0:28 | Short | M | M | M
Tchaikovsky | Symphony 6, Mvt 4 | 7:32-7:50 | Short | M | G | M
Fauré | Après un rêve | 0:00-0:13 | Short | M | M | M
Mozart | Fantasia in D Minor | 0:50-1:07 | Short | M | M | M
Junkie XL | Many Mothers | 0:00-0:17 | Short | M | M | M
Chopin | Nocturne in E Minor | 2:45-2:54 | Short | M | N | M
Junkie XL | Redemption | 0:01-0:19 | Short | M | M | M
Beethoven | Moonlight Sonata, Mvt 1 | 0:28-0:43 | Short | M | M | M
Beethoven | Symphony 7, Mvt 2 | 0:44-0:57 | Short | M | M | G
Beethoven | Symphony 7, Mvt 2 | 2:18-2:31 | Short | M | N | G
Albinoni | Adagio in G Minor | 0:24-0:36 | Short | M | M | M
Bach | St. Matthew Passion, Mvt 1 | 0:02-0:16 | Short | M | M | M
Desplat | Lily’s Theme | 0:32-0:46 | Short | M | M | G
Desplat | Lily’s Theme | 1:39-1:57 | Short | M | M | G
Beethoven | Symphony 3, Mvt 2 | 0:31-0:46 | Short | M | M | G
Górecki | Symphony 3, Mvt 2 | 2:01-2:17 | Short | M | M | M
Doyle | Crewe and the Soldier | 0:44-1:22 | Long | G | G | G
Williams | Schindler’s List, Track 1 | 2:37-3:07 | Long | G | G | G
Marianelli | Jane Eyre, Track 1 | 1:28-2:31 | Long | G | G | G
Barber | Adagio for Strings | 5:51-6:59 | Long | G | G | G
Arnold and Price | Sherlock Series 2, Track 18 | 0:42-1:32 | Long | G | G | G
Tchaikovsky | Symphony 6, Mvt 4 | 5:54-6:46 | Long | G | G | G
Doyle | The Trenches | 0:30-0:58 | Long | G | M | G
Barber | String Quartet Op. 11, Mvt 2 | 4:16-5:02 | Long | G | G | G
Doyle | Crewe and the Soldier | 1:07-1:21 | Short | G | N | G
Williams | Schindler’s List, Track 1 | 2:37-2:47 | Short | G | G | G
Marianelli | Jane Eyre, Track 1 | 2:08-2:23 | Short | G | G | G
Barber | Adagio for Strings | 6:20-6:35 | Short | G | G | G
Arnold and Price | Sherlock Series 2, Track 18 | 1:11-1:26 | Short | G | G | G
Tchaikovsky | Symphony 6, Mvt 4 | 5:54-6:11 | Short | G | G | G
Doyle | The Trenches | 0:39-0:51 | Short | G | A | G
Mozart | Symphony 6, Mvt 4 | 0:01-0:15 | Short | G | M | G

Table B.3. Melancholic and grieving passages rated by three experts. For the ratings, M means Melancholy, G means Grief, A means Ambiguous, and N means Neither.

Appendix C: Additional Information about Chapter 6

Chapter 6 examined how listeners trained in aural skills described melancholic and grieving musical passages. Each listener was asked to classify these musical passages, whose emotional labels were unknown to them, with regard to 18 structural features identified by Huron (2015; in preparation). In addition to rating these 18 features, listeners were asked to “identify any additional noticeable musical features that you think contributes to the emotional character of the music (such as rhythm, harmony, instrumentation, orchestration, structure, etc.).” Table C.1 lists the additional features reported by the participants. Each of these features should be investigated in future research.

Comment by Participant
(Sorry, I accidentally pressed one of the answers for the optional piano pedal question; there is no piano in this piece). Harmony, instrumentation, register, melodic line, additional glissandi in solo violin, orchestration (woodwinds)
Articulation stood out to me a lot in this one as a part of the emotional character.
ascending melody ends at minor chord
Bass drum underlying the music brings a dark timbre to an otherwise harsh melody.
brass
cello
Cellos instead of violins! A MUCH more well-rounded and warm timbre, very inviting... even in the moments where you can pick apart the minor modality, the instrument sounds so inviting that it almost feels major regardless! (And, of course, there’s a lot of major work at play in this segment.) Really a lot of the same notes here as in the first listening example, regarding orchestration, rhythm, harmony, melody, structure... amazing, though, how a change in instrumentation and some harmonic progression can alter the entire feeling of the piece from desperate wailing to a swelling warmth (or dread?), totally opposite emotions.
contrapuntal motion between strings and brass
crescendo/decrescendo lean intense vibrato
d minor piano solo, some dotted 8th note, melody played by right hand chord by left hand
Dense orchestration - highest notes achieved in violins and felt in trumpets, lowest felt largely in the sustained timpani roll. Not forgetting all of the instruments in between (hello, trombones!!) these extreme instruments paint some serious drama in the harmony that they build - no clear resolution or chord that
Dense, high strings; but now, a bass drum hearkening each downbeat, adding an intensity and some level of forward motion. Lots of sustained tones over changing tones, creating a crazy dissonance and argument in the highest of strings! Very straightforward rhythm – doesn’t get crazier than a few quarter notes... some modality change near the end, but it sure isn’t a happy one. The piece gives off a looming anxiety or disturbance.
Descending melody
Dissonance makes you uncomfortable.
double bass, strings vs. woodwinds
Each motif was contained within a pitch range of instruments; seems as if there are multiple conflicting characters or emotions. Minor mode contributes to the tumultuous feel. The trembling of pitch in lower registers makes the listener unstable
flute & string play descending melody D to F# and Eb to F# while some string plays fizz
harmony, instrumentation, register/timbre (playing on D string), dynamic, rhythm and range of accompaniment of piano
harmony, instrumentation, tempo slows down, harmony
harmony, instrumentation/orchestration, dynamic, structure
harmony, instrumentation/orchestration, register, closeness of voices, rhythm
harmony, instrumentation/orchestration, rhythm, melodic structure
harmony, tempo, melodic line, register, repeating rhythm of left hand
heartbeat drumming
High, high violin like in the first audio selection - not as high, but certainly extreme. Massive difference in how it’s presented, however: here, we have a solo violin taking on the high notes while an orchestra of violins takes on a much more moderate register. Orchestra is, of course, backed up by cello and double bass cranking out the low notes. For that matter, the orchestra has no dramatic melodic figures, or hardly any figures whatsoever... mostly a dense layer of harmony smeared through a legato texture like peanut butter on bread. Clearly, it’s the solo violin’s time to shine. A lightly-taken turn into major, openly accepted by the orchestra, paints the solo violin as a character telling a story of many emotions and many turns. Perhaps the orchestra underneath paints a deeper gut feeling while the violin tells the explicit details of the saga?
I needed to take a break after hearing this excerpt so many times. The intense and dissonance really has a high frequency for the perception of the audience. I was feeling each move from the drone note and really wanting it to rest, but it never did really resolve, which gave me some anxious and unsettling feeling. I thought the addition of the bass drum was interesting, it kept me grounded as something to expect within the texture of the piece. Rhythmically it was not very interesting, but the moves to the semi-tone above or below the drone note.
I should note, at this point, that I haven’t been marking staccato for many of these excerpts, save for where I hear the errant harp or drum where a staccato marking is almost necessary. I say this here, where I note what could be a blend of legato and marcato figures (erring more legato, but quite defined nonetheless), a theme that has been common to my ears over the past several examples. Multiple melodic figures being sounded in violin, cello, viola, some other violins, all without any regard to each other - an unabashed polyphony. A single tremolo note in the lower voices (cello? bass? shrug?) destabilizes the segment just a smidge. I’m not sure how to interpret the emotional character! There seems to be some deeply attached emotion in the melodic line, but the way it is repeated in a couple other solo instruments makes me question what it could mean.
In context of the piece, this passage has a somewhat defined harmonic range, but extremely high compared to other sections of the piece - no lower strings at all. Intensely loud dynamics and a distinctly minor resolution following a mostly stepwise climb up the melodic line create a terrifyingly emotional climax. If not wholly homophonic, the density of high strings and the legato (that is, definitely not staccato) texture suggest the illusion of homophony. Altogether, a desperate wailing from the main character.
instrumentation - timbre contrast
instrumentation (strings), the lower strings being the ones with the moving parts, evenness and steadiness of the rhythm
instrumentation (use of only strings), solo violin, harmony, vibrato
instrumentation, calmness in the rest of the orchestra (orchestration)
Instrumentation/orchestration, harmony, timbre, register, melodic line, dynamic level
juxtaposition of strings with the percussion
key modulation from minor to Major. Piano play chord accompaniment and string plays melody
Low orchestration for strings, no viola or violin, create a darker sound almost instantly. Notes are very sustained all around, even in the melody, which itself is all over the place... suggests, to me, not a character speaking a story or a feeling, but rather an atmosphere of some sort. To me featuring the most sustained tones out of any of the examples, I feel the vibe of a resilient and ocean in the night.
melodic trajectory (rising) and arrival on climactic high pitch
minor melody + harmony, timbre (instrumentation)
Mostly listening for that vibrato again, which feels pretty held back. Because I know this piece, I know they are saving the huge emotional vibrato for later. With orchestration, the variation of when voices move, so the oblique motion between the different parts, adds to the emotional character.
oh, duh, vibrato (or lack of it) --this observation applies to all passages actually
orchestration (lots of instruments), clashing harmony
Orchestration/instrumentation, harmony, register, phrase length, large melodic leaps
pedal sustains many notes into the next chord
phrasing/inflection of the melody (performer)
piercing violin melody; key modulates from minor to major
Presence of vibrato makes the piece more somber.
Range of about two octaves, maybe three, mostly on the higher end. Harmonic bass figure feels like it *should* be lower, but isn’t - suggests an air of levity, even if the situation in the piece is not necessarily a light subject. Simple melodic figures, mostly stepwise, leaping only to the downbeat of m. 5, striking a bit of interest in the consequent portion of the musical idea. Clearly minor, simple harmony, simple melody... piano instrumentation and slow tempo contribute to a mysterious, perhaps pensive feeling.
register of the violin
repetition, orchestration
Resolution of the cadences
rubato into cadences
Rubato is particularly effective here.
Rubato, portamento, and vibrato in the solo line - the classical string player’s expressive solo toolkit is being used full blast here.
sequence
Sequence; BTW good job with the 2-dimension Likert scale!
solo plays arpeggio of d minor to reach high A note; low string instrument plays tremolo
sounds like all strings
sparse orchestration
string dominant g minor, legato, Bb to D to A to D melody heard switching between instruments a lot
texture - all parts moving together; harmonic rhythm
texture of the harmony (long chords)
The balanced phrases and simple harmonic structure gives a concrete emotional rise and fall.
The cellist is playing in the upper register of a low string, which is making the timbre darker and the vibrato wider and faster adding to the emotional character. Also the rubato in the passage adds to the emotional character.
The contrast in pitch range of the solo violin to the rest of the instruments filling out the harmonic structure contribute a sense of . The arpeggios in the violin establish a minor mode, so the turn to major in the oboe in the last few seconds comes as a surprise.
The difference in range between the bass drum part and the upper strings is pretty marked and a big part of the overall emotional impression.
The element of this piece that sticks out the most and drives the energy is resolution. The piece just keeps building and building until there is a brief moment of rest, but yet still moves on in some way to dissonance. For the instrumentation, this section is much more musically rich with the melodic material in the cello voice. The timbre of the instrument and the placement of the melody within its range plays well into that. The energy is descending in this section of the work, which also plays into having the main melodic material in the cello for the color that it provides in the musical texture.
The low contrasting percussion gives an impending sense of doom. The trills in the higher registered instruments adds suspense. The melody is kept simple so these features stand out.
The solo violin voice is very expressive with the melodic line. The harmonic support underneath it in a choral style accompaniment in blocked chords also gives this piece a religious feel to it as being very reverential and reserved. I enjoy the sparse instrumentation that only features one voice in the melody, which I think represents the message of the piece. I couldn’t decide whether the mode was minor, but the melody is where I find that difficult. I think that it is in a major key, but all other musical signs point toward minor. In a way, the solo violin acts as a light in a very dark space that almost gives hope for something positive.
The staccato bass line in addition to the sequence adds a driving emotional force. The ascending countermelody adds a sense of hope to a gloomier melody. The step wise harmonic motion makes the mixture of tonality less jarring. The cello has a harsher tone than the rest of the orchestra, giving priority to its repetitive ascending gesture.
The swells of each note in dynamic and the growing loudness through the last chord add a sense of urgency and interest to the harmonic structure. Instruments with a lower pitch range are more prevalent and drone-like, makes the piece seem a little dogmatic.
the tonality and the spooky bass drum sound
The unresolved chordal structure of this piece really drives the energy toward this moment musically. The support from the lower string voices drives tension as the melodic line ascends to higher and higher statements of the same material. The use of strings also ties into the tension because the violins in that register are piercing through the texture, but are getting this wonderful warm tone in the viola voice to create almost a juxtaposition.
The vibrato is going full blast here, that’s a big part of the overall character. Also every single string instrument is playing in the top of their range, so the very short string length adds a lot to the timbre.
The vibrato was a big one for me, the cellists are really using it to shape the phrase of the excerpt starting with little narrow vibrato and moving to a much wider and somewhat faster one.
There is a slow creeping bass pedal that is driving the energy of this passage. It ascends chromatically and that builds the anticipation.
This selection seems to be emphatically longing for something which can be seen in the melodic lines as they sweep across in the compound meter with the use of chromatic neighboring tones in the melody. The tension is created through the dissonance and through the two voices moving in counterpoint.
This excerpt is much more bombastic with the use of brass and dynamics. It was difficult to pick out which voice had the clear melody within the texture because there were so many musical ideas happening from the families of instruments throughout the ensemble. The rhythm was definitely making a big statement in the brass section, almost like the action was drawing to the end of something like an opera or a symphony with high emotion output that is being expressed in the music. Harmonically, we are driving toward a cadence point, the mode felt major, but very unresolved from a tonic standpoint. I like the marriage of the string and the brass voices, they were definitely there for different colors. The strings were there to convey the emotion with the descending melodic sequential pattern and the brass was there to emphasize the statement being made with the bright ascending line or sustained notes to create tension.
This selection is very pensive and reverent. The pensive nature of this can be seen with the choice of accompaniment figure. The use of the piano with just repeated blocked chords seems unsure and the solo cello voice is looking for the answer. The tone seems reverent with the phrasing and the melodic line sustaining a tone and then flourishing slightly at the end to bring in the cadence. With only being two voices, this also ties into the mood. There’s one clear statement being made by the solo voice and there’s nothing really going on behind it to take away from that with counterpoint.
This selection made me feel as though something was just lost, almost like a lament. The use of the minor arpeggio at the beginning of the phrase drives movement forward and then rests in the dissonance that happens directly afterward. In a way, the movement of the arpeggio motion sweeps the audience into the tension in the dissonance and keeps you there and even emphasizes it in different voices. Strings are great for portraying emotion because they are the closest instrument to how vocal folds vibrate, so we are more receptive to the messages that they are trying to present.
timbre (orchestration and instrumentation)
timbre, minor melody + harmony
too much key modulation btw minor and major
Trills in the instruments that fill out the harmonic structure contribute a sense of excitement. The high register gives a sense of transcendence.
use of cello
violin
waltz structure, emphasis on certain notes

Table C.1. Additional features that could contribute to the emotional character of melancholic and grieving passages, as identified by participants

Appendix D: Additional Information about Chapter 7

Two studies were conducted in order to test whether listeners can perceive differences in melancholic and grieving music, as described in Chapter 7. In the second study, listeners were asked to select which emotion(s) were represented in passages from a list of fourteen emotions. A principal components analysis (PCA) was conducted on the resulting distribution of emotions participants perceived in the music. This Appendix describes the PCA process in more detail than in Chapter 7.

First, a scree plot was created from the eigenvalues of the emotion data. The scree plot suggested a 2-component solution. Examination of the component loadings (Table D.1) and the distribution of terms across these components (Figure D.1) reveals that Component 1 may correspond to valence (explaining 16.1% of the variance), while Component 2 may correspond to arousal (explaining an additional 13.4% of the variance). The distribution of terms across the two components looks similar to the circumplex model of Russell (1980).

The stimuli were also examined with regard to placement around the two components, reproduced below in Figure D.2. The four stimulus factors roughly corresponded to their hypothesized locations in the arousal/valence space (where grief exhibits negative valence and high arousal, melancholy exhibits negative valence and low arousal, happiness exhibits positive valence and high arousal, and tenderness exhibits positive valence and low arousal). The stimuli were therefore considered to represent their intended emotions accurately.

The two components resulted in a total explained variance of only 29.5%, suggesting that there are likely many other influences on which emotions listeners perceive in melancholy and grieving musical samples.
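To make the procedure concrete, the following is a minimal Python sketch of this kind of scree-plot-and-loadings workflow. It is not the original analysis code: the ratings matrix is a random placeholder, and all variable names are hypothetical.

# Illustrative sketch only (hypothetical data and names), not the
# dissertation's analysis code. "ratings" stands in for the real
# observations-by-emotion matrix of perceived-emotion responses.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
ratings = rng.random((160, 14))  # placeholder: responses x 14 emotion terms

# Standardize the variables, then extract all components so the full
# eigenvalue spectrum can be inspected.
pca = PCA()
scores = pca.fit_transform(StandardScaler().fit_transform(ratings))

# Scree plot: eigenvalues by component number; the "elbow" suggests how
# many components to retain (two, in the analysis reported above).
eigenvalues = pca.explained_variance_
plt.plot(range(1, len(eigenvalues) + 1), eigenvalues, marker="o")
plt.xlabel("Component")
plt.ylabel("Eigenvalue")
plt.show()

# Percent variance explained by the retained components, plus term
# loadings of the kind reported in Table D.1 (a Figure D.1-style plot
# graphs the first two columns of "loadings" against each other).
print(pca.explained_variance_ratio_[:2] * 100)
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)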

Emotion | Component 1 | Component 2
Angry | -0.015 | 0.528
Bored | -0.149 | -0.070
Disgusted | -0.065 | 0.498
Excited | 0.768 | -0.047
Fearful | -0.065 | 0.578
Grieved | -0.340 | 0.495
Happy | 0.498 | -0.406
Invigorated | 0.727 | 0.029
Melancholy | -0.471 | 0.131
Relaxed | -0.297 | -0.527
Surprised | -0.210 | 0.338
Tender | -0.387 | -0.365
Neutral | -0.065 | -0.005

Table D.1. Component loadings for the two components of perceived emotion in Study 2



Figure D.1. First two components of the PCA of perceived emotion from Study 2

Figure D.2. Analysis of the 16 stimuli across the PCA space in Study 2

Appendix E: Additional Information about Chapter 8

This Appendix provides further information about the two studies described in Chapter 8. The first study provided listeners with a list of emotions and asked them to select which emotion(s) they felt while listening to melancholic and grieving music. The second study asked listeners to freely describe their experiences while listening to these same “sad” musics. A principal components analysis (PCA) was conducted on the data from Study 1, which will be described below. The Appendix also elaborates on the qualitative content analysis used to examine the data from Study 2.

A PCA was conducted on the data from Study 1, in a similar manner to the PCA described in Appendix D. The scree plot from the eigenvalues of these data suggested a 4-component solution. A visual analysis of the data, as well as examination of the component loadings (Table E.1), suggests that Component 1 may correspond to valence (explaining 12.3% of the variance), while Component 2 may correspond to arousal (explaining an additional 11.1% of the variance). It appears that Component 3 was influenced by the terms neutral and bored (explaining 7.9% of the variance) and Component 4 was influenced by the terms transcendence and wonder (explaining 5.7% of the variance). In total, only 37.0% of the variance is explained by Components 1-4. As described in Appendix D, it appears that there are many other influences on induced emotions from music listening in addition to the factors examined in Chapter 8.

The stimuli were also examined with regard to placement around the four components, reproduced below in Figure E.1. As can be seen, the four stimulus factors roughly correspond to their hypothesized locations along the arousal/valence components, but melancholy and grief are interleaved, suggesting that arousal and valence are unable to fully account for the differences between these two emotions.
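A Figure E.1-style placement can be computed by averaging each stimulus's component scores, sketched below under the same hypothetical setup as the PCA example in Appendix D (placeholder data and names, not the original analysis).

# Hypothetical sketch: average each stimulus's component scores to see
# where the stimuli land in the retained component space. "scores" is
# an observations-by-components matrix from a fitted PCA; the labels
# below are placeholders for the 16 stimuli.
import numpy as np

rng = np.random.default_rng(1)
scores = rng.normal(size=(160, 4))           # placeholder PCA scores
stimulus_ids = np.repeat(np.arange(16), 10)  # stimulus behind each row

stimulus_means = np.array(
    [scores[stimulus_ids == s].mean(axis=0) for s in range(16)]
)
print(stimulus_means[:, :2])  # coordinates on Components 1 and 2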

Emotion | Component 1 | Component 2 | Component 3 | Component 4
Angry | -0.062 | 0.611 | 0.011 | -0.030
Bored | -0.097 | -0.271 | -0.298 | -0.059
Compassionate | 0.518 | -0.114 | 0.140 | 0.004
Disgusted | -0.073 | 0.411 | -0.003 | -0.054
Excited | -0.128 | -0.091 | 0.715 | 0.114
Fearful | -0.115 | 0.573 | -0.091 | 0.160
Grieved | 0.048 | 0.674 | -0.097 | -0.145
Happy | 0.111 | -0.215 | 0.726 | -0.112
Invigorated | -0.194 | -0.047 | 0.585 | 0.294
Joyful | 0.012 | -0.182 | 0.696 | -0.093
Melancholy | 0.121 | 0.428 | -0.181 | -0.226
Nostalgic | 0.374 | -0.021 | -0.018 | -0.120
Peaceful | 0.599 | -0.229 | -0.038 | 0.008
Power | -0.166 | -0.036 | 0.390 | 0.433
Relaxed | 0.508 | -0.268 | -0.075 | 0.033
Softhearted | 0.691 | -0.125 | -0.039 | 0.055
Surprised | -0.053 | 0.222 | -0.006 | 0.205
Sympathetic | 0.473 | 0.265 | 0.039 | -0.132
Tender | 0.695 | 0.017 | -0.077 | 0.073
Transcendent | 0.073 | -0.017 | -0.033 | 0.719
Tension | -0.123 | 0.577 | -0.163 | 0.259
Wonder | 0.132 | -0.053 | 0.119 | 0.651
Neutral | -0.146 | -0.209 | -0.234 | -0.049

Table E.1. Component loadings for the four components of induced emotion in Study 1



Figure E.1. Analysis of the 16 stimuli across the first two PCs for induced emotions in Study 1

As discussed above, a content analysis was conducted on the listener responses from Study 2. This process required two assessors to categorize all 793 responses into an unspecified number of categories (columns 1 and 2, Table E.2). Then, the assessors met, compared categories, and reconciled their two lists into a final list of categories (column 3, Table E.2). After the finalized list of categories was completed, the assessors worked together to re-categorize the 793 responses into these final categories (Table E.3).
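The counts and percentages reported in Table E.3 follow mechanically from the coded responses. The sketch below shows this tally in Python under hypothetical data; the five-item list stands in for the 793 (condition, category) codings.

# Hypothetical sketch of the tallying behind Table E.3: count how often
# each reconciled category was applied under each stimulus condition,
# then convert the counts to percentages of that condition's total.
from collections import Counter

coded = [  # placeholder for the 793 coded responses
    ("melancholy", "Reflective/Nostalgic"),
    ("melancholy", "Sad/Melancholy/Depressed"),
    ("grief", "Tension/Intensity"),
    ("grief", "Crying/Distraught/Turmoil"),
    ("grief", "Sad/Melancholy/Depressed"),
]

for condition in ("melancholy", "grief"):
    counts = Counter(cat for cond, cat in coded if cond == condition)
    total = sum(counts.values())
    for category, n in counts.most_common():
        print(f"{condition:<10} {category:<28} {n:>3} ({100 * n / total:.2f})")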

Assessor 1 Categories | Assessor 2 Categories | Reconciled Categories
Alone/Lonely/Resignation (23) | Alone (19) | Alone (15)
Annoyed/Angry/Moody (10); Evil (3) | Anger (6) | Anger (13)
Anticipation/Uncertainty (43) | Uneasy/Unnerving/Creepy/Danger (43) | Anticipation/Uneasy (48)
Anxiety/Stress (23) | Anxiety/Fear (24) | Anxiety/Stress (25)
Calm/Peaceful/Gentle (46) | Relaxed/Calm (46) | Relaxed/Calm (38)
Cinematic/Dramatic/Epic (35) | Epic/Climax (8); The Tragic Reveal (8) | Epic/Dramatic/Cinematic (35)
Compassion/Love/Sympathy (12) | Love/Sympathy (16) | Love/Sympathy (13)
Contemplative/Nostalgic (43); Curious (4) | Reflective (34) | Reflective/Nostalgic (44)
Danger/Disaster/War (42) | Suffering/War (13) | Suffering/War (19)
Dark/Empty/Eerie (44) | Darkness/Hard to See (15) | Darkness/Dark Colors (42)
Death/Loss/Funereal (53) | Death/Loss (53) | Death/Loss (47)
Grief/Turmoil/ (75) | Grief (27) | Grief (22)
Breakup (7); Upset/Distraught (16) | Crying (13) | Crying/Distraught/Turmoil (43)
/ | Longing (31) | Longing (30)
Mystery/Confused (16); Fantastic/Mysterious (8) | Confused/Conflicted (20) | Confused/Lost (16)
Nature/Scenery (24) | Imagery/Unspecified Emotion (30) | Nature (20)
Colors (43) | | Imagery/Other Colors (38)
Nothing/Mixed Feelings/General “emotions” (36) | Mixed Emotion (18); Emotional/Expressive (7) | Mixed Emotion/Expressive/Emotional (21)
 | No Emotion/Boredom (19) | No Emotion/Boredom (21)
Other (60) | Unclassifiable (33); Miscellaneous/Associational (29); Musical Features (8) | Unclassifiable (47)
Prepared/Determined/Courageous (10) | Brave (10) | Brave/Determined (9)
Rain/Storm/Cloudy (46) | Rain/Dreary Weather (37) | Rain/Dreary Weather (42)
Sad/Melancholy/Depression (92); (4) | Sadness (75) | Sad/Melancholy/Depressed (75)
Tension/Intensity (30) | Tension (40) | Tension/Intensity (36)
Transcendent/Wonder/Hope (24) | Inspiring/Transcendent (13) | Transcendent (17)
 | Moving/Physical (20) | Moving/Physical (16)

Table E.2. Categories for the 793 words, presented by each assessor and the final reconciled categories (and number of terms in each category)

Emotion Category | Melancholy: Number of Terms (percent) | Grief: Number of Terms (percent)
Alone | 14 (3.58) | 1 (0.25)
Anger | 2 (0.51) | 11 (2.74)
Anticipation/Uneasy | 12 (3.07) | 36 (8.96)
Suffering/War | 1 (0.26) | 18 (4.48)
Anxiety/Stress | 4 (1.02) | 21 (5.22)
Relaxed/Calm | 31 (7.93) | 7 (1.74)
Epic/Dramatic/Cinematic | 6 (1.53) | 29 (7.21)
Love/Sympathy | 9 (2.30) | 4 (1.00)
Reflective/Nostalgic | 38 (9.72) | 6 (1.49)
Darkness/Dark Colors | 23 (5.88) | 19 (4.73)
Death/Loss | 18 (4.60) | 29 (7.21)
Grief | 10 (2.56) | 12 (2.98)
Crying/Distraught/Turmoil | 13 (3.32) | 30 (7.46)
Longing | 16 (4.09) | 15 (3.73)
Confused/Lost | 10 (2.56) | 6 (1.49)
Nature | 12 (3.07) | 8 (1.99)
Imagery/Other Colors | 26 (6.65) | 12 (2.99)
Mixed Emotion/Expressive/Emotional | 7 (1.79) | 14 (3.48)
No Emotion/Boredom | 14 (3.58) | 7 (1.74)
Unclassifiable | 29 (7.42) | 18 (4.48)
Brave/Determined | 0 (0) | 9 (2.24)
Rain/Dreary Weather | 30 (7.67) | 12 (2.99)
Sad/Melancholy/Depressed | 51 (13.04) | 24 (5.97)
Tension/Intensity | 3 (0.77) | 33 (8.21)
Transcendent | 4 (1.02) | 13 (3.23)
Moving/Physical | 8 (2.05) | 8 (1.99)

Table E.3. Final categories of terms for the melancholic and grieving stimuli, presented by the number of terms in each category and their percentage