
BEHAVIORAL AND NEUROMODULATORY RESPONSES TO EMOTIONAL VOCALIZATIONS IN MICE

A dissertation submitted to Kent State University in cooperation with Northeast Ohio Medical University in partial fulfillment of the requirements for the degree of Doctor of Philosophy

by

Zahra Ghasemahmad

December 2020

© Copyright

All rights reserved

Dissertation written by

Zahra Ghasemahmad

B.Sc., Tehran University of Medical Sciences, 2005

M.Sc., Tehran University of Medical Sciences, 2009

Ph.D., Kent State University, 2020

Approved by

Dr. Jeffrey J. Wenstrup, Chair, Doctoral Dissertation Committee

Dr. Brett R. Schofield, Dr. Merri J. Rosen, Dr. Rebecca Z. German, Dr. Karin G. Coifman, Members, Doctoral Dissertation Committee

Accepted by

Dr. Ernest Freeman, Director, School of Biomedical Sciences

Dr. Mandy Munro-Stasiuk, Interim Dean, College of Arts and Sciences

Table of Contents

Table of Contents ...... iii

List of Figures and Tables ...... vi

Acknowledgments ...... viii

CHAPTER I INTRODUCTION ...... 1

Vocalization as a tool for emotional expression ...... 2

Vocal communication of emotions in mice ...... 4

The effect of vocalizations on the receiver ...... 7

Amygdala and emotional processing ...... 9

Amygdala role in processing acoustic communication ...... 11

Neuromodulation of auditory and emotional processing ...... 14

Summary of specific aims of the dissertation ...... 17

CHAPTER II MATERIALS AND METHODS ...... 20

Animals...... 20

Acoustic methods ...... 21

Vocalization recording ...... 21

Mating vocalization analysis ...... 22

Vocalization playback ...... 26

Behavioral methods ...... 28


Analysis of mating behaviors during vocal recordings ...... 28

Vocal playback in behavioral experiment ...... 29

Procedures related to microdialysis ...... 32

Surgery ...... 32

Microdialysis ...... 32

Verification of recording location ...... 36

Statistical methods ...... 37

CHAPTER III RESULTS ...... 40

Experiment 1: Identifying acoustic features associated with intense mating interactions ...... 42

Experiment 2: Emotional vocalizations result in behavioral changes in mice ...... 56

Experiment 3: Context- and state-dependent release of neuromodulators in the BLA ...... 62

CHAPTER IV DISCUSSION ...... 73

Internal state modulates mouse vocal behavior ...... 74

Acoustic features of emotional expression in mice during mating interactions ...... 75

Receiver behaviors are influenced by social vocalizations ...... 80

Behavioral changes in male and female mice in response to restraint vocalizations ...... 82

Male and female behavioral responses to mating vocalizations ...... 86

Neuromodulator release in the basolateral amygdala in response to affective vocalizations ..... 88

Context-dependent modulation of sensory processing in BLA by acetylcholine and dopamine .. 90

Acetylcholine and hormonal changes ...... 95


Other neuromodulators affecting valence processing in the BLA ...... 95

Summary and Conclusion ...... 96

General significance and future directions ...... 98

REFERENCES ...... 100


List of Figures and Tables

Figure 1.1 Four categories of vocalizations emitted by mice ...... 5

Figure 1.2 General connections of the amygdala...... 10

Figure 1.3 An auditory-centric view of connections of the amygdala ...... 12

Figure 2.1 Sample of four categories of syllables emitted by mice ...... 24

Figure 2.2 Structure of stimuli in playback experiments...... 27

Table 2.1. Behaviors identified for categorizing lower and higher intensity of mating. .. 29

Figure 2. 3. Experimental design for behavioral experiments...... 30

Table 2.2. Behaviors analyzed during playback of mating and restraint vocalizations. ... 31

Figure 2.4 Experimental design for microdialysis...... 34

Figure 3.1 Vocalizations are modulated by the intensity of mating interactions...... 44

Figure 3.2 Syllable composition and complexity change with mating intensity ...... 47

Figure 3.3 Inter-syllable-interval (ISI) decreases with increased intensity of mating ...... 49

Figure 3.4 Syllable duration increases with increased intensity of mating interaction .... 50

Figure 3.5 Peak-to-peak amplitude of emitted syllables increases with harmonicity ...... 52

Figure 3.6 Minimum frequency of syllables...... 53

Figure 3.7 Spectrogram snapshot of some of the vocal sequences ...... 55

Figure 3.8 Self-grooming behavior in male and female mice...... 57

Figure 3.9 Abrupt attending ...... 58

Figure 3.10 Stretch-attend posture in male and female mice ...... 59

Figure 3.11 Context-dependent modulation of flinching behavior ...... 60


Figure 3.12 Pull-back posture...... 61

Figure 3.13 Microdialysis probe location...... 64

Figure 3.14 Mating and restraint vocalizations differentially change ACH and DA...... 66

Figure 3.15 Serotonergic activity in the BLA of male mice is not context dependent. .... 67

Figure 3.16 Patterns of ACH, but not DA release in the BLA of female mice...... 68

Figure 3.17 Estrous- but not sex-dependent release pattern of ACH and DA...... 71

Figure 4.1 Proposed model for neuromodulation of salient vocalization...... 92


Acknowledgments

This work would not have been possible without the unwavering guidance, encouragement, and support of a vast number of individuals. To all who have been at my side for this long and difficult journey, I am sincerely thankful.

My special gratitude goes to my advisor, Dr. Jeff Wenstrup. From the moment you welcomed me to your lab despite my lack of neuroscience knowledge, and through your constant encouragement to test my ideas, you taught me not to be afraid of taking risks. I owe my professional growth over the last several years to you, who always supported inquisitiveness, creativity, and constructive idea exchange in the lab environment. Your taking pleasure in teaching me idioms and cultural lessons, the chocolate-covered coffee bean jokes at our late afternoon meetings, and your time-to-time travel agency advice all taught me how to balance my passion for work with the joy of life. The friendship you created and the lessons you taught me will last forever.

Alongside my adviser, I am grateful to my parents, who endured the pain of distance only to see me pursue my passion. To my dad, who reminded me that the key to success is to be bold and act, and my mom, who taught me to take control of my happiness and persevere. Without your relentless support, I would not have been able to overcome the challenges of starting anew.


I value the friendship and support of the members of the Acoustic Communication and Emotions (ACE) laboratory: Dr. Sharad Shanbhag for his scientific and technical insights, Aaron Mrvelj and Debin Lei for their help in data collection and analysis, and Mahtab Tehrani for her constructive feedback whenever it was needed. I would also like to thank previous members of the lab for their helpful comments and suggestions on my experiments. I appreciate the hard work of the summer fellows at the ACE lab, Rishi Panditi, Bhavya Sharma, Drishna Perumal, Anthony Zampino, Krish Nair, Austin Poth, and Michael DiMauro, for their help in the laborious process of vocalization and behavioral analysis.

I am very grateful to my committee members, Drs. Brett Schofield, Merri Rosen, and Rebecca German, who were my friends and mentors throughout graduate school and offered me advice when needed, whether drawn from the diversity of their technical expertise or from their life experiences.

I feel very fortunate to be a member of the Hearing Research Group (HRG) at NEOMED, whose members did not hesitate to challenge me during journal club presentations or to put my name on the board for donut duties. You all have a share in my assertiveness and confidence, and in turning me into a better presenter. Having the chance to interact with great scientists through the weekly seminar series provided an amazing networking opportunity for which I am always thankful.

To the members of the Department of Anatomy and Neurobiology, who always welcomed me with open arms, whether for personal or technical advice or simply to offer their friendship: our holiday Grinch exchange was one of my most favorite times of the year, even if no useful gifts ever came out of it. Thank you for your kindness, friendship, and help in this long journey.

The Division of Graduate Studies, by hosting Dissertation Boot Camps (DBCs), provided a distraction-free environment for focusing on dissertation data analyses and writing, with the opportunity to interact with and learn from graduate students from other fields. I will always appreciate the tireless efforts that Kyle Reynolds, Evan Faidley, and Odeh Halaseh put into organizing these boot camps.

I also want to acknowledge the technical support of several individuals: Kristin Yeager from the statistical consulting center at Kent State University, for weekly meetings and advice on statistical analyses; Raimund Specht from Avisoft for his timely advice on vocal analysis; Daniel Gavazzi for developing the code for extracting the best-SNR vocal data; Dale Martin from DataWave for his advice on video tracking and behavioral analysis; Sheila Fleming for her advice on behavioral classifications; Dr. Leah Shriver from the University of Akron, as well as Tom Russell, David Loftis, and John Considder from Shimadzu, for their help with LC/MS equipment troubleshooting and method development; Eva Valbjørn Sørensen and Per Stahl Skov from EP Medical ApS for their guidance on the microdialysis probe; Kim Novak for her great communication and support while I used the LC/MS equipment; and Dr. Mohammad Yunus Ansari for his amazing teaching skills and assistance with chemistry questions related to the neurochemical project.


Many thanks to the Vanderbilt University Neurochemistry Core, supported by the Vanderbilt Brain Institute and the Vanderbilt Kennedy Center, which performed the neurotransmitter sample analysis.

I would also like to acknowledge the financial support from the National Institute on Deafness and Other Communication Disorders of the National Institutes of Health, through R01 DC00937 awarded to Dr. Jeffrey Wenstrup, and the Fall 2018 and Spring 2019 research awards from the Kent State Graduate Student Senate awarded to me.

Reaching this stage would not have been possible without the support of my family and friends: my siblings, who through their steadfast support and encouragement helped me overcome challenges and the difficulty of being away; and my friends and companions in travel adventures, Parastoo Maleki, Daniel Gavazzi, Ashley Brillhart, and Parisa Shabani, thank you for your contagious positivity and energy, and for the lively discussions over my research challenges and troubleshooting process. To the countless others who supported me on challenging days, for your wisdom and support, I am always grateful.

To my friend Will Hamlet, who helped me at the start of this adventure. Your absence is truly felt.


CHAPTER I INTRODUCTION

The overall goal of this project is to improve our understanding of the neural mechanisms underlying the processing of emotional vocal communication. Emotional communication is the process of using cues (e.g., verbal or nonverbal) to exchange information about the affective state of the sender (happiness, sadness, excitement, fears, threats, etc.), which can influence the state and behavior of listeners (Planalp, 1998).

To study the brain circuitry involved in emotional vocal communication, we need to understand: (i) what cues are used by the sender to convey information about its internal emotional state, (ii) how vocal stimuli utilizing these cues influence behavioral responses in the listener, and (iii) how these cues influence the brain centers involved in processing emotional vocalizations and shaping behavioral responses to them. Our focus in neural mechanisms is on the amygdala, an emotional center in the brain involved in processing biologically significant sensory information and shaping appropriate behavioral responses to it. We investigate the release into the amygdala of modulatory neurochemicals (i.e., neuromodulators), since these neuromodulators both contain information about an animal's internal state and alter processing in response to sensory stimuli. Further, we address these questions in a mouse strain, CBA/CaJ, whose vocal communication is well understood, since studying such questions in humans cannot address the underlying neural mechanisms of emotional vocal communication. The general hypothesis is that vocalizations corresponding to higher arousal or intensity of social interaction carry cues that represent the internal state of the vocalizing animal, and that these cues can change the release of neuromodulators in the BLA and, thus, behavioral responses in the listening animals.

Vocalization as a tool for emotional expression

Humans and other animals use vocal communication to express thoughts, internal states, and emotions during social interaction. Whether these vocalizations convey a positive message (such as play or courtship) or express aversion (such as distress and pain), they share a common trait: they contain cues that signal valence (a value that leads to approach and consummatory behaviors (positive valence) or to defensive or avoidance behaviors (negative valence)) (Wenstrup et al., 2020). Such cues are generally represented in the segmental features of vocal communication (words or syllables) and in suprasegmental features that contribute to prosody, including spectral, temporal, and amplitude cues. Both the choice of syllables and the prosodic features of vocalizations can be affected by the internal state and emotions of the sender (Litvin et al., 2010; Brudzynski et al., 2010; Brudzynski et al., 2011; Ehret et al., 2013). For instance, when seeing a threat or danger, animals express warning using "alarm calls". These types of vocal signals usually carry spectral cues such as broad frequency bands and high amplitude, and they are easy to localize. Thus, they can be used by animals to warn conspecifics of an approaching predator (Seyfarth et al., 1980; Manser et al., 2002). Distress calls and other aversive vocalizations share similar characteristics in terms of their frequency range and fast rise time (Litvin et al., 2010; Ehret et al., 2013).

Mating vocalizations, unlike aversive calls, are often higher in frequency, and their shorter wavelengths make them suitable for targeting a possible mate in proximity (Brudzynski et al., 2010; Ehret, 2013). Their frequency range targets the hearing range of the intended mate from the same species and is modified accordingly (Brudzynski et al., 2010). These vocalizations use cues such as sequencing and ordering to convey the message to the listener (Holy and Guo, 2005; Scattoni and Branchi, 2010). Furthermore, features such as the rate of syllable emission, syllable duration, fundamental frequency, and loudness carry information related to the arousal and excitement of the sender (Gadziola et al., 2012b; Zimmermann et al., 2013; Gaub et al., 2016), possibly due to their modulation by the autonomic nervous system (Mortillaro et al., 2013).


Vocal communication of emotions in mice

In mice, as in other species, cues such as syllable types and spectrotemporal characteristics convey information linked to the internal affective state of the sender. Syllable categories in mice include ultrasonic vocalizations (USVs) with many syllable subtypes, low frequency harmonic (LFH) calls, mid-frequency vocalizations (MFVs), and Noisy calls (Fig. 1.1). These are used for social communication in several behavioral contexts (Holy & Guo, 2005; Chabout et al., 2015; Grimsley et al., 2011 & 2016; Hanson and Hurley, 2016; Gaub et al., 2016; Sangiamo et al., 2020).

During mating and same-sex interactions, USVs are the predominant vocalization type (Ehret et al., 2013), and their short wavelengths are used to communicate with nearby subjects (Porges and Lewis, 2010) (Fig. 1.1A). During social interactions, the spectrotemporal characteristics of USV syllables appear to be modulated by various contexts. Duration, for instance, is heavily affected by the nature of the social interaction, such that USVs emitted in a mating context are significantly longer than those emitted in restraint or isolation (Grimsley et al., 2016; Lefebvre et al., 2020). USVs emitted during male fighting are longer than those emitted during fleeing (Sangiamo et al., 2020). Further, the complexity of USVs during mating is affected by the arousal level of the male (Hanson and Hurley, 2012; Gaub et al., 2016).


Figure 1.1 Four categories of vocalizations emitted by mice. A. Ultrasonic Vocalizations (USVs), B. Low Frequency Harmonic calls (LFHs), C. Mid-Frequency Vocalizations (MFVs). D. Noisy calls. Abbreviation: dB (FS), intensity in decibels relative to a full-scale value.

The other vocal categories in mice utilize lower frequencies, with longer wavelengths that can travel further distances (Porges and Lewis, 2010). LFH and Noisy calls always contain broad frequency bands, but MFVs may be either narrowband or broadband. LFH calls are emitted by females during mating (Grimsley et al., 2013; Ronald et al., 2020) and by both male and female mice under distress (Chen et al., 2009; Ehret, 2013; Grimsley et al., 2013). With a fundamental frequency below 5 kHz, this syllable shows harmonic stacks that can extend up to 100 kHz (Grimsley et al., 2011) (Fig. 1.1B). These syllables have significantly longer durations in a mating context than in isolation or stress (Grimsley et al., 2016). Another context related to LFH emission is same-sex interaction, where a defeated male mouse emits this syllable type in a series, at high intensities (near 80-90 dB SPL) and durations ranging between 30-300 ms (Ehret et al., 2013). Despite the predominant emission of these calls in aversive contexts, their meaning is context-dependent, such that approach or avoidance can be modulated by other sensory cues (Grimsley et al., 2013).

MFVs are syllables that occur predominantly during restraint stress and are linked to increased blood corticosterone (Grimsley et al., 2016). They may occur as a single spectral component in the 10-18 kHz range, as a multi-harmonic signal with a 10-18 kHz fundamental (Fig. 1.1C), or as a signal with harmonic and non-linear features (Grimsley et al., 2016). Noisy calls have been recorded mostly during isolation (Fig. 1.1D) (Grimsley et al., 2016).

While these studies strongly support the use of affective prosody and syllable type in mouse vocal communication, they often limit their analysis to a single syllable category and, usually, to some USV subtypes. Thus, syllable-by-syllable changes in spectrotemporal characteristics within one emotional context are generally not addressed. As vocal communication contains a variety of syllable categories as well as spectrotemporal changes that reflect a sender's internal state, understanding these cues in mouse communication requires addressing all these characteristics in all syllable categories.

The experiments in Aim 1 of this dissertation address this issue. By categorizing mouse mating behavior based on interaction intensity, and then categorizing the vocalizations emitted in each of these states, these experiments identify cues that convey the affective state of the caller across all vocalization types emitted in each state. We hypothesize that the composition, intensity, and spectrotemporal characteristics of syllable types emitted by mice during mating are influenced by the intensity of the social interaction during which those syllables are emitted. Experiments in this aim provide the tools for identifying vocal sequences reflecting higher arousal mating interactions, to be used as playback stimuli for the behavioral and neurochemical studies in Aims 2 and 3.

The effect of vocalizations on the receiver

Expression is half of vocal communication; reception, analysis, and response form the other half. The receiver's response to the sender's vocal expression of emotions, e.g., fear, distress, appeasement, and pleasure, involves changes in the receiver's emotional state, which can then result in behavioral responses. For instance, animals such as meerkats and vervet monkeys emit alarm calls that convey information about both the predator type and the urgency of danger (Seyfarth et al., 1980, 1990; Manser et al., 2002). Hearing these calls elicits defensive behaviors such as escape, avoidance, and hiding (Litvin et al., 2010). Conversely, other vocalizations may trigger approach behavior. For example, the sound of crying infants results in parental approach and attention in humans (Li et al., 2018; Gholampour et al., 2020). Further, rodent pups under stress, cold, or hunger emit vocalizations that trigger maternal/paternal nest building, licking, and crouching behaviors that help nursing (Brudzynski, 2009; Scattoni et al., 2009; Liu et al., 2013). These calls, when produced under isolation, cause retrieval behavior in mothers (Scattoni et al., 2009; Lahvis et al., 2011; Valtcheva and Froemke, 2019).


During mating behavior, female mice prefer vocalizing males over devocalized males (Pomerantz et al., 1983; Portfors and Perkel, 2014). Recent studies have shown that playback of courtship USVs triggers a female's approach (Hammerschmidt et al., 2009), and these vocalizations are considered important in maintaining a female's proximity and reducing aggression during copulation (Pomerantz et al., 1983). Furthermore, the patterns of USVs during social interactions can affect the behavior of socially engaged mice but not the behavior of other animals that observe the interaction (Sangiamo et al., 2020). Such impact of vocalizations on the listener is not limited to USVs and is further observed in response to broadband vocalizations. For instance, Niemczura (2019) reported increased anxiety-related behavioral responses and elevated corticosterone in male and female mice in response to synthetic sequences of broadband calls such as LFHs and MFVs.

While our work is informed by the extensive literature on listeners' responses to various vocal categories and acoustic cues, this project seeks specifically to understand how vocalizations with opposing valence, carrying prosodic and semantic information in natural sequences, can influence the behavior of the listener. Work in Aim 1 and previously (Grimsley et al., 2016) allows us to generate highly salient exemplars of vocal signals emitted during mating and restraint. These exemplars are then used in the Aim 2 playback experiments, in which the behavioral responses of male and female mice to these negative and positive affect vocalizations are analyzed. We hypothesize that oppositely valenced vocal sequences, carrying cues reflecting the internal state of vocalizing animals, can change behavioral responses in a sex- and context-specific manner. Findings of this aim will help us understand the changes in neurochemical release in response to these vocalizations in Aim 3.

Amygdala and emotional processing

The limbic system is a collection of several functionally and anatomically interconnected structures that regulate autonomic, endocrine, and motor functioning in response to emotional stimuli (Fig. 1.2) (LeDoux, 2000 & 2007). The amygdala, as part of the limbic system, is a collection of interconnected nuclei in the medial temporal lobe that is believed to be involved in shaping behavioral responses to biologically significant sensory information (LeDoux, 2000; Davis and Whalen, 2001; Sah et al., 2003; Phelps and LeDoux, 2005). For this role, sensory information reaches the lateral nucleus of the amygdala (LA), and to a lesser extent the basal nucleus (BA), via projections from the sensory thalamus and cortex (Fig. 1.2A). These two nuclei are collectively referred to as the basolateral amygdala (BLA). The BLA integrates inputs related to several sensory modalities, polysensory and limbic-related cortices, and neuromodulatory inputs that provide information related to context and internal state (LeDoux et al., 1990; McDonald, 1998; Sah et al., 2003; Phillips et al., 2003; Gabbott et al., 2005; LeDoux, 2007; Janak and Tye, 2015; Likhtik and Johansen, 2019; Wenstrup et al., 2020). These connections help the amygdala to form associations between current sensory inputs and past experiences (Sah et al., 2003) and to define the salience and valence of these inputs. The BLA then projects to brain centers that mediate internal and external responses to this integrated information (Fig. 1.2B) (Cardinal et al., 2002; Jimenez and Maren, 2009; Namburi et al., 2015; Beyeler et al., 2016 & 2018).


Figure 1.2 General connections of the amygdala. A. Major inputs. B. Intra-amygdalar projections and outputs of the basolateral and central amygdala. Abbreviations: B, basal nucleus of amygdala; CeA, central nucleus of amygdala; itc, intercalated cell group; La, lateral nucleus; M, medial nucleus of amygdala. Modified from LeDoux (2007; see Wenstrup et al., 2020).

Recent findings suggest that within the BLA, neurons exhibit preferential responses to aversive and appetitive stimuli. These preferences are linked to intermingled subpopulations of BLA neurons that project to the brain areas involved in shaping behavioral responses to appetitive (nucleus accumbens (NAc)) and aversive (central nucleus of amygdala (CeA)) stimuli (Zhang et al., 2013; Namburi et al., 2015). It has been suggested that, through associative learning and previous experiences, NAc and CeA projectors undergo opposing patterns of synaptic strengthening, which differentiate the positive and negative valence of these sensory stimuli and result in avoidance or consummatory behaviors (Namburi et al., 2015). This integrated anatomical network helps the amygdala perform its role in emotional processing, behavior, and perception (Amaral, 2003; Todd and Anderson, 2009; Janak and Tye, 2015; Di Ciano and Everitt, 2004).

Amygdala role in processing acoustic communication

There is a large body of work over several decades exploring the role of the amygdala in sound-evoked associative learning, termed auditory fear conditioning (e.g., LeDoux et al., 1983, 1985, 2000; Li et al., 1996; Collins and Pare, 2000; Goosen et al., 2000; Kim et al., 2007; Costafreda et al., 2008; Tye et al., 2008; Zhang et al., 2013; Genud-Gabai et al., 2013; Gore et al., 2015; Janak and Tye, 2015; Beyeler et al., 2016 & 2018; O'Neill et al., 2018). Figure 1.3 provides an auditory-centric view of the amygdala connections involved in processing and shaping behavioral responses to meaningful affective sounds, including vocalizations.


Figure 1.3 An auditory-centric view of connections of the amygdala. A. The analysis of acoustic stimuli including communication signals by the amygdala depends on auditory inputs (blue) and others that contextualize the interpretation of acoustic signals: neuromodulatory centers (red dashed lines) and sensory and limbic cortex (green). B. Amygdala outputs (in black) that affect auditory function include direct projections to auditory structures and indirect projections through neuromodulatory centers (red). Dashed lines indicate these amygdalo-auditory pathways. Adapted from Wenstrup et al., 2020.

The involvement of the amygdala in processing affective sounds is well supported by both human and animal studies. Human studies, for example, have shown amygdala activation not only in processing music (Blood and Zatorre, 2001; Koelsch et al., 2006; Baumgartner et al., 2006; Ball et al., 2007; Remedios et al., 2009) and laughter and crying (Sander and Scheich, 2001 & 2005; Fecteau et al., 2007; Fruhholz et al., 2016), but also in emotional prosody processing (Leitman et al., 2010; Frühholz et al., 2012; Sheppard et al., 2020). Human fMRI studies, for instance, have shown strong activation of the amygdala in response to negatively valenced emotional vocalizations such as angry and fearful words (Pannese et al., 2016), as well as positively valenced words (Hamann and Mao, 2002; Zald, 2003).

Animal studies, on the other hand, have helped in understanding the detailed neural mechanisms of this process. Studies in rodents (Parsana et al., 2012; Grimsley et al., 2013), bats (Naumann and Kanwal, 2011; Peterson and Wenstrup, 2012; Gadziola et al., 2012b, 2016; Mariappan et al., 2016), and primates (Gil-da-Costa et al., 2004; Remedios et al., 2009; Payne & Bachevalier, 2019) have established that amygdalar neurons respond to vocalizations of either negative or positive valence. Such valence discrimination could extend to fine subtypes of each category of positive and negative vocalizations (Wenstrup et al., 2020).

Another factor that influences the amygdala's response to vocal stimuli is the surrounding context (Wenstrup et al., 2020). Several human and animal studies have supported this point by revealing how changes in the cues accompanying vocal stimuli can alter amygdalar neuronal responses. For instance, the gender of the speaker (Fecteau et al., 2007; Bach et al., 2008) or self- vs. other-emitted vocalizations (Matsumoto et al., 2016) generate different response patterns in amygdalar activation or in individual neurons. When the effect of acoustic context was evaluated, BLA neurons responded differently to a syllable presented alone vs. in the context of a natural sequence with other syllables (Gadziola et al., 2016). Further, adding other sensory information such as visual (Ethofer et al., 2006; Klasen et al., 2011; Payne & Bachevalier, 2013 & 2019), olfactory (Grimsley et al., 2013), or somatosensory cues (Morrow et al., 2019) can alter response patterns to vocalizations in BLA neurons.

These findings strongly support a crucial role for the amygdala in processing the valence and context of emotional speech and vocalizations. Therefore, it is not surprising to see its involvement in disorders such as autism spectrum disorder (ASD) (Miller et al., 2005; Abrams et al., 2013; Rosenblau et al., 2017), post-traumatic stress disorder (PTSD) (Forcelli et al., 2017; Schneider et al., 2019), and schizophrenia (Leitman et al., 2010a,b; Gold et al., 2012), in which emotional responses to vocalizations are altered. Animal studies further support this point by showing that selective lesions of the BLA reduce responsiveness to emotional vocalizations (Newman et al., 1997; Schönfeld et al., 2020) and alter vocal behavior (Jürgens and Ploog, 1970; Matsumoto et al., 2012; Ma and Kanwal, 2014; Green et al., 2018) in the affected species.

Neuromodulation of auditory and emotional processing

To process the valence (affective value) and salience (significance) of sensory information, as stated earlier, the amygdala utilizes contextual information available in sensory input, but it also appears to depend on internal state-related information provided by neuromodulatory inputs to the BLA. Neuromodulators in the nervous system play a crucial role in modulating the action of the main excitatory (glutamate) and inhibitory (GABA) neurotransmitters. Unlike neurotransmitters, which work at close range in the synaptic cleft, neuromodulators such as acetylcholine (ACh), dopamine (DA), norepinephrine (NE), and serotonin (5-HT) may use widely distributed projections and volume transmission to have a larger impact on the nervous system. Via such extended impact on brain tissue, and by taking advantage of a slower release process, they mediate the effect of context and state on sensory information processing (Bradley et al., 2005; Wang & Pereira, 2016).

In the auditory system, cholinergic, dopaminergic, serotonergic, and noradrenergic inputs modulate neuronal responses to sound (Schofield and Hurley, 2018). These neurochemicals thus influence vocalization processing in auditory nuclei such as the inferior colliculus (Hurley and Pollak, 2005) and the frequency tuning properties of auditory cortical neurons (Metherate, 2011; Edeline et al., 2011). Neuromodulators can further be involved in the emission of affective communication calls. For example, cholinergic and dopaminergic pathways have been shown to be involved in emitting alarm and affiliative vocalizations, respectively (Brudzynski, 2007; Wang et al., 2008).

Within the BLA, several neuromodulatory systems have been implicated in mediating cue-behavior associations, including DA (See, Kruzich and Grimm, 2001), ACh (See et al., 2003; Jiang et al., 2016; Unal et al., 2015), NE (Berlau and McGaugh, 2006), and 5-HT (Macedo et al., 2005; Bradley et al., 2005).

ACh is a major neuromodulator impacting many emotional processing centers, including the amygdala. The role of ACh in amygdala processing is reciprocal and occurs in two steps: (i) The BLA receives strong cholinergic inputs from the basal forebrain (Carlsen et al., 1985; Woolf and Butcher, 1982). During sensory processing, this input has been shown to modulate the processing of aversive cues and associative learning in the BLA (Unal et al., 2015; Jiang et al., 2016). (ii) The amygdala influences cortical auditory information processing through indirect cholinergic modulation of cortical areas via the CeA (see Fig. 1.3B). This results in cholinergic modulation of learning and sensory processing in cortical areas such as auditory cortex (Sah et al., 2003). Overall, ACh can play a strong modulatory role in providing contextual information to BLA neurons and in influencing sensory processing via the BLA in other brain areas.

DA is another important neuromodulator influencing sensory processing in BLA neurons. The amygdala receives dopaminergic inputs from the ventral tegmental area (VTA) (Asan, 1998). Dopaminergic modulation has been shown to enhance the excitability of BLA neurons (Kröner et al., 2005) and to influence appetitive behaviors as well as fear extinction via effects on BLA neurons (Di Ciano et al., 2004; Ambroggi et al., 2008; Lutas et al., 2019; Likhtik and Johansen, 2019). As DA influences the background activity of amygdalar neurons (Rosenkranz and Grace, 1999), a mechanism that is linked to selective responses to social vocalizations in the BLA (Gadziola et al., 2016), DA can modulate BLA neuronal responses to vocal stimuli. Blockade of DA receptors in the BLA significantly changes associative learning in behaving animals (Lamont and Kokkinidis, 1998).

Other neuromodulators such as NE and 5-HT also modulate BLA responses to biologically significant sensory inputs. For example, increased NE levels in the BLA during stress-related experiences enhance learning and consolidation of tasks (McGaugh, 2002; Buffalari and Grace, 2007; Zhang et al., 2013), while blockade of 5-HT receptors impacts amygdala activation in response to emotional stimuli (Bigos et al., 2008; Bocchio et al., 2016).

Accurate interpretation of, and behavioral responses to, vocalizations depend on the contextual information surrounding vocal signals. Given the roles of neuromodulators in shaping sensory processing in aversive and appetitive associative learning, and their influence on auditory processing throughout the brain, they likely shape vocal processing in BLA neurons in a context-dependent manner. However, the mechanisms by which this contextual modulation of vocal information processing occurs in the BLA have not yet been explored. To address this question, using the vocal stimuli developed in the earlier aims, we examine in Aim 3 neuromodulator release into the BLA in response to aversive and appetitive vocalizations. We hypothesize that positive- and negative-affect vocal sequences associated with mating and restraint experiences differentially modulate the release of DA, ACh, NE, and 5-HT, such that the release of DA in response to mating vocalizations will increase while the other neuromodulators decrease, whereas hearing aversive vocalizations will enhance the release of NE, 5-HT, and ACh while reducing DA.

Summary of specific aims of the dissertation

The general goal of the current dissertation is to understand how the brain, using contextual cues present in emotional vocalizations, shapes reactions to such stimuli. Our overall hypothesis is that the sender's emotional state is represented in acoustic cues in the emitted vocalizations, and that these cues, which differ between positive and negative natural vocal sequences, are able to change behavioral responses and the release of neurochemicals into the BLA in sex- and context-specific ways.

Aim 1: To investigate vocal acoustic features representing the intensity of mating interactions in mice. Previous work has shown that not only syllable types (Grimsley et al., 2016) but also the spectrotemporal features of syllables differ based on the behavioral context in which the vocalizations are emitted (Gaub et al., 2016; Sangiamo et al., 2020). I hypothesized that the internal state, reflected in the intensity of the mating interaction, can alter the syllable composition and spectrotemporal characteristics of vocalizations in mice. To test this hypothesis, in the Aim 1 experiments I first recorded the behaviors and vocalizations of mice during mating. Behaviors were then categorized into higher and lower intensity interactions, and the vocalizations emitted during each were classified accordingly. The acoustic features and syllable types in each category were analyzed and compared.

Aim 2: To assess behavioral responses in male and female mice to negative- and positive-affect vocal sequences. These experiments address the behavioral responses of listening animals to higher intensity mating and restraint vocalizations. I hypothesized that salient, valenced vocalizations reflecting higher arousal states in the sender change the internal state and behavior of listening male and female mice in distinct ways. Using the characteristics of higher arousal mating vocalizations identified in Aim 1, and the features described for restraint vocalizations by Grimsley et al. (2016), highly salient appetitive and aversive vocal sequences were developed. These vocalizations were then used in a playback experiment in which the behavior of listening male and estrous female mice was video-recorded and analyzed.

Aim 3: To assess the changes in neurochemicals in the BLA of male and female mice in response to emotion-laden vocal sequences. Several neuromodulatory centers in the brain project to the BLA and modulate sensory processing in its neurons. Release of neurochemicals in the BLA is linked to the internal state of the animal and to contextual information related to sensory inputs. I hypothesized that higher intensity negative- and positive-affect vocalizations change the patterns of neurochemical release in the BLA in distinct ways. I tested this hypothesis using microdialysis to sample extracellular fluid over several hours before, during, and after playback of vocal sequences. The concentrations of neurochemicals were analyzed using a liquid chromatography-mass spectrometry (LC/MS) technique.


CHAPTER II MATERIALS AND METHODS

Animals

Experimental procedures were approved by the Institutional Animal Care and Use Committee at Northeast Ohio Medical University (protocol number 18-09-207). A total of 83 adult CBA/CaJ mice (P90-P180), male and female, were used for this study.

Animals were kept on a reversed dark/light cycle and food and water were provided ad libitum except during the experiment. Experiments were performed during the dark cycle and animals were singly housed during the week of experiments.

Prior to each experiment on female mice, the estrous phase was evaluated. For sterile vaginal lavage, vaginal smear samples were collected using glass pipettes filled with double-distilled water and placed on a slide. Samples were then stained using crystal violet and coverslipped for examination under a microscope. The estrous stage was determined based on the predominant presence of one of the major cell types: squamous epithelial cells (estrous), nucleated cornified cells (proestrous), or leukocytes (diestrous) (McLean et al., 2012). To confirm that the estrous stage did not change during the experiment day, vaginal smear samples taken before and after each experiment were compared.


Acoustic methods

Vocalization recording

Vocal recordings were performed using ultrasonic condenser microphones (CM16/CMPA, Avisoft Bioacoustics, Berlin, Germany) in a single-walled acoustic chamber (Industrial Acoustics, New York, NY) lined with anechoic foam. Microphones were connected to a multichannel amplifier and A/D converter (UltraSoundGate 416H, Avisoft Bioacoustics). The gain of each microphone was independently adjusted once per recording session to optimize the signal-to-noise ratio (SNR) while preventing saturation of the signals. Acoustic signals were digitized at 500 kHz and 16-bit depth, monitored in real time with RECORDER software (Version 5.1, Avisoft Bioacoustics), and Fast Fourier Transformed (FFT) with a 512-point window. Recordings were obtained in an open-topped plexiglass chamber (width, 28 cm; length, 28 cm; height, 20 cm), and all recordings were performed in the dark. A night vision camera (VideoSecu Infrared CCTV), centered 50 cm above the bottom of the test box, recorded behaviors synchronized with the vocal recordings (VideoBench software, DataWave Technologies, version 7).

Mating vocalization recordings

Sixteen animals (8 male-female pairs) were used in this experiment. Each recording session lasted 30 minutes. First, a male mouse was introduced to the test box; then, after 5 minutes of habituation, a female mouse was placed with the male. Mating vocalizations and behaviors were recorded using two ultrasonic microphones placed 30 cm above the bottom of the recording box and 13 cm apart. The gains of the microphones were adjusted to optimize the SNR for all syllable types, independent of animal location during interactions.

Restraint vocalization recordings

Six mice (male and female) were briefly anesthetized with isoflurane and then placed in a restraint jacket, as described previously (Grimsley et al., 2016). Vocalizations were recorded for 30 minutes while the animal was suspended in the recording box. Since these vocalizations are usually emitted at low intensity, the recording microphone was positioned 2-3 inches from the snout to obtain the best SNR.

Mating vocalization analysis

Offline analysis of vocal recordings during mating used Avisoft-SASLab Pro (version 5.2.12, Avisoft Bioacoustics) with a Hamming window, a 1024-point FFT size, and 98.43% overlap. The channel with the higher amplitude signal was analyzed spectrographically for several acoustic features. Since automatic syllable tagging could not distinguish some syllable types, such as noisy calls and mid-frequency vocalizations (MFVs), from background noise, we manually tagged the start and end of each syllable, then examined spectrograms to measure several acoustic features and classify syllable types based on Grimsley et al. (2011). Vocalizations emitted during male-female interactions by mouse pairs were subsequently assigned to one of two behavioral categories described in a later section: a lower intensity stage of mating featuring mutual sniffing, genital sniffing, and exploratory behaviors, and a higher intensity stage of mating including mounting and head-sniffing (see Table 2.1 for behavioral descriptions).
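For reference, the stated settings imply the following time and frequency resolution. This arithmetic sketch is derived only from the parameters above (the actual Avisoft project settings may differ):

```python
# Resolution implied by the stated spectrogram settings
# (500 kHz sampling, 1024-point FFT, 98.43% overlap).
fs = 500_000                              # sampling rate (Hz)
nfft = 1024                               # FFT window length (samples)
overlap = 0.9843                          # fractional window overlap

window_ms = 1000 * nfft / fs              # ~2.05 ms analysis window
hop = round(nfft * (1 - overlap))         # ~16 samples between frames
hop_us = 1_000_000 * hop / fs             # ~32 us time resolution
freq_res = fs / nfft                      # ~488 Hz per frequency bin

print(f"window {window_ms:.2f} ms, hop {hop_us:.0f} us, {freq_res:.0f} Hz/bin")
```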

The following acoustic features were extracted (a minimal computation sketch follows the list):

• Interval (inter-syllable interval): The silent interval from the end of one syllable to the start of the next syllable.

• Duration: The duration of each syllable from start to end.

• Minimum frequency of a syllable: The minimum frequency, in Hz, measured from the start to the end of the syllable.

• Maximum frequency of a syllable: The maximum frequency, in Hz, measured from the start to the end of the syllable.

• RMS amplitude: The overall energy of the syllable waveform, calculated as the root mean square (RMS) of the entire waveform.

• Peak-to-peak amplitude: The peak-to-peak amplitude in volts.
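As a concrete illustration of the amplitude and temporal measures above, the sketch below computes them from a tagged waveform segment. It assumes a NumPy array for the recorded channel and manually tagged syllable boundaries; the function name and inputs are illustrative, and the frequency extremes (which were read from the spectrogram) are omitted:

```python
import numpy as np

def syllable_features(wave, fs, t_start, t_end, next_t_start=None):
    """Compute the amplitude/temporal features listed above for one
    manually tagged syllable (times in seconds, wave as a 1-D array).
    Frequency extremes came from the spectrogram and are omitted here."""
    seg = wave[int(t_start * fs):int(t_end * fs)].astype(float)
    feats = {
        "duration_s": t_end - t_start,
        "rms_amplitude": float(np.sqrt(np.mean(seg ** 2))),
        "peak_to_peak_v": float(seg.max() - seg.min()),
    }
    if next_t_start is not None:
        feats["isi_s"] = next_t_start - t_end  # silent gap to next syllable
    return feats
```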

Vocalizations were divided into four syllable categories based on Grimsley et al. (2011, 2016) (Fig. 2.1):

• Low frequency harmonic (LFH) calls: Syllables with a fundamental frequency below 5 kHz and harmonic stacks, usually extending into the ultrasonic range.

• Mid-frequency vocalizations (MFVs): Syllables with a fundamental frequency between 5-18 kHz, sometimes with harmonic stacks and chaotic elements extending into the ultrasonic range.


• Noisy calls: Warbled syllables with noisy components that often cover a wide frequency range (10-120 kHz).

Figure 2.1 Sample spectrograms of four categories of syllables emitted by mice. A. Low Frequency Harmonic (LFH) syllable. B. Mid-Frequency Vocalization (MFV). C. Noisy syllable. D. Ultrasonic Vocalizations (USVs); this group contains various types. The upper row of (D), from left to right, represents down-FM, up-FM, flat, complex, and reverse chevron. The lower row of (D), from left to right, represents chevron, one-frequency-step, two-frequency-step, and more-than-two-frequency-step syllables. The white triangles point to harmonic components. dB relative to full scale (dB (FS)) of the spectrograms is shown in the color scale bar.


• Ultrasonic vocalizations (USVs): Vocalizations with a fundamental frequency above 30 kHz.

USVs were further divided into sub-categories as previously described (Grimsley et al., 2011); a rule-of-thumb sketch of these criteria follows the list. For each sub-category, the presence of harmonics, subharmonics, or deterministic chaos was further evaluated (see Fig. 2.1).

• Chevron: An inverted-U-shaped syllable whose highest frequency is at least 6 kHz greater than both its beginning and end frequencies.

• Reverse chevron: A U-shaped syllable whose beginning and end frequencies are more than 6 kHz above its lowest frequency.

• Flat syllable: A syllable with frequency changes of no more than 6 kHz.

• Short syllable: A syllable shorter than 5 ms.

• Complex syllable: A syllable with two or more directional frequency modulations of more than 6 kHz.

• Up-FM: An upwardly frequency-modulated syllable with more than 6 kHz difference between its beginning and end.

• Down-FM: A downwardly frequency-modulated syllable with more than 6 kHz difference between its beginning and end.

• One-frequency step: A two-element syllable in which one element is more than 10 kHz apart in frequency from the other, and the beginning of one element overlaps in time with the end of the other.


• Two-frequency steps: A three-element syllable in which the end time of each element overlaps with the beginning of the next, and each element is separated by more than 10 kHz from the following one.

• More than two-frequency steps: A syllable with more than three elements, in which the beginning time of each element overlaps with the end time of the previous one. Like the other frequency-stepped syllables, there is a ≥10 kHz frequency difference between successive elements.
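For illustration, the subtype rules above can be reduced to a small decision function. This is a simplified sketch built only from the stated criteria, using per-syllable summary measurements; it is not the procedure used in the dissertation, where syllables were classified manually against Grimsley et al. (2011), and the function name and inputs are hypothetical:

```python
def classify_usv(dur_ms, f_start, f_end, f_min, f_max,
                 n_elements=1, n_direction_changes=0):
    """Simplified decision sketch of the USV subtype rules above
    (frequencies in kHz, duration in ms). Frequency-step detection from
    element overlap timing is reduced to an element count."""
    if n_elements == 2:
        return "one-frequency step"
    if n_elements == 3:
        return "two-frequency steps"
    if n_elements > 3:
        return "more than two-frequency steps"
    if dur_ms < 5:
        return "short"
    if n_direction_changes >= 2:
        return "complex"
    if f_max - f_min <= 6:
        return "flat"
    if f_max >= f_start + 6 and f_max >= f_end + 6:
        return "chevron"
    if f_start >= f_min + 6 and f_end >= f_min + 6:
        return "reverse chevron"
    if f_end - f_start > 6:
        return "up-FM"
    if f_start - f_end > 6:
        return "down-FM"
    return "unclassified"

# e.g., a 40 ms syllable sweeping 60 -> 75 kHz is classified as up-FM:
print(classify_usv(40, 60, 75, 60, 75))
```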

Vocalization playback

Several context-specific exemplars of natural vocal sequences were used to form stimulus blocks for each context in playback experiments (Fig. 2.2). Exemplars were chosen based on high SNR, correspondence with the behavioral category by video analysis, and unique representation of the spectrotemporal characteristics of restraint (Grimsley et al., 2016) or mating (Figs. 3.1-3.6). Restraint stimulus blocks included seven vocal sequences emitted by restrained male or female mice, with durations ranging from 5.7-42.3 s, and contained all syllable types: USVs, LFHs, MFVs, and noisy syllables (Fig. 3.7). Mating stimulus blocks contained five exemplars of vocal sequences emitted during higher intensity mating interactions, ranging in duration from 15.0-43.6 s. Each block included all four vocal categories. For both contexts, each stimulus block had a 150 s duration. These blocks were repeated throughout the 20 min playback (Fig. 2.3).


Figure 2.2 Structure of stimuli in playback experiments. Each playback experiment session included 20 minutes of vocalization playback flanked by two 10-minute pre- and post-stimulus windows. Video was recorded for the whole 40-minute window. The vocal playback window included seven stimulus blocks, each of which included five vocal exemplars. An equal duration of silence, taken from the background of the same exemplar, was added after each vocal exemplar. A small snapshot of the spectrogram of vocalizations in each context is shown at the top of the figure.

Playback sequences were conditioned in Adobe Audition CC (2018), adjusted to a 65 dB SNR level, then normalized to 1 V peak-to-peak based on the highest amplitude syllable in the sequence. This maintained the relative syllable emission amplitudes within the sequence. For each sequence, an equal duration of background noise (i.e., no vocal or other detected sounds) from the same recording was added at the end of that sequence. A 5 ms ramp was added at the beginning and end of the entire sequence to avoid acoustic artifacts. Playback levels were calibrated using custom software (EqualizIR).
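The conditioning steps above (appending matched background, peak normalization, and 5 ms onset/offset ramps) can be summarized in a short sketch. This is an illustrative NumPy reconstruction, not the actual Adobe Audition workflow; `condition_sequence` and its arguments are hypothetical names, and the background segment is assumed to be at least as long as the vocal sequence:

```python
import numpy as np

def condition_sequence(seq, background, fs, ramp_ms=5.0):
    """Sketch of the conditioning described above: append an equal
    duration of background 'silence', normalize so the full sequence
    spans 1 V peak-to-peak (preserving relative syllable amplitudes),
    and apply 5 ms raised-cosine onset/offset ramps."""
    sig = np.concatenate([seq, background[:len(seq)]]).astype(float)
    sig = sig / (sig.max() - sig.min())          # 1 V peak-to-peak overall
    n = int(ramp_ms * 1e-3 * fs)                 # ramp length in samples
    ramp = 0.5 * (1 - np.cos(np.linspace(0.0, np.pi, n)))
    sig[:n] *= ramp                              # onset ramp
    sig[-n:] *= ramp[::-1]                       # offset ramp
    return sig
```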

Vocal sequences were converted to analog signals at 500 kHz and 16-bit resolution using DataWave (DataWave SciWorks, Loveland, CO), anti-alias filtered (TDT FT6-2, fc = 125 kHz), amplified (HCA-800II, Parasound), and sent to the speaker (LCY K100, Ying Tai Audio Company, Hong Kong). Each sequence was presented at a peak level equivalent to 85 dB SPL.

Behavioral methods

Behaviors during both contextual experience and playback were recorded using a night vision camera (480TVL 3.6mm, VideoSecu), centered 50 cm above the floor of the test box, and SciWorks (DataWave, Videobench version 7) for video acquisition and analysis.

Analysis of mating behaviors during vocal recordings

Video recordings of mating behavior were analyzed second-by-second and classified into lower and higher intensity mating interactions as described previously (Gaub et al., 2016; Heckman et al., 2016). Lower intensity mating interactions included mutual sniffing, genital sniffing, and exploring, while higher intensity mating interactions contained head-sniffing, attempted mounting, and mounting behaviors (Table 2.1).


Table 2.1. Behaviors identified for categorizing lower and higher intensity of mating interaction.

Vocal playback in behavioral experiment

Prior to this playback experiment, each animal underwent consecutive days of mating and restraint experience for 90 minutes each, in a counterbalanced design (Fig. 2.3).


Figure 2.3. Experimental design for behavioral experiments. Mice first experienced mating and restraint contexts on two consecutive days in a randomized counterbalanced design. Four days later, on the day of the experiment, after a 3-hour habituation, their behavior was recorded before, during, and after vocalization playback.

After three rest days, mice were observed during playback tests using either the mating or restraint vocalizations described above. On the day of playback testing, mice were placed in the experimental chamber and allowed to habituate for three hours. Then, video recording began 10 minutes before vocal playback (pre-stimulus), continued for 20 minutes during playback, and included another 10 minutes after playback ended (post-stimulus) (see Fig. 2.3). From the video recording, 11 behaviors were analyzed in 10-second intervals (Table 2.2; a minimal binning sketch follows the table).


Table 2.2. Behaviors analyzed during playback of mating and restraint vocalizations.
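A minimal sketch of this 10-second binning, assuming scored events have been exported as a time-stamped table (the column names and event times below are hypothetical; actual scoring used DataWave VideoBench):

```python
import pandas as pd

# Tally scored behaviors in 10-second bins across the 40-minute session
# (10 min pre, 20 min playback, 10 min post = 2400 s).
events = pd.DataFrame({
    "time_s":   [12.4, 615.0, 618.7, 1990.2],          # hypothetical times
    "behavior": ["rearing", "flinching", "self-grooming", "locomotion"],
})
bins = pd.cut(events["time_s"], bins=range(0, 2401, 10), right=False)
counts = events.groupby([bins, "behavior"], observed=True).size()
print(counts[counts > 0])
```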


Procedures related to microdialysis

Surgery

Mice were anaesthetized with isoflurane (2-4%, Abbott Laboratories, North Chicago, IL), and the hair overlying the skull was removed using depilatory lotion. A midline incision was made, and the skin was moved laterally to expose the skull. A craniotomy (≈1 mm²) was made above the basolateral amygdala (BLA) (at stereotaxic coordinates from bregma: -1.65 mm rostrocaudal, +3.43 mm mediolateral). A guide cannula (CMA-7, CMA Microdialysis, Sweden) was implanted to a depth of 2.6 mm below the cortical surface, above the BLA (Fig. 2.4A, day 2), then secured using dental cement and UV light. After surgery, the animal was returned to its cage, and its body temperature was maintained using a heating pad until it fully recovered from anesthesia; the cage was then returned to the CMU facility for a 4-day recovery period before microdialysis experiments.

Microdialysis

Microdialysis playback experiments were conducted four days after surgery to implant the guide cannula. Because male and female behaviors during restraint vocal playback were similar but differed during mating vocal playback, we conducted microdialysis experiments in males only for restraint vocal playback, but in both males and females for mating vocal playback. On the day before the experiment, the microdialysis probe (CMA-7, CMA Microdialysis, Sweden) was conditioned in 70% methanol and artificial cerebrospinal fluid (aCSF) (CMA Microdialysis, Sweden). On the day of the experiment, the probe, with a 1 mm membrane length and 0.24 mm outer diameter (MWCO 6 kDa), was inserted into the guide cannula (Fig. 2.4A-B, day 6).


Figure 2.4 Experimental design for microdialysis. A. Overall experimental design for one week of experiments; mice first experienced mating and restraint contexts on two consecutive days in a randomized counterbalanced design. A guide cannula was then implanted above the basolateral amygdala (BLA) (Day 2). Four days later, on the day of the experiment, a microdialysis probe was inserted into the guide cannula (Day 6) and the mouse was placed in the test box to habituate. B. Experimental paradigm on the day of the experiment: after several hours of habituation and aCSF equilibration, samples were collected before, during, and after vocalization playback and analyzed with the liquid chromatography/mass spectrometry (LC/MS) technique.


Using a spiral tubing connector (0.1 mm ID × 50 cm length) (CT-20, AMUZA Microdialysis, Japan), the inlet and outlet tubing of the probe was connected to the inlet/outlet Teflon tubing of the microdialysis lines. A swivel device for fluids (TCS-2-23, AMUZA Microdialysis, Japan), secured to a balance arm, held the tubing and facilitated the animal's free movement during the experiment.

To prevent degradation of the collected neurotransmitters, the outlet tubing was passed through ice to a site outside the sound-proof booth, where samples were collected on ice. To account for the dead volume of the outlet tubing, a flow rate of 1.069 µl/min was established at the syringe pump to obtain a 1 µl sample per minute. Animals were allowed to habituate, and the neurochemicals to equilibrate between the artificial cerebrospinal fluid (aCSF) in the probe and the brain extracellular fluid (ECF), for four hours. Samples collected during this time were measured for volume to ensure a consistent flow rate.
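One way to appreciate the dead-volume consideration is to compute the volume of the stated outlet line (0.1 mm ID × 50 cm) and the resulting transit lag. This arithmetic sketch is an interpretation based only on the dimensions given in the text, not necessarily the authors' exact correction:

```python
import math

# Dead volume of the outlet line and the lag between the probe and the
# collection site at the stated pump rate (1 mm^3 = 1 ul).
radius_mm, length_mm = 0.05, 500.0
dead_volume_ul = math.pi * radius_mm**2 * length_mm  # ~3.9 ul in the line
pump_rate_ul_min = 1.069
lag_min = dead_volume_ul / pump_rate_ul_min          # ~3.7 min transit lag
print(f"dead volume ~{dead_volume_ul:.1f} ul, lag ~{lag_min:.1f} min")
```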

Sample collection for data analysis began with four background samples, followed by two samples during playback of the restraint or mating vocal sequences (20 minutes total) and one or more samples after playback ended. Samples were collected at 10-minute intervals and stored in a −80°C freezer.

Samples were analyzed using a liquid chromatography (LC)/mass spectrometry (MS) technique (Fig. 2.4B) at the Vanderbilt University Neurochemistry Core. This method allows simultaneous measurement of the concentrations of acetylcholine (ACh), dopamine (DA) and its metabolites (3,4-dihydroxyphenylacetic acid (DOPAC) and homovanillic acid (HVA)), serotonin (5-HT) and its metabolite (5-hydroxyindoleacetic acid (5-HIAA)), norepinephrine (NE), gamma-aminobutyric acid (GABA), and glutamate in the same dialysate samples. However, due to the low recovery rates of NE and 5-HT from mouse brain, we were unable to track these two neuromodulators in this experiment.

Before each LC/MS analysis, 5 µl of the sample was derivatized using sodium carbonate, benzoyl chloride in acetonitrile, and an internal standard (Kennedy et al., 2016). LC was performed on a 2.0 × 50 mm, 1.7 µm particle Acquity BEH C18 column (Waters Corporation, Milford, MA, USA) with 1.5% aqueous formic acid as mobile phase A and acetonitrile as mobile phase B. Using a Waters Acquity Classic UPLC, samples were separated by a gradient of 98-5% of mobile phase A over 11 min at a flow rate of 0.6 ml/min, with delivery to a SCIEX 6500+ QTrap mass spectrometer (AB Sciex LLC, Framingham, MA, USA). The mass spectrometer was operated using electrospray ionization in positive ion mode. The capillary voltage and temperature were 4 kV and 350°C, respectively (Kennedy et al., 2016). Chromatograms were analyzed using MultiQuant 3.0.2 software (AB SCIEX, Concord, Ontario, Canada).

Verification of recording location

To verify the probe location after each experiment, the probe was perfused with 2% fluorescein isothiocyanate-dextran (MW 4 kDa) (Sigma) at a flow rate of 1 µl/min for 5 minutes. The location of the probe was then visualized in Nissl-stained sections and adjacent fluorescence-imaged sections. Sections were photographed using a SPOT RT3 camera and SPOT Advanced Plus imaging software (version 4.7) mounted on a Zeiss Axio Imager M2 fluorescence microscope. Adobe Photoshop CS3 was used to invert brightness levels and to adjust brightness and contrast globally. Only animals with more than 80% of the probe membrane located within the BLA were included in statistical analyses.

Statistical methods

All statistical analyses were performed using SPSS (IBM, versions 25 and 26). The overall hypothesis for the first aim was that acoustic features of vocalizations are influenced by the internal state of the vocalizing male and female mice. Vocalization data were analyzed using a full factorial linear mixed model with fixed effects of state, harmonicity, and syllable type, plus their interactions; animal was included as a random intercept. Dependent (response) variables included minimum and maximum frequency, duration, inter-syllable interval, peak-to-peak amplitude, and RMS amplitude. All syllables (single or overlapped, n=11822 syllables) were included to measure the proportion of the emitted syllable types in each behavioral state.

Overlapped syllables were then excluded from further analyses, and the remaining syllables (n=10715) were used in the linear mixed model.
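For readers who prefer code to prose, the sketch below expresses the same model structure in Python's statsmodels. The analyses reported here were run in SPSS, so this is only an illustration, and the column names (duration, state, harmonicity, syllable_type, animal) are hypothetical.

```python
# Minimal sketch of the linear mixed model (illustration only; the reported
# analyses used SPSS). Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

syllables = pd.read_csv("syllable_features.csv")  # hypothetical data file

# Fixed effects: state (low/high intensity), harmonicity, syllable type, and
# all of their interactions; random intercept per animal (mouse pair).
model = smf.mixedlm(
    "duration ~ state * harmonicity * syllable_type",
    data=syllables,
    groups=syllables["animal"],
)
result = model.fit()
print(result.summary())
```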

For behavioral and neurochemical analyses, generalized linear model (GLM) repeated-measures MANOVAs were used to track changes in behavior and in each neurochemical within animals over time and between experimental groups. For both analyses, where Mauchly's test indicated that the assumption of sphericity had been violated, degrees of freedom were corrected using Greenhouse–Geisser estimates of sphericity.
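The Greenhouse–Geisser correction can be computed directly from the data; the sketch below shows a from-scratch estimate of the epsilon used to rescale the degrees of freedom. SPSS reports this value automatically, so the code is purely illustrative, and the data matrix is hypothetical.

```python
# Minimal sketch of the Greenhouse-Geisser epsilon for a repeated-measures
# design (illustration only; SPSS computes this automatically).
import numpy as np

def greenhouse_geisser_epsilon(data: np.ndarray) -> float:
    """Epsilon from an (n_subjects, k_timepoints) matrix of repeated measures."""
    k = data.shape[1]
    S = np.cov(data, rowvar=False)  # k x k covariance across timepoints
    # Double-center the covariance matrix before applying the formula.
    S_dc = S - S.mean(axis=0) - S.mean(axis=1)[:, None] + S.mean()
    return np.trace(S_dc) ** 2 / ((k - 1) * np.sum(S_dc ** 2))

# Hypothetical data: 10 subjects x 4 samples (pre, during x2, post).
rng = np.random.default_rng(0)
data = rng.normal(size=(10, 4))
eps = greenhouse_geisser_epsilon(data)
# Corrected test: both df terms are multiplied by epsilon, e.g.
# F(eps * (k - 1), eps * (k - 1) * (n - 1)).
print(f"epsilon = {eps:.2f}")
```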


For the second aim, the hypothesis was that the valence of mating and restraint vocalizations changes the behavior of mice in a sex- and context-dependent manner.

The GLM for testing this hypothesis used time as a within-subject factor; sex, context, and their interaction were the between-subject factors. Dependent (response) variables included the behavioral measurements of flinching, pull-back posture, still and alert posture, stretch-attend posture, orientation to speaker, orientation to null, exploring speaker hole, exploring null hole, exploring speaker zone, exploring null zone, rearing, self-grooming, attending, and locomotion. Tail rattling and sleep were excluded from the analysis because they were rarely observed. A total of 29 mice were used in this experiment (female-mating=7, female-restraint=7, male-mating=9, and male-restraint=6).

For the third aim, the hypothesis was that the release of neurochemicals in the basolateral nucleus of the amygdala is differentially modulated by mating and restraint vocalizations in male and (estrous and diestrous) female mice. The GLM for testing this hypothesis used time as a within-subject factor, and sex, context, and estrous stage as the between-subject factors. Dependent (response) variables included the normalized concentrations of ACH, DA, 5-HIAA, GABA, glutamate, HVA, and DOPAC. A total of 31 mice were used in this experiment (estrous-female=8, diestrous-female=7, male-mating=9, and male-restraint=7).

All neurochemical data were normalized to the background level. The percentage change from background level was calculated as: % change from background = (100 × sample concentration in pg) / (background concentration in pg).


To assess background values, we used a single pre-stimulus sample immediately preceding playback. This simplified the presentation and did not change the outcomes of statistical tests compared with using three pre-stimulus samples.

The fluctuation in all background samples was ≤ 20%.
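A minimal sketch of this normalization is shown below, with hypothetical concentration values; note that under the formula above, a value of 100 corresponds to no change from background.

```python
# Minimal sketch of the background normalization (hypothetical values, in pg).
samples_pg = {"pre": 12.0, "during_1": 15.6, "during_2": 14.9, "post": 12.8}

background = samples_pg["pre"]  # single sample immediately preceding playback
percent_of_background = {
    label: 100.0 * conc / background for label, conc in samples_pg.items()
}
print(percent_of_background)
# {'pre': 100.0, 'during_1': 130.0, 'during_2': ~124.2, 'post': ~106.7}
# With this formula, 100 means no change from background.
```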

Values are represented as mean ± one standard error unless stated otherwise. Box plots indicate minimum, first quartile, median, third quartile, and maximum values.


CHAPTER III RESULTS

In social interactions involving acoustic communication signals, an animal receives and analyzes sensory information, compares it with previous experiences, identifies the meaning (salience and valence) of such information, and shapes appropriate behavioral responses to the signals.

In mammals, these integrated functions depend on brain circuits that include the amygdala, a region within the temporal lobe that is recognized to play a role in orchestrating responses to emotional sensory stimuli (LeDoux, 2000; McGaugh et al., 2002; LeDoux, 2003; Sah et al., 2003; Wenstrup et al., 2020). The amygdalar target of auditory input is the basolateral nucleus (BLA), which receives projections from auditory cortex and auditory thalamus (Figs. 1.2, 1.3) (LeDoux et al., 1991; Li et al., 1996; Sacchetti et al., 1999; Keifer et al., 2015). Through reciprocal connections with areas such as the hippocampus, the BLA compares received sensory information with previous experiences (Pitkänen et al., 2000). By integrating this information with other sensory inputs and inputs from other limbic areas, BLA neurons "decide" on the appropriate behavioral response (Namburi et al., 2015, 2016; Beyeler et al., 2016; Gründemann et al., 2019). Then, via projections to downstream targets such as the nucleus accumbens (NAc) (Ambroggi et al., 2008; Stuber et al., 2011) and the central nucleus of the amygdala (CeA) (Ciocchi et al., 2010), defensive or affiliative/appetitive behaviors occur.


Evidence suggests that the amygdala processes sensory information (Phelps & LeDoux, 2005; Orsini et al., 2013), including information related to vocalizations (Grimsley et al., 2013; Gadziola et al., 2016), in a context-dependent manner (Goosens et al., 2000; Sander et al., 2005; Wiethoff et al., 2009; Leitman et al., 2010; Matsumoto et al., 2012; Parsana et al., 2012; Grimsley et al., 2013; Gadziola et al., 2016; Li et al., 2017; Gründemann et al., 2019). In some cases, the contextual information clearly arises from inputs from other sensory modalities (e.g., somatosensory information in fear conditioning (Lanuza et al., 2004; McDonald et al., 1998) or olfactory input in responses to vocalizations (Grimsley et al., 2013)). In other cases, however, the contextual information is associated with an animal's internal state. Brain circuits involving modulatory neurochemicals (i.e., neuromodulators) have long been suggested to provide information linked to internal state and context that could modulate sensory processing (Bradley et al., 2005; Krichmar, 2008; Bergmann, 2012; Picciotto et al., 2012; Schofield and Hurley, 2018; Likhtik and Johansen, 2019). Neuromodulatory inputs to the BLA have also been extensively studied and shown to modulate the processing of sensory inputs (Rosenkranz & Amiel, 1999; Kruzich et al., 2001; See et al., 2003; Julie et al., 2003; Macedo et al., 2005; Kröner et al., 2005; Mitsushima et al., 2006; Hoebel et al., 2007; Arias-Carrión et al., 2007; Picciotto et al., 2012; Zhang et al., 2013; Unal et al., 2015; Jiang et al., 2016; Bocchio et al., 2016; Aitta-Aho et al., 2018). These inputs shape attention, emotion, and goal-directed behaviors (Krichmar, 2008; Naneix et al., 2012; Avery and Krichmar, 2017; Gielow and Zaborszky, 2017; Likhtik and Johansen, 2019). Despite this evidence, there is little understanding of the contextual information provided to the amygdala during emotional vocal communication. Our hypothesis is that emotional vocalizations elicit distinct patterns of neuromodulator release in the BLA, by which they shape the processing of meaningful sensory information. To test this hypothesis, we designed three experiments in a mouse model to understand the behavioral and neuromodulatory responses to emotional vocalizations.

Experiment 1: Identifying acoustic features associated with intense mating interactions

To study how vocalizations affect the release of neurochemicals in the BLA, we first sought to develop highly salient vocal stimuli with positive and negative valence. We had previously shown that relatively short periods of restraint elicit anxiety-related behaviors, increased release of the stress hormone corticosterone, and distinct patterns of vocalizations in mice (Grimsley et al., 2016). We used vocal sequences obtained from restrained mice to produce vocal stimuli with negative valence. To create vocal stimuli associated with appetitive behaviors and positive valence, we studied the vocalizations produced during mating behaviors.

We recorded behaviors and vocalizations (n=11822 syllables) of CBA/CaJ mice (n=8 male/female pairs) during 30-minute mating interactions. We first categorized behaviors between male/female pairs, separating these into lower and higher intensity stages of mating, as described in Chapter II (see Table 2.1) (Heckman et al., 2016). We then analyzed the acoustic features of vocalizations emitted during these two stages. Example sequences and features are shown in Figure 3.1. During low intensity mating behavior, vocalizations consisted mainly of ultrasonic vocalizations that were relatively infrequent, with long intervals between individual syllables or short syllable sequences (Fig. 3.1A-B, left). Most vocalizations were relatively short USVs (Fig. 3.1C), with lower intensity (Fig. 3.1D) and higher minimum frequency (Fig. 3.1E).

Vocalizations emitted during high intensity mating behaviors often included long sequences of USVs, mostly emitted by the male (Neunuebel et al., 2015; Heckman et al., 2016), intermingled with LFHs emitted by females (Grimsley et al., 2013; Keesom and Hurley, 2016). The high intensity mating interaction shown in Figure 3.1 represents the increase in interaction intensity during mounting attempted by the male (Fig. 3.1A, right). As the sequence progressed, the USVs became more complex, more frequent, longer, and more intense, with lower minimum frequency. LFH calls were emitted by the female during the mounting (Fig. 3.1, right).


Figure 3.1 Vocalizations are modulated by the intensity of mating interactions. Short sequences of vocalizations emitted by CBA/CaJ mouse pairs during low intensity (left) and high intensity (right) mating interactions. A. Sound spectrograms show syllables emitted during 3.5 s sequences in each behavior. Note different vertical axis scales for the two behavioral conditions. B-E. Acoustic features of each USV emitted in these sequences. B. Inter-syllable intervals (ISI) for USVs. Note different vertical axis scales for the two behavioral conditions. C. Duration of USVs (sec). D. Syllable intensity as measured in RMS voltage (expressed in decibels relative to full scale). E. Minimum frequency of USVs. During higher intensity mating behavior, male/female pairs produced more USV syllables and LFH calls. USVs had shorter ISIs, longer durations, higher intensity, and lower minimum frequency.


Across the recorded vocalizations obtained from the 8 mouse pairs, we observed these changes in syllable composition and acoustic features as a function of interaction intensity. Figure 3.2A compares the occurrence of each syllable category and specific USV syllable type during low and high intensity mating. Simple USV syllables (e.g., flat (70.4%) and FM types (64.6%)), as well as MFV (89.4%) and Noisy calls (96.7%), were much more likely to be emitted during low intensity mating behavior. In contrast, more complex USV syllables (e.g., stepped (68%), chevron (79.6%), and complex syllables (88.5%)) and LFH calls (91.5%) were produced more often during high intensity mating. Furthermore, the complex USV syllables were produced with multiple harmonic elements more commonly during high intensity mating (Fig. 3.2B). Overall, syllable complexity increased during high intensity mating behavior, in agreement with previous studies (Hanson & Hurley, 2012; Egnor & Seagraves, 2016; Gaub et al., 2016).



Figure 3.2 Syllable composition and complexity change with mating intensity. A. Bar graph shows percentages of each syllable type recorded across all low (navy blue) and high (red) intensity mating interactions. Note the higher percentage of complex syllables, chevrons, syllables with frequency jumps (steps), and LFH calls emitted during higher intensity mating. B. Frequency of occurrence of USV syllables in low (upper graph) vs. high (lower graph) intensity stages of mating interactions. Grey bars facing left show USV syllables with no harmonics. Orange bars facing right show syllables with at least one harmonic in addition to the fundamental. Note that many of the syllables shown in part A to be emitted preferentially during high intensity mating have more harmonic components (low intensity, n = 5540 syllables; high intensity, n = 5180 syllables).


To perform further analyses of spectrotemporal features of syllables, we removed from consideration all overlapping syllables, as occurs when a male-emitted USV overlaps with a female-emitted LFH (Fig. 3.1A, right). The results of these analyses are shown in Figures 3.3-3.6. Figure 3.3 shows that for all male/female pairs (significant main effect of state, n=8 mouse pairs and 10720 syllables; F(1, 10182) = 13, p<0.0001, linear mixed model; Fig. 3.3A) and across syllable types (significant interaction of state and syllable type, n=10720 syllables, 13 syllable types; F(12, 10190) = 1.85, p=0.03, linear mixed model; Fig. 3.3B), there was a marked reduction in ISI (and increased vocalization rate) as mating interactions became high intensity.

Sound duration increased substantially during high intensity mating in each male/female pair (n=8 mouse pairs, 10750 syllables; F(1, 10715) = 83.0, p<0.0001, linear mixed model; Fig. 3.4A). However, the increase in duration depended on syllable harmonicity (significant two-way interaction of state and harmonicity, n=10750 syllables; F(1, 10712) = 14.5, p<0.0001, linear mixed model). Between low and high intensity mating, the increases in duration were significantly larger for stepped USVs with harmonics (significant three-way interaction between state, harmonicity, and syllable type, t = 2.36, p=0.01, linear mixed model; Fig. 3.4B).


Figure 3.3 Inter-syllable interval (ISI) decreases with increased intensity of mating interactions. A. Population data representing average ISIs recorded from 8 mating pairs during low and high intensity stages of mating. Each colored line represents one mating pair (significant main effect of state, n=10183 syllables; F(1, 10182) = 13, ***p<0.001, linear mixed model). B. Mean ISI values for syllable categories and USV types as a function of mating intensity. ISI values showed an interaction between mating intensity and syllable type: the interaction of state and syllable type was significant for LFH (t(0.79, 10190) = 3.9, ***p<0.001) and short (t(0.79, 10192) = 2.65, **p<0.01) syllables; for other syllables this interaction effect was not significant. All values in figures are mean ± SEM.


Figure 3.4 Syllable duration increases with increased intensity of mating interaction. A. Population data representing average duration of all syllables recorded from 8 mating pairs for low and high intensity stages of mating. Each color represents one mating pair (n=10750 syllables; F(1, 10715) = 83, ***p<0.001, linear mixed model). B. Mean values for duration of all syllable categories and USV types as a function of mating intensity and syllable harmonicity. USVs with one frequency step and harmonic content showed a significant increase in duration with mating intensity (significant three-way interaction of harmonicity, state, and syllable type, t = 2.36, *p<0.05, linear mixed model). This effect was not significant for other syllable types. Color codes represent harmonicity of syllables: orange, harmonic; grey, non-harmonic. All values in figures are mean ± SEM.


Sound amplitude also increased during high intensity mating. Figure 3.5A shows that the peak-to-peak amplitude of syllables increased across all mating pairs (n=8 mouse pairs, 10750 syllables; F(1, 10720) = 24.5, p<0.0001, linear mixed model). This finding is further supported by a comparison of sound energy (expressed as root mean square (RMS) level) as a function of mating intensity (n=8 mouse pairs, 10750 syllables; F(1, 10710) = 45.625, p<0.0001, linear mixed model). Figure 3.5B shows that LFH calls displayed the largest increase in amplitude (mean ± SD: low intensity = 0.2 ± 0.16 vs. high intensity = 0.6 ± 0.6; t = 10.84, p<0.0001, linear mixed model). USV syllables with multiple harmonics showed a greater increase in peak-to-peak amplitude with mating intensity compared to those without harmonics (significant main effect of harmonicity, n=10750 syllables; F(1, 10718) = 9.8, p=0.002, linear mixed model; Fig. 3.5B).

Previous work has shown that the minimum frequency of syllables reflects the emotional state of the caller (Zimmermann et al., 2013; Gaub et al., 2016). We observed that such state-related changes were dependent on syllable harmonicity (significant three-way interaction of state, syllable type, and harmonicity, n=4927 syllables, 12 syllable types; F(8, 4919) = 3.2, p=0.001, linear mixed model; Fig. 3.6). Thus, syllables such as stepped calls that had multiple harmonics during higher intensity mating (Fig. 3.2B) showed the most significant state-dependent changes (t = −2.3, p=0.02, linear mixed model; Fig. 3.6), while other syllables that were emitted primarily in non-harmonic forms (Fig. 3.2B) showed little change with mating category (e.g., up-FM: t = 1.35, p=0.18, linear mixed model; Fig. 3.6).


Figure 3.5 Peak-to-peak amplitude of emitted syllables increases with harmonicity and mating intensity. A. Population data representing average peak-to-peak amplitude of all syllables recorded from 8 mating pairs for low and high intensity stages of mating. Each color represents one mating pair (n=10750 syllables; F(1, 10720) = 25.5, ***p<0.0001, linear mixed model). B. Mean amplitude of emitted syllables (in volts peak-to-peak) as a function of harmonicity for low and high intensity stages of mating. Color codes represent harmonicity of syllables: orange, harmonic; grey, non-harmonic. Note the amplitude increase for syllables with frequency steps, complex syllables, and chevrons with harmonicity. LFH syllables emitted in the high intensity phase were also emitted at higher amplitude (t = −10.84, ***p<0.001, linear mixed model). All values in figures are mean ± SEM.


Figure 3.6 Minimum frequency of syllables decreases with increased intensity of mating interaction as a function of harmonicity. Minimum frequency by syllable type, based on harmonicity and intensity stage of the mating interaction (significant main effect of harmonicity on minimum frequency, n=4927 syllables, 12 syllable types; F(1, 4920) = 57.0, p<0.001, linear mixed model). Color code: orange, harmonic; grey, non-harmonic. Syllable categories are at the top of each panel. Harmonicity significantly reduced minimum frequency in flat (t = 2.0, *p<0.05), chevron (t = 2, *p<0.05), one-step (t = 10.16, ***p<0.001), two-step (t = −7.0, ***p<0.001), and >2-step syllables (t = 2.6, **p<0.01), but the effect in other syllables was not significant (linear mixed model). All values in figures are mean ± SEM.


Overall, our data suggest that during vocal communication in mice, the intensity of the mating interaction affects syllable composition (including USV, LFH, MFV, and Noisy syllables), spectral features (harmonicity, minimum frequency), temporal features (e.g., interval and duration), and syllable intensity. Together with our previous study (Grimsley et al., 2016), these findings indicate that changes in the state of vocalizing animals, represented in the intensity of social/sexual interaction, are reflected in the acoustic characteristics of their vocal signals.

Based on these findings and previous work by Grimsley et al. (2016), we constructed vocal stimuli to be used in Experiments 2 and 3. Samples of these sequences are shown in Figure 3.7. Each spectrogram shows a 2-second sample of a longer sequence that contributed to the acoustic stimuli. Mating sequences (Fig. 3.7A) included many USVs with harmonics, steps, and complex structure, as well as LFH calls. Restraint sequences (Fig. 3.7B) included many USVs, MFVs, LFHs, and Noisy calls.


Figure 3.7 Spectrogram snapshots of some of the vocal sequences used as stimuli in Experiments 2 and 3. A. Mating sequences. The durations of the full sequences were 5 s (top), 17 s (middle), and 7 s (bottom). B. Restraint sequences. The durations of the full sequences were 21 s (top), 18.2 s (middle), and 21 s (bottom).


Experiment 2: Emotional vocalizations result in behavioral changes in mice

Next, we asked whether such highly emotionally charged vocalizations, representing a sender's internal state in their syllable composition and spectrotemporal features, produced changes in the behavior of listening animals. We hypothesized that the high salience of our vocal stimuli would evoke some behaviors that were similar for both stimuli, while other behaviors would depend on the valence or behavioral context of the vocal stimuli. Further, since one context involves mating behavior, we hypothesized that there are sex-based differences in responses to these vocalizations. To test these hypotheses, we presented vocal stimulus blocks consisting of exemplars recorded either during the high-arousal stage of mating or during restraint, then monitored several behaviors of the listening mice.

The playback experiment was performed using sexually naïve mice (males and estrous females). Prior to the day of playback, they experienced mating and restraint situations on two consecutive days in a counterbalanced design. On the day of the playback experiment, mice acclimated to the experimental chamber for three hours, after which we recorded their behaviors for 40 minutes while vocalizations (mating or restraint) were played back for 20 min in the middle of the recording session (Fig. 2.3).

In response to these two groups of contextual vocal sequences, mice displayed several patterns of behavioral responses. In one pattern, changes in behavior were similar across playback type and across sex. For example, during both mating and restraint playback, both male and female mice reduced maintenance behaviors such as self-grooming (n=29 mice, significant main effect of sound, F(2, 49.67) = 6.6, p=0.003; = 0.66; repeated-measures MANOVA; Fig. 3.8). This was accompanied by a marked increase in alertness, expressed by abrupt attending, a behavior manifested in abrupt head and eye fixation followed by freezing, and a sudden change in head and body orientation (see Table 2.2 for definitions of behaviors). The number of abrupt attending behaviors increased in response to both vocal types, in both male and female mice (n=29 mice, significant main effect of sound, F(2, 48.8) = 72.5, p<0.0001; = 0.65; Fig. 3.9).

Figure 3.8 Self-grooming behavior in male and female mice decreases similarly during restraint and mating vocal playback. Boxplots represent the average number of self-grooming behaviors during playback of mating (left) and restraint (right) vocalizations for estrous female (red) and male (black) mice (n male mating=7, n female mating=7, n male restraint=6, and n female restraint=9; main effect of sound playback, F(2, 49.67) = 6.6, p=0.003; = 0.66; general linear model MANOVA with repeated measures).


Figure 3.9 Abrupt attending in response to positive and negative affect vocalizations increases similarly in male and female mice. Boxplots represent the average number of abrupt attending behaviors during mating (left) and restraint (right) vocalization playback for estrous female (red) and male (black) mice (n male mating=7, n female mating=7, n male restraint=6, and n female restraint=9; significant main effect of sound playback, F(2, 48.8) = 72.5, p<0.0001; = 0.65; general linear model MANOVA with repeated measures).

The stretch-attend posture is a risk assessment behavior associated in rodents with detection and analysis of possible threats (Blanchard et al., 2010; Hager et al., 2014).

Both vocal stimuli resulted in an increase in stretch-attend posture (n=29 mice, main effect of sound playback, F(1.48, 37) = 58.2, p<0.0001; = 0.49; general linear model MANOVA with repeated measures; Fig. 3.10). Although the sex of the listener did not have a significant main effect on this behavior, females showed a stronger increase during playback of mating vocalizations, while both sexes increased this behavior during restraint (Fig. 3.10).


Figure 3.10 Stretch-attend posture in male and female mice increases similarly during restraint and mating vocal playback. Boxplots represent the average number of stretch-attend postures during mating (left) and restraint (right) vocalization playback for estrous female (red) and male (grey) mice (n male mating=7, n female mating=7, n male restraint=6, and n female restraint=9; main effect of sound playback, F(1.48, 37) = 22.94, p<0.0001; = 0.49; general linear model MANOVA with repeated measures).

Other behaviors were affected differentially by the two vocal stimuli. For instance, flinching behavior, a reflexive twitching of the head or body, increased only during restraint playback for both males and females (n=29, significant interaction of time and context, F(1.3, 32.6) = 9.1, p=0.003; = 0.43; general linear model MANOVA with repeated measures; Fig. 3.11). Flinching has been reported in rodents in response to nociceptive stimuli; here, it occurred in response to aversive acoustic stimuli. Since the maximum sound level of the playback vocalizations was equalized for both vocal types, this is unlikely to be an acoustic startle response, but rather a behavior associated with an aversive stimulus.


Figure 3.11 Context-dependent modulation of flinching behavior in male and female mice. Boxplots represent the average number of flinching behaviors during mating (left) and restraint (right) vocalization playback for estrous female (red) and male (black) mice. Solid fills represent the mating context, and patterned fills, restraint (n male mating=7, n female mating=7, n male restraint=6, and n female restraint=9; significant interaction of time and context, F(1.3, 32.6) = 9.1, p=0.003; = 0.43; general linear model MANOVA with repeated measures).

Finally, some behaviors were modulated differently by the sex of the listener in the two contexts: females during mating playback and males during restraint playback showed a dramatic increase in pull-back posture during the 10-20 min window of vocal playback (n=29 mice, significant interaction of time, sex, and context, F(1.4, 34.7) = 4.9, p=0.02; = 0.46; general linear model MANOVA with repeated measures; Fig. 3.12). This behavior is exhibited as a pulled-back body posture representing escape initiation (Evans et al., 2019) and is often followed by immediate escape away from the speaker.


Figure 3.12 Pull-back posture is differentially modulated by negative and positive affect vocalizations in male and female mice. Boxplots represent the average number of pull-back postures (escape behavior) during mating (left) and restraint (right) vocalization playback for estrous female (red) and male (grey) mice (n male mating=7, n female mating=7, n male restraint=6, and n female restraint=9; significant interaction of time, sex, and context, F(1.4, 34.7) = 4.9, p<0.05; = 0.46; general linear model MANOVA with repeated measures).

In female mice, this behavior was further accompanied by an increase in orienting the head toward the speaker during the first 10 minutes of mating vocalization playback (significant interaction of time, sex, and context, F(3, 75) = 46.9, p=0.02; general linear model MANOVA with repeated measures), whereas such sex-based differences were not observed during restraint vocal playback. Males, on the other hand, showed a significant increase in locomotion during mating, but not restraint, playback (n=29 mice, significant interaction of time, sex, and context, F(3, 75) = 2.86, p=0.04; general linear model MANOVA with repeated measures). Other behaviors, such as rearing and the still-and-alert posture, did not show a significant interaction of sex and context.

Overall, these behavioral findings show that emotional vocalizations affect the behavior of mice, and that the meaning or valence of the vocal stimuli and the sex of the listener result in different behavioral responses. These results indicate that these vocal sequences have communication value and that such communication is, furthermore, modulated by the sex of the listening animal.

Experiment 3: Context- and state-dependent release of neuromodulators in the BLA

We have established that the vocalizations of mice reflect their internal state in their acoustic features, and that these vocalizations are capable of changing the behavior of listening mice in a sex- and context-specific manner. We next tested whether these contextually related vocalizations have the potential to affect processing of emotional vocalizations through differential patterns of neuromodulator release. Our focus is on neuromodulator release in the BLA because the BLA responds to social vocalizations in context-dependent ways (Grimsley et al., 2013; Gadziola et al., 2016; Matsumoto et al., 2016). Since neuromodulators carry information about the state of an animal (Bradley et al., 2005; Krichmar, 2008; Bergmann, 2012; Picciotto et al., 2012; Schofield and Hurley, 2018; Likhtik and Johansen, 2019) and can affect processing of sensory stimuli in the amygdala (Rosenkranz & Amiel, 1999; Kruzich et al., 2001; See et al., 2003; Julie et al., 2003; Macedo et al., 2005; Kröner et al., 2005; Mitsushima et al., 2006; Hoebel et al., 2007; Arias-Carrión et al., 2007; Picciotto et al., 2012; Zhang et al., 2013; Unal et al., 2015; Jiang et al., 2016; Bocchio et al., 2016; Aitta-Aho et al., 2018), we hypothesized that the major neuromodulators are released in patterns that are opposite for vocalizations associated with appetitive and defensive behaviors. Further, based on findings from Experiment 2, we hypothesized that mating vocalizations may evoke different patterns of release in male and female mice. We tested these hypotheses in an experiment combining playback of the same emotional vocal stimuli used in Experiment 2 with microdialysis and analysis of fluids within the BLA. As in the behavioral experiment, the microdialysis experiment used sexually naïve mice (males and females). After a one-hour experience with mating and restraint conditions on subsequent days (counterbalanced design), we implanted the mice with a guide cannula (Fig. 2.4A). Only mice with more than 80% of the microdialysis probe located within the BLA were included in the microdialysis data analysis (Fig. 3.13). Four days later, on the day of microdialysis, a subject was placed in the experimental chamber and the microdialysis probe inserted. After a four-hour habituation/equilibration period, we monitored the release of several neurochemicals in the BLA in response to mating or restraint vocalizations (Fig. 2.4B). A liquid chromatography–mass spectrometry technique allowed simultaneous analysis of several neurochemicals in the same dialysate samples. Because previous studies suggested that an interaction of the cholinergic and dopaminergic systems shapes the emission of positive and negative vocalizations (Brudzynski et al., 2020), we asked specifically whether patterns of acetylcholine (ACH) and dopamine (DA) release differed in response to playback of appetitive and aversive vocalizations.

Figure 3.13 Microdialysis probe location. A. Probe location in an example animal. The location of the probe was marked by perfusing fluorescein isothiocyanate–dextran through the microdialysis probe at the end of the experiment. B. Probe location for animals in all experimental groups. Only animals with more than 80% of the microdialysis probe in the BLA were included in the final analysis.

We first examined this valence effect in male mice and saw distinct context-dependent modulation of cholinergic and dopaminergic release. In response to mating vocalizations, ACH release in male mice decreased below baseline levels, while restraint vocalizations resulted in an increase over baseline levels (significant interaction of time and context, n males mating=9, n males restraint=7; F_ACH(2, 28) = 4.78, p=0.01; general linear model MANOVA with repeated measures; Fig. 3.14A). In contrast, DA release displayed the opposite pattern: DA increased during playback of mating vocalizations but decreased during playback of restraint vocal sequences (significant interaction of time and context, n males mating=9, n males restraint=7; F_DA(2, 28) = 5.35, p=0.01; general linear model MANOVA with repeated measures; Fig. 3.14B). Other neurochemicals showed no context-dependent release in the BLA (e.g., serotonergic release: F_5-HIAA(2, 28) = 0.43, p=0.65; Fig. 3.15).


Figure 3.14 Mating and restraint vocalizations differentially change ACH (A) and DA (B) release in the BLA of male mice. Boxplots show the percentage change of neurotransmitter release from background level (Pre) during (10-20, 20-30 mins) and after (30-40 mins) playback of mating (red) and restraint (gold) vocalizations. Y-axis: percentage change in concentration compared to the 10 min background level. Shaded area represents the vocal playback window. A. During playback, ACH levels increased with restraint vocalizations and decreased with mating vocalizations (significant interaction of time and context, n male mating=9, n male restraint=7; F(2, 28) = 4.78, p=0.01; general linear model MANOVA with repeated measures). B. During playback, DA levels increased with mating vocalizations and decreased with restraint vocalizations (significant interaction of time and context, n male mating=9, n male restraint=7; F(2, 28) = 5.35, p=0.01; general linear model MANOVA with repeated measures).


Figure 3.15 Serotonergic activity in the BLA of male mice is not context dependent. Boxplots show the percentage change of 5-HIAA (a 5-HT metabolite) release from background level (Pre) during (10-20, 20-30 mins) and after (30-40 mins) playback of mating (red) and restraint (gold) vocalizations. Y-axis: percentage change of 5-HIAA concentration compared to the 10 min background level. During playback, 5-HIAA levels changed similarly with mating and restraint vocalizations (n male mating=9, n male restraint=7; F(2, 28) = 0.43, p=0.65; general linear model MANOVA with repeated measures).

Because both vocalizations and behavior differ between males and females during mating, we examined whether these sex-based differences are reflected in patterns of neuromodulator release in the BLA. Due to the extended time course of this experiment (four days), our testing inevitably included females in estrous as well as females in diestrous. We found that these two groups differed in their neuromodulator response to mating vocalization playback (significant interaction of time and estrous stage; F_ACH(2, 25) = 6.37, p=0.01; general linear model MANOVA with repeated measures; Fig. 3.16).

Figure 3.16 Patterns of ACH, but not DA, release in the BLA of female mice in response to mating vocalizations depend on estrous stage. Line graphs show the percentage change of ACH (left) and DA (right) release from background level (Pre) during (10-20, 20-30 mins) and after (30-40 mins) playback of mating vocalizations in diestrous (solid line) and estrous (dashed line) female mice. Y-axis: percentage change in concentration of ACH (left) and DA (right) compared to the 10 min background level. During playback, ACH levels increased for estrous females but decreased for diestrous females compared to background level. DA, however, increased similarly for both estrous and diestrous females in response to mating vocalizations (n estrous female=8, n diestrous female=7; F_ACH(2, 25) = 6.37, p=0.01). Each line represents one animal. Shaded area shows the vocal playback window.


Specifically, ACH release increased in estrous-stage females but decreased in diestrous-stage females (Fig. 3.16). DA release increased in both groups of females (Fig. 3.16). However, it was the diestrous-stage females, not the estrous-stage females, who matched the ACH release patterns of male mice (Fig. 3.17A). All three groups responded similarly to mating vocalizations with respect to DA release within the BLA (F(2, 42) = 1.897, p=0.16; Fig. 3.17B). Such distinct release patterns were not observed for other neurochemicals in response to mating vocalizations, e.g., serotonin (F_5-HIAA(2, 31.2) = 0.90, p=0.39; Fig. 3.18).

Together, these findings indicate that valent and salient vocalizations are capable of changing cholinergic and dopaminergic activity in the BLA in a context-related manner. Further, our results show that release of ACH in the BLA can be further modulated by the hormonal state of female mice.



Figure 3.17 Estrous-dependent, but not sex-dependent, release patterns of ACH (A) and DA (B) in response to mating vocalizations in the BLA of male and female mice. Boxplots show the percentage change of neurotransmitter release from background level (Pre) during (10-20, 20-30 mins) and after (30-40 mins) playback of mating vocalizations in male (red), diestrous (solid cyan), and estrous (dashed cyan) female mice. Y-axis: percentage change in concentration compared to the 10 min background level. Shaded area represents the vocal playback window. A. During playback, ACH levels decreased similarly for males and diestrous females, whereas they increased for estrous females compared to background level (n estrous female=8, n diestrous female=7, and n male=9; no interaction of time and sex, F(2, 25) = 3.36, p=0.07, but significant interaction of time and estrous, F(2, 25) = 6.37, p=0.01). B. During mating playback, DA levels increased similarly for males and all females (estrous and diestrous) (n estrous female=8, n diestrous female=7, and n male=9; no interaction of time and sex, F(2, 42) = 1.897, p=0.16).


Figure 3.18 Serotonergic activity in the BLA of male and female mice during mating playback is not sex or estrous dependent. Boxplots show the percentage change of 5-HIAA (a serotonin metabolite) release from background level (Pre) during (10-20, 20-30 mins) and after (30-40 mins) playback of mating vocalizations in male, diestrous, and estrous female mice. Y-axis: percentage change of 5-HIAA concentration compared to the 10 min background level. During playback, 5-HIAA levels changed similarly across groups (n estrous female=8, n diestrous female=7, and n male=9; no interaction of time and sex, F_5-HIAA(2, 31.2) = 0.90, p=0.39, or time and estrous, F_5-HIAA(2, 31.2) = 1.06, p=0.34). Shaded area represents the vocal playback window.


CHAPTER IV DISCUSSION

The current study examined vocal communication of emotion by assessing acoustic cues representing the sender's state, the impact of such cues on the behavioral responses of listeners, and the patterns of neuromodulator release in the BLA evoked during processing of these cues in listening male and female mice. An approach that combined microdialysis with liquid chromatography–mass spectrometry allowed us to examine the release patterns of several neurochemicals in the same sample and to evaluate the context- and state-related patterns they exhibit in the BLA during processing of socially relevant cues. Our data showed that playback of appetitive (mating) vocalizations increased the release of DA in the BLA and reduced ACH release. However, when the valence of the vocalizations was negative (restraint vocal sequences), the release patterns reversed: ACH release increased while DA release decreased. These data show that neuromodulators provide contextual information that affects processing in the BLA. We further showed, for the first time, that the release pattern of ACH in female mice is modulated by hormonal changes. During playback of mating vocalizations, diestrous females showed the same patterns of ACH and DA release as males: increased DA and decreased ACH. However, females in estrous showed an increase in ACH release in response to mating vocalizations. We did not observe such striking context- or state-dependent changes in the release patterns of the other neurochemicals analyzed in the same samples. These findings are consistent with the understanding that ACH plays a crucial role in processing aversive cues in the BLA, whereas DA is involved in modulating neuronal processing of reward-related cues or fear extinction (absence of the negative cue). Further, our findings provide evidence for the role of neuromodulators in providing context- and state-dependent inputs to the BLA.

Internal state modulates mouse vocal behavior

The first experiment of this dissertation aimed to identify acoustic features associated with an appetitive context (mating), presumably reflecting the animal's internal state. We used multichannel audio and simultaneous video recordings to categorize vocalizations based on mating intensity. By analyzing a large dataset of nearly 12000 syllables from 8 mouse pairs, including all syllable categories, we described the contributions of all syllable types to mouse vocal communication in this behavioral context. We used behavioral classifications to identify two phases of interaction: lower and higher intensity mating. We found that during mating interactions mice emit four categories of vocalizations: three lower frequency/broadband categories along with USVs. Within syllable types, we observed distinct spectrotemporal features associated with the intensity of mating interactions. For example, chevron and step syllables, predominantly emitted in the higher intensity stage of mating, showed increased duration and peak amplitude and reduced minimum frequency; these changes occurred mostly in harmonic syllables typically emitted during the higher intensity mating stage. Spectrotemporal changes were further present in LFHs, the broadband syllable often emitted by females during higher intensity mating interactions.

Overall, our findings demonstrate that acoustic characteristics and syllable composition change with internal state during mating interactions in mice. These features can be utilized to identify salient and strongly valent vocalizations during mating interactions.

Acoustic features of emotional expression in mice during mating interactions

Social interaction, whether appetitive or aversive, progresses from an initial motivational phase to a final consummatory or encounter phase. During this progression, the intensity of the emotions and behavioral expression activated by sensory inputs undergoes sequential changes (Kennedy et al., 2014; Anderson, 2016). Vocalizations are valuable indicators of emotions or internal state that contribute to the affective component of social communication (Zimmerman et al., 2016). During vocal communication, both syllable type and spectrotemporal features are affected by a sender's affective state. Changes in fundamental frequency, duration, and rate of syllable emission with internal state have been reported in different species (Knutson et al., 1998; Rendall, 2003; Dietz and Zimmermann, 2004; Moles et al., 2007; Soltis et al., 2011; Gadziola et al., 2012b; Grimsley et al., 2016).

Our data show that syllable composition in mice is directly affected by the intensity of social interactions. This is not limited to USVs: the lower frequency and broadband syllables are also distributed differently across the two stages of the mating interaction. LFH calls were often emitted during the high intensity stage, whereas MFVs and Noisy syllables were typically emitted during low intensity mating interactions. This is not surprising; LFH calls are the syllables emitted by a female when a male mouse attempts to sniff her head or mount her (Ehret, 2013; Grimsley et al., 2013; Ronald et al., 2020), and such behaviors are not usually observed in the lower intensity mating stage. MFVs, on the other hand, are the predominant syllable type emitted by mice during restraint stress, with only a small percentage of these syllables observed in mating interactions (Grimsley et al., 2016). This agrees with our findings showing that a small percentage of these syllables (approximately 2%) were emitted, mostly during low intensity mating. Noisy calls, emitted predominantly by mice in isolation (Grimsley et al., 2016), made up more than 4% of all syllables emitted during mating interactions in our dataset. These calls were limited to times when the animals were in an exploratory phase, exclusively during wall-supported rearing rather than in proximity to the other mouse. A playback study by Niemczura (2019) found that Noisy calls trigger the same corticosteroid and behavioral responses as MFVs, but different vocal responses. More research is required to understand the communication role of Noisy calls in mice.


Unlike the lower frequency and broadband calls, USVs have been well studied in terms of their acoustic features and changes with social context in a variety of mouse strains (Holy and Guo, 2005; Wang et al., 2008; Grimsley et al., 2011 & 2016; Portfors and Perkel, 2014; Chabout et al., 2015; Egnor and Seagraves, 2016; Gaub et al., 2016; Weiner et al., 2016; Matsumoto and Okanoya, 2016 & 2018; Castellucci et al., 2018; Okanoya and Screven, 2018; Vogel et al., 2019; Sangiamo et al., 2020; Ronald et al., 2020; Nicolakis et al., 2020). However, no studies have considered how both the range of syllable types and their acoustic features vary with different levels of intensity in mating interactions.

We examined several types of USV syllables based on the characteristics defined by Grimsley et al. (2011), observing that the proportions of USV types vary dramatically with mating intensity. During low intensity mating interactions, tonal syllables (flat, up-FM, and down-FM) formed the majority of emitted USVs. During high intensity mating interactions, chevrons were the dominant USV syllable, with an increased proportion of syllables with frequency steps (40% vs. 8%). These proportions differ from previous studies, only a few of which have identified chevron calls (Grimsley et al., 2011 & 2016; Hanson and Hurley, 2012; Matsumoto and Okanoya, 2016; Okanoya and Screven, 2018). Instead, other studies report that syllables with frequency jumps are the main category emitted during the mounting stage of mating (Holy and Guo, 2005; Gaub et al., 2016; Chabout et al., 2015).

There are several possible explanations for this discrepancy. First, when examining higher intensity mating interactions, we used a broader definition of this behavioral category, including both head-sniffing and mounting behaviors rather than just mounting behavior. However, our subsequent analyses suggest that the different definitions cannot account for these different proportions of stepped and chevron syllables. Second, different mouse strains may utilize syllable types in different proportions (Holy and Guo, 2005; Gaub et al., 2016; Wang et al., 2008; Matsumoto and Okanoya, 2016 & 2018). Third, our recording techniques may have allowed us to detect chevron calls more sensitively. We used a multichannel recording setup for vocalization recordings; for every syllable, the channel with the highest SNR was chosen to determine syllable type, rather than the single microphone recording used by others (Holy and Guo, 2005; Wang et al., 2008; Matsumoto and Okanoya, 2016 & 2018). With the variations in head position during mating behaviors, a single microphone cannot represent the emitted spectrotemporal features of syllables as accurately, resulting in apparent discontinuities in the spectrograms. Thus, syllables will appear to include multiple elements or frequency steps rather than a continuous chevron shape. Finally, the necessity of removing overlapping syllables from further analysis (Grimsley et al., 2011 and 2016; Gaub et al., 2016) may have reduced the observation of chevron syllables, because these syllables are more likely to overlap with LFH calls during higher intensity mating.

With changes in mating interaction intensity, we also observed changes in other acoustic features of vocalizations. Syllables in the higher intensity mating stage were louder (higher peak amplitude), longer (duration), and emitted at a higher rate (shorter ISIs), with lower minimum frequency. These features did not change uniformly for all syllable types. For example, the change in syllable intensity was more dramatic in LFH calls and among USV syllables such as chevron and stepped syllables, the syllables that constitute the largest proportion of syllable types in higher intensity interactions. This study, the first to report such detailed syllable-dependent analyses, is based on a large dataset of recorded vocalizations (≈ 12,000 syllables).

We and others (Wang et al., 2008; Hanson and Hurley, 2012; Lahvis et al., 2011; Gaub et al., 2016; Matsumoto and Okanoya, 2016 & 2018) observed an increase in syllable harmonicity with the intensity of the mating interaction; of all harmonic syllables recorded, more than 90% were emitted in the high intensity interactions. These harmonic syllables (especially chevron and stepped calls) displayed the largest increases in duration, nearly doubling. Interestingly, modulation of male USV duration during mating can occur when the female is in estrous (Hanson and Hurley, 2016). The interaction of state with harmonicity was not limited to duration effects but was also observed in other acoustic components of emotional vocalizations such as peak amplitude and minimum frequency. Although harmonicity changes have been reported in emotional vocalizations in various species (Narins and Smith, 1986; Marquez, 1995; Narins et al., 2004; Gourbal et al., 2004; Suthers et al., 2006; Gadziola et al., 2012b; Zimmerman et al., 2016), their co-variation with other spectrotemporal characteristics of emotional vocalizations is rarely addressed (Gaub et al., 2016; Grimsley et al., 2016). The increase of syllable duration with harmonicity also occurs in vocalizations during aversive contexts in mice (Grimsley et al., 2016). As longer duration syllables are the result of longer exhalations (Castellucci et al., 2018), and since USV production is tightly coordinated with sniffing behavior (Sirotin et al., 2014; Alves et al., 2016), such changes in syllable duration and ISIs appear to result from a dramatic shortening of the sniffing cycle (Wesson et al., 2008; Sirotin et al., 2014; Castellucci et al., 2018). Modulations of the spectrotemporal characteristics of vocalizations that result from changes in vocal fold dynamics seem to be under tight control of sympathetic autonomic arousal (Johnstone, 2001; Stewart et al., 2013). They are most likely controlled by the central nucleus of the amygdala in response to increased arousal (Stewart et al., 2013).

Overall, the findings of this experiment help to identify the acoustic features that characterize different levels of mating as an appetitive interaction. We used these features to develop highly salient vocal stimuli with positive and negative affect, which were then used to study listeners' behavioral and brain responses to emotional vocalizations.

Receiver behaviors are influenced by social vocalizations

During social communication by sound, vocalizations reflecting the internal state of a sender influence the actions and emotional state of the listener. This process is shaped via the segmental and suprasegmental features in vocalizations that are received by and influence the listener (Wenstrup et al., 2020). As in humans, vocalizations impact the behavior of other mammals and vertebrates in everyday life. When hearing alarm calls or threat vocalizations, an animal may freeze or flee, depending on whether the threat is imminent or potential (Seyfarth et al., 1980; Zuberbühler et al., 1999; Devereux et al., 2008; Litvin et al., 2010; Cäsar et al., 2013). Vocalizations in courtship displays or during "play" result in successful mating (Spencer et al., 2005; Holveck et al., 2008; Woolley and Portfors, 2013; Neunuebel et al., 2015) or approach behavior (Wöhr and Schwarting, 2007), respectively. The characteristics embedded in pup vocalizations trigger retrieval behavior in mothers (D'Amato et al., 2005; Hahn and Lavooy, 2005; Liu et al., 2013; Hiraoka et al., 2019).

In this study, we identified acoustic features associated with different levels of mating behavior, and by inference differences in internal state, then used these features to develop natural vocal stimuli with high salience for an appetitive behavior (mating). Similarly, we used our previous work on restraint vocalizations and internal state (Grimsley et al., 2016) to develop salient natural vocal stimuli associated with this negative behavioral context. We then assessed the behavioral responses of male and female mice to such vocalizations.

The interpretation of and response to salient vocalizations, whether they are linked to a threat or to a potential reward, are affected by several factors. These include various signal features such as acoustic context (syllable types and their ordering, as well as the spectrotemporal characteristics of these syllables) (Holliday and Jaggers, 2015; Kang and Johnson, 2018; Reinisch et al., 2011; Gadziola et al., 2016; Constantino and Simon, 2018), previous experience linked to a vocalization or its associated behaviors (Seyfarth et al., 1980; Yong, 2010; Synders et al., 2015; Chambers et al., 2017; Constantino and Simon, 2018), and the internal state of the listener (emotional, physiological, endocrine) (Liao et al., 2015; Hurley and Kalcounis-Rueppell, 2018). In the current study, we controlled for several factors that could possibly affect responses to vocalizations. For example, the vocal selections matched the high-intensity aversive and appetitive experiences to which the male and female mice had been exposed in the days prior to the playback experiments. These experiences were of the same duration and types across all tested animals, with no other previous experiences of these types. To control for hormonal state, only females in estrous were included in the final analysis of the behavioral experiments.

Our findings revealed distinct behavioral responses in male and female mice: some generalized and the result of enhanced attention and stimulus evaluation (such as increased freezing-like behaviors to stimuli of both contexts by both sexes), some sex-dependent (such as increased rearing in males to both vocal categories), and some differentially modulated by context (such as enhanced twitching or escape-initiation responses during restraint playback in both sexes).

Overall, salient mouse vocalizations that carry context-specific information in their composition and spectrotemporal attributes change the behaviors, and presumably the internal states, of listening mice. Further, the behaviors show both generalized responses to salient vocal signals and responses that are sex- or context-specific.

Behavioral changes in male and female mice in response to restraint vocalizations

Our study shows that playback of restraint vocalizations to male and female mice results in some common and some distinct behavioral responses in the two sexes. For example, these vocalizations resulted in aversive reactions such as flinching (head or body twitching) in both male and female mice, similar to reactions previously reported in response to nociceptive stimuli in rodents (DeBerry et al., 2015; Larson et al., 2019).

This behavior occurred predominantly within the first 10-minute playback window and was reduced in the subsequent 10-minute window. We do not believe that this was an acoustic startle response, because the amplitude of restraint vocalizations was the same as that of the mating vocalizations (both were normalized to the loudest syllable in the sequence and played at the same dB level), and mating vocalizations did not trigger such responses. Further, although some restraint calls may have had faster rise times than some mating calls, the LFH calls emitted during mating could also display fast rise times. Nonetheless, fast rise-time signals are often associated with negative valence. Prey species, when captured by predators, emit vocalizations referred to as "death calls"; these calls have a sharp onset and high intensity and are considered a last effort to startle the predator into accidentally releasing the prey, helping the animal escape (Wise et al., 1999; Rendall and Owren, 2010). Alarm calls have a similar acoustic structure, broadband with fast rise times, allowing better localization and discrimination from the surrounding noise (Rendall and Owren, 2010; Fallow et al., 2011).
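As an aside on the stimulus preparation mentioned above, peak normalization of vocal sequences can be sketched in a few lines of Python (a minimal illustration; the I/O library, file names, and target level here are assumptions of this sketch, not a record of our actual pipeline):

    import numpy as np
    import soundfile as sf  # assumed I/O library for this sketch

    def normalize_to_peak(signal, target_dbfs=-3.0):
        """Scale a waveform so its loudest sample sits at target_dbfs."""
        peak = np.max(np.abs(signal))
        if peak == 0:
            return signal  # silent input; nothing to scale
        return signal * (10 ** (target_dbfs / 20.0) / peak)

    # Hypothetical usage with mono recordings: after scaling, both sequences
    # share the same peak level, so playback at one speaker gain equates
    # their maximum amplitudes.
    mating, fs = sf.read("mating_sequence.wav")
    restraint, _ = sf.read("restraint_sequence.wav")
    sf.write("mating_norm.wav", normalize_to_peak(mating), fs)
    sf.write("restraint_norm.wav", normalize_to_peak(restraint), fs)

Normalizing to the loudest syllable, as described above, equates peak rather than average level, so syllable-to-syllable amplitude relationships within each sequence are preserved.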

During playback of restraint vocalizations, both male and female mice demonstrated a decrease in normal exploratory and grooming behaviors and increased risk assessment postures. These changes were further accompanied by escape-initiation-like behaviors that in some cases were followed by escape from the speaker zone. Similar changes in behavior have been reported in rats' responses to 22 kHz vocalizations (Brudzynski and Chiu, 1995), an aversive type of call emitted by rats during negative experiences such as defensive/submissive behaviors, confrontation with a predator, and chronic pain (Sales and Pye, 1974; Brudzynski, 2005). Our study reveals that this behavior was stronger in males in response to aversive calls, and in females in response to mating vocalizations, sometimes followed by escape.

After detecting a threat, animals demonstrate a short delay before showing escape behavior. This period, which may include the "stretched-attend" posture observed in our study for both vocal categories and sexes, is thought to be used to evaluate possible threats and the need for escape (Butler et al., 2009; Evans et al., 2018). Although escape should be initiated as soon as possible after detection of a threatening sensory stimulus, various factors modulate whether escape eventually takes place (Evans et al., 2019). In mice, for example, escape initiation is controlled by the periaqueductal gray (PAG), which receives input from the retina (Deng et al., 2016; Evans et al., 2018). Visual detection of the threat, integrated with other threat-related sensory input, shapes the eventual escape behavior (Evans et al., 2019). Thus, in our experiment, the presence of aversive vocalizations in the absence of a visual threat may have signaled that the threat was potential rather than imminent, resulting in risk assessment behaviors and escape initiations but few completed escapes (Blanchard et al., 1998).

The observed difference in male and female responses to threatening cues (i.e., restraint vocalizations) could be due to differences in hypothalamic-pituitary-adrenal (HPA) axis activity, expressed as corticosteroid levels, and in sympathetic nervous system activity, expressed as increased heart rate and blood pressure (Verma et al., 2011). The adrenal cortex shows increased sensitivity in females during stress compared to males (Roelfsema et al., 1993), and basal ACTH release appears to be higher in female rodents than in males. Further, previous work has shown a greater corticosterone increase in females than in males in response to vocalization playback (Niemczura, 2019).

Nevertheless, in our study, male mice showed a more pronounced increase in escape initiation and freezing-like behaviors than females when responding to aversive vocalizations. This difference might reflect an influence of the estrous cycle on female responses to the restraint stimuli, as all females included in our dataset were in estrous.

Previous work has shown that corticotropin-releasing factor (CRF) is modulated by ovarian hormones (Bangasser and Wiersielis, 2018). During non-estrous hormonal stages, CRF significantly disrupts attention when responding to stressors, whereas in estrous females, estrogen appears to block the negative effect of CRF on attention to sensory cues (Cole et al., 2016). Further, basal forebrain cholinergic neurons express many estrogen receptors, which during the estrous stage leads to an increase of ACH release in response to salient sensory stimuli. This is expected to enhance cognitive function and cautiousness in dealing with sensory stimuli (Gibbs, 1996; McEwen, 1998; Shughrue et al., 2000), and it is supported by our microdialysis results, which show an increase in ACH release in estrous females during mating playback. Thus, the combined effect of ACH and the estrogen block of CRF might have resulted in fewer escape-initiation and freezing-like behaviors (e.g., abrupt attending) in females than in males in response to aversive calls.


Male and female behavioral responses to mating vocalizations

Our study reveals that mating vocalizations elicited an increase in exploratory behaviors in male mice, while females tended to remain still and avoid the speaker zone. Because the mating vocal sequences used in our study included both male and female vocalizations from the higher-intensity mating stage (mounting), we cannot conclude which category of vocalization was responsible for triggering the observed behaviors. However, previous work has shown that LFH calls can have either positive or negative meaning to male mice, depending on the presence of other sensory cues (Ehret, 2013; Grimsley et al., 2013; Ronald et al., 2020). In the absence of these other sensory cues, we believe that the natural sequence of male USVs and female LFH calls established a mating context in the playback experiment that had positive valence for the male mice, resulting in increased locomotion and exploratory behaviors throughout the playback period. Such active behaviors also reflect the male's role during mating interactions, as male mice are overall more exploratory and active in examining the female and vocalize to engage the female in mating behavior (Johansen et al., 2008). Further, males did not exhibit avoidance or aversive behaviors such as escape initiation or flinching, consistent with an absence of negative cues in these vocalizations for male mice.

Unlike males, females in this study showed behaviors indicating avoidance (reduced locomotion and avoidance of the speaker zone) and risk assessment, suggesting that they interpreted the vocal sequences with caution. Although male mice take the more dominant role by exhibiting more active behaviors, females signal their receptiveness through vocalizations: the smaller percentage of USVs (20-30%) emitted by females during courtship may function as a signal of receptivity during mating (Neunuebel et al., 2015), whereas emitting LFHs early on could be a sign of rejection, preventing a successful mating interaction (Ronald et al., 2020).

Further, female mice are attracted to vocalizing males over devocalized ones (Pomerantz et al., 1983; Hammerschmidt et al., 2009; Shepard and Liu, 2011). Overall, the cautious female response to mating sequences in our study may be part of a more complex response that includes both attraction to the USVs and caution associated with the LFH calls. These more complex responses could reflect amplified attention resulting from increased ACH release and its possible interaction with DA release, as well as from ovarian hormones across the estrous cycle.

The behavioral responses to aversive and appetitive vocalizations observed in our study differed between males and females, depending on the valence of the vocalizations. Even though such behaviors could represent different meanings of these vocalizations to the listening male and female mice, behavioral observations alone are not sufficient for drawing conclusions about the biological significance of these calls.

Thus, in the third experiment of this dissertation, we further examined the release of neurochemicals in a brain region involved in emotional processing of affective vocalizations, the amygdala.


Neuromodulator release in the basolateral amygdala in response to affective vocalizations

Findings from Experiments 1 and 2 shed light on two important aspects of vocal communication in mice: (i) the acoustic cues used by the sender to reflect its internal affective state during communication, and (ii) the sex- and context-dependent behavioral responses of listeners to these cues. The link between these two aspects of vocal communication is formed by brain circuits that integrate this acoustic information with other sensory inputs, internal state signals, and previous experiences in the listener. With access to centers shaping behavioral responses to these sensory cues, including those involved in social communication (Bickart et al., 2014), the amygdala participates substantially in this process.

Several studies in humans have revealed responses of the amygdala to appetitive and aversive vocal communication signals (Sander et al., 2003a; Andics et al., 2010; Viinikainen et al., 2012; Frühholz et al., 2012, 2014, 2016; Liebenthal et al., 2016; Pannese et al., 2016; Abrams et al., 2016), and mechanistic studies in other species demonstrate these responses at the level of single neurons (Gadziola et al., 2012b, 2016; Grimsley et al., 2013; Parsana et al., 2012a; Matsumoto et al., 2016; Naumann and Kanwal, 2011; Peterson and Wenstrup, 2012). However, despite evidence of amygdala involvement in processing vocalizations, in valence coding of appetitive and aversive cues, and in shaping appropriate behavioral responses to these cues, the process by which contextual information is delivered to the amygdala and contributes to vocal processing is not understood. Since the amygdala receives strong projections from neuromodulatory brain centers (Carlsen et al., 1985; Young and Rees, 1998; Asan, 1997a, b, 1998; Berlau and McGaugh, 2006; Bigos et al., 2008; Bocchio et al., 2016; Aitta-Aho et al., 2018), and since these neurochemicals are well established as carriers of internal state and contextual information (Jiang et al., 2015; Likhtik and Johansen, 2019; Bocchio et al., 2016), we hypothesized that the release patterns of these neuromodulators in the BLA provide contextual information during processing of affective vocalizations. To test this hypothesis, we assessed the release of several neurochemicals in the BLA of freely moving male and female mice in response to oppositely valent exemplars of natural vocal sequences. Our findings reveal that these highly emotionally charged vocalizations result in distinct release patterns of acetylcholine (ACH) and dopamine (DA) in the BLA of male and female mice. Further, our data suggest that the female's hormonal state influences ACH release in the BLA during processing of mating vocalizations. Such context- or state-dependent changes were not observed in the release of other neurochemicals (GABA, Glu, 5-HT). These data indicate that during analysis of affective vocalizations in the BLA, ACH and DA provide state- and context-related information that can modulate sensory processing within the BLA and thereby participate in shaping the organism's response to these vocalizations.


Context-dependent modulation of sensory processing in BLA by acetylcholine and dopamine

The BLA receives strong cholinergic projections from the basal forebrain, specifically from the nucleus basalis of Meynert (NBM), the ventral pallidum (VP), and the substantia innominata (SI) (Carlsen et al., 1985; Zaborszky et al., 1999; Aitta-Aho et al., 2018).

Several studies have described the involvement of ACH in processing aversive cues and in fear learning in the amygdala (Mascagni et al., 2008; Baysinger et al., 2012; Pidoplichko et al., 2013; Tingley et al., 2014; Gorka et al., 2015; Jiang et al., 2016; Minces et al., 2017). Our findings support these studies by demonstrating an increase in ACH release in the BLA in response to playback of aversive vocalizations. Although the exact mechanism by which ACH affects vocal information processing in the BLA is not yet clear, ACH release onto BLA neurons appears to enhance arousal during emotional processing (Likhtik and Johansen, 2019). Various physiological and anatomical studies have approached this question from different perspectives, and their findings suggest mechanisms by which vocalizations affect ACH release and in turn drive behavioral responses (Fig. 4.1).

Cholinergic modulation in the BLA is mediated via muscarinic (G-protein-coupled) and nicotinic (ionotropic) ACH receptors on BLA pyramidal neurons and inhibitory interneurons (Mesulam et al., 1983; Pidoplichko et al., 2013; Unal et al., 2015; Aitta-Aho et al., 2018). During the processing of sensory information in the BLA, partially non-overlapping populations of neurons respond to cues related to positive or negative experiences (Paton et al., 2006; Shabel and Janak, 2009; Namburi et al., 2015; Beyeler et al., 2018). These neurons project to different target areas involved in appetitive or aversive behaviors, the nucleus accumbens (NAc) or the central nucleus of the amygdala (CeA), respectively (Namburi et al., 2015). When ACH is released into the BLA from the basal forebrain in response to an aversive cue or experience (Fig. 4.1A), it affects neurons according to their activity. If projection neurons are at rest, ACH may exert an inhibitory effect on them in two ways. First, via nicotinic ACH receptors, it activates local GABAergic interneurons that synapse onto the dormant pyramidal neurons, producing GABA-A-mediated inhibitory postsynaptic potentials (IPSPs). Second, via direct activation of M1 ACH receptors on the pyramidal neurons and the resulting activation of inwardly rectifying K+ currents, ACH produces additional inhibition (Fig. 4.1A) (Unal et al., 2015; Aitta-Aho et al., 2018; Pidoplichko et al., 2013).


Figure 4.1 Proposed model for neuromodulation of salient vocalization processing via acetylcholine (ACH) and dopamine (DA) in the basolateral amygdala (BLA). A. Cholinergic modulation of CeA-projecting neurons during processing of aversive vocalizations/cues in the BLA. In the presence of aversive cues, ACH released from the basal forebrain acts on M1Rs in activated CeA-projecting neurons, enhancing their excitatory response, while quiescent NAc-projecting neurons that are non-responsive to aversive cues are inhibited by interneurons activated through nACHRs. B. Dopaminergic modulation and enhancement of signal-to-noise ratio in response to reward-associated cues (appetitive vocalizations). When rewarding cues/vocalizations are present, release of DA from the VTA enhances excitation in NAc-projecting neurons that are responsive to positive cues, whereas non-responsive CeA-projecting neurons that are dormant to such cues are inhibited; DA is thought to act on D1Rs in local interneurons to produce direct inhibition onto CeA-projecting neurons. NAc: nucleus accumbens, CeA: central nucleus of the amygdala, BF: basal forebrain, VTA: ventral tegmental area, ACx: auditory cortex, MGB: medial geniculate body.


However, when these BLA pyramidal neurons are already active, due to strong excitatory input associated with aversive cues (such as aversive vocalizations or electrical shock), M1 receptor activation can produce long afterdepolarizations and persistent firing that lasts as long as ACH is present (Unal et al., 2015; Jiang et al., 2016). Such a process may explain the persistent firing observed in response to aversive social vocalizations in bats (Gadziola et al., 2012b; Peterson and Wenstrup, 2012). By inhibiting quiescent neurons while enhancing activation and persistent firing in active neurons, ACH enhances the population signal-to-noise ratio during the processing of salient, aversive signals in the BLA. These neurons, processing negative cues, likely project to the CeA to regulate defensive behaviors such as escape and avoidance (Fig. 4.1A) (Namburi et al., 2015; Beyeler et al., 2016, 2018). In agreement with this proposal, our behavioral findings show an increase in escape initiation and risk assessment in conjunction with the increased release of ACH during processing of aversive vocalizations. Such prolonged afterdepolarizations provide an appropriate condition for the associative synaptic plasticity (Likhtik and Johansen, 2019) that underlies an increase in AMPA/NMDA currents in CeA-projecting neurons during processing of aversive cues (Namburi et al., 2015; Vugt et al., 2020).

The BLA also receives substantial dopaminergic innervation from the ventral tegmental area (VTA) (Asan, 1997a, b, 1998), which acts on BLA neurons via D1 and D2 receptors, both G-protein-coupled receptors. Dopamine is important in reward processing, fear extinction, decision making, and motor control (Di Ciano and Everitt, 2004; Ambroggi et al., 2008; Lutas et al., 2019). We observed an increase in DA release in the BLA in response to mating vocalizations in both male and female mice.

Electrophysiological studies have shown that DA enhances sensory processing in BLA neurons by increasing the population SNR in a process similar to that of ACH (See et al., 2001; Kröner et al., 2005; Vander Weele et al., 2018). Thus, during processing of positive signals such as mating vocalizations or those related to rewarding experiences, DA release in the BLA increases, enhancing the presence of DA in the vicinity of pyramidal neurons and interneurons (Fig. 4.1B). When neurons are active (as during processing of positive cues such as appetitive vocalizations, or during reward processing), DA acts on D2 receptors of pyramidal cells, enhancing and sustaining the firing of these projection neurons. Conversely, in BLA projection neurons that do not respond to such positive cues (such as CeA-projecting neurons), DA exerts its effect via D1 receptors, activating inhibitory interneurons that provide feedforward inhibition (Kröner et al., 2005). The net result is an increase in the population signal-to-noise ratio, produced by enhancement of activity in reward-responsive neurons and suppression of activity in aversion-responsive neurons during the processing of appetitive vocalizations or reward-related cues. This process likely depends on increased synaptic plasticity via enhanced AMPA/NMDA currents in NAc-projecting neurons in the BLA during processing of such cues (Otani et al., 2003; Namburi et al., 2015; Vugt et al., 2020). Our findings suggest that this may occur in the BLA in response to appetitive vocalizations.
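The signal-to-noise logic common to the ACH and DA mechanisms described above can be illustrated with a toy population model (a minimal Python sketch; the cell count, firing rates, and gain factors are illustrative assumptions, not values fitted to our data):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100                                  # hypothetical BLA projection neurons
    responsive = rng.random(n) < 0.3         # subset tuned to the presented cue

    baseline = rng.poisson(5, n).astype(float)          # assumed background rates (sp/s)
    rates = baseline + np.where(responsive, 10.0, 0.0)  # cue-driven excitation

    def population_snr(r, mask):
        """Mean rate of cue-responsive cells over mean rate of the rest."""
        return r[mask].mean() / r[~mask].mean()

    # Proposed neuromodulator effect: sustain firing in active (responsive)
    # neurons and suppress quiescent ones via interneuron-mediated inhibition.
    modulated = np.where(responsive, rates * 1.5, rates * 0.5)

    print(f"SNR without modulation: {population_snr(rates, responsive):.2f}")
    print(f"SNR with modulation:    {population_snr(modulated, responsive):.2f}")

Under these assumed gains, the modulated population roughly triples its signal-to-noise ratio, which is the qualitative effect attributed above to ACH for aversive cues and to DA for appetitive cues.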


Overall, the interaction of DA and ACH with BLA neurons during processing of rewarding or aversive cues, respectively, appears to contribute to the cellular mechanisms by which contextual information influences the processing of affective vocalizations and other biologically relevant information in the BLA.

Acetylcholine and hormonal changes

Our findings show increased release of ACH in estrous-stage female mice during mating vocalization playback. As ACH appears to be predominantly involved in processing aversive cues, these results were initially unexpected. However, cholinergic input to the BLA originates in the basal forebrain, which, along with the hippocampus, exhibits high expression of estrogen receptors that is influenced by a female's hormonal state (Shughrue et al., 2000). During estrous, enhanced release of estrogen affects the release of ACH and may influence memory retrieval and cognitive functions (Gibbs, 1996; McEwen, 1998). This is a possible mechanism underlying the increased attentional and risk assessment behaviors shown by females in response to vocalization playback.

Other neuromodulators affecting valence processing in the BLA

Along with DA and ACH, other neurochemicals such as norepinephrine (NE) and serotonin (5-HT) modulate emotional learning and sensory information processing in the BLA (Bradley et al., 2005; Berlau and McGaugh, 2006; Bocchio et al., 2016; Likhtik and Johansen, 2019). In the current study, we were unable to track these two neuromodulators. Potential reasons include: (i) the limited sampling area in the mouse BLA, which could lower recovery of neurochemicals present at lower concentrations; (ii) NE and 5-HT may pass through the microdialysis probe membrane less readily than other neurochemicals; and (iii) a lower recovery rate for these neurochemicals in the LC/MS system. However, our data on 5-HIAA, a metabolite of serotonin, reveal no context- or sex-dependent changes during playback of emotional vocalizations. This finding was surprising, but we draw no conclusions regarding serotonergic modulation, since we were unable to track the concentration of serotonin itself in the BLA.

Summary and Conclusion

The main goal of this dissertation was to understand the context- and sex-dependent release of neuromodulators in the basolateral amygdala in response to emotion-laden vocalizations. The specific aims first examined how mice use acoustic cues in their vocalizations to reflect their internal state during mating, then sought to understand how these cues, and those present in aversive (restraint) vocalizations, influence behavioral responses in male and female mice. Finally, we examined the release of several neuromodulators in the BLA during playback experiments in freely moving mice.

The findings of this dissertation show that mice use several features of vocalizations (syllable composition, rate of syllable emission, duration, minimum frequency, and peak intensity) to express the intensity of their affective state during mating. Further, these spectrotemporal characteristics interact strongly with the harmonicity of syllables, another measure of the emotional state of the caller. These characteristics are similarly used by other species, including humans, to express emotional state during vocal communication.
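For illustration, several of the per-syllable features listed above can be measured from a spectrogram (a minimal Python sketch using scipy; the input file, mono format, and the -40 dB detection threshold are assumptions of this sketch, not our analysis pipeline):

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import spectrogram

    # Hypothetical mono recording; mouse USVs need a high sample rate (>= 200 kHz).
    fs, audio = wavfile.read("syllable.wav")
    freqs, times, sxx = spectrogram(audio.astype(float), fs=fs, nperseg=512)
    power_db = 10 * np.log10(sxx + 1e-12)

    peak_intensity_db = power_db.max()               # peak intensity, arbitrary reference
    active = power_db > peak_intensity_db - 40.0     # assumed detection threshold

    # Minimum frequency: lowest bin that exceeds threshold at any time point.
    min_frequency_hz = freqs[np.argmax(active.any(axis=1))]

    # Duration: span of time bins in which any frequency exceeds threshold.
    on = times[active.any(axis=0)]
    duration_s = on[-1] - on[0] if on.size else 0.0

Syllable composition and emission rate then follow from classifying and counting such detections across a recording, and harmonicity can be assessed by testing for energy at integer multiples of the fundamental frequency.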

We used these cues to prepare vocal sequences corresponding to intense mating and restraint experiences, developing stimuli from unique exemplars of such sequences by examining behavior and using the acoustic cues identified in Aim 1. These vocal exemplars were first used in a playback experiment to understand how they affect the behavior of listening mice. We observed sex- and context-dependent changes in the behavioral reactions of mice to these negative and positive vocalizations.

Lastly, in Aim 3, we used the same appetitive and aversive vocal stimuli developed in Aim 2 to assess the release of neurochemicals in the mouse BLA. The results of this experiment revealed that these highly emotionally charged vocalizations produce distinct context-dependent patterns of dopamine and acetylcholine release in the mouse BLA. Further, females in estrous showed an increase in acetylcholine release, a pattern unlike that observed in males and diestrous females in response to mating vocal playback. Other neurochemicals, however, did not show such modulation of their release in response to the tested vocalizations.

These findings suggest distinct roles for dopaminergic and cholinergic modulation in shaping the processing of emotion-laden vocalizations in the amygdala and the resulting behavioral outcomes.


General significance and future directions

Emotional communication is an important aspect of our daily life, during which we receive numerous cues from various contexts. Responding appropriately requires combining these cues with input from other sensory modalities, previous experiences, and internal state information. This complex task depends on amygdala processing on a moment-by-moment basis. Although a large body of research has described the role of the amygdala in fear learning and, to a lesser extent, in reward processing, the mechanisms by which the amygdala shapes emotional responses in vocal communication are not clear. Neurophysiological and neuromodulatory experiments have only begun to explore these functions. This research could help explain how, in pathological conditions such as post-traumatic stress disorder, autism, or schizophrenia, the response to sensory information is compromised or exaggerated, and may help in developing treatments for such conditions.

An interesting future path to understanding functional aspects of amygdala-related circuits is to monitor behavior, neuromodulation, and neurophysiological activity in real time, moment by moment. In the current study, we used microdialysis combined with LC/MS to examine the release patterns of several neuromodulators in the same sample. Despite the strengths of this technique, we were unable to use smaller sampling windows to capture fine-timescale changes in information processing and neuromodulation. Techniques such as voltage-sensitive dyes combined with two-photon imaging, or fiber photometry, can monitor neuromodulators in real time. However, these techniques have limitations, such as requiring restraint of the animal during the experiment or monitoring only two neuromodulators simultaneously. Because amygdalar responses are affected by restraint stress, and because this structure lies deep in the brain, current techniques have not yet resolved the challenges of studying it.

Another interesting future path for this research is to monitor the activity of the amygdala simultaneously with input areas such as the VTA and basal forebrain and output targets such as the NAc and CeA. This would allow us to understand how various brain areas are coordinated to shape valence coding, neuromodulator release, and behavioral reactions.


REFERENCES

Abrams, D.A., Chen, T., Odriozola, P., Cheng, K.M., Baker, A.E., Padmanabhan, A., et al., 2016. Neural circuits underlying mother’s voice perception predict social communication abilities in children. Proc. Natl. Acad. Sci. U.S.A. 113 (22), 6295–6300.

Abrams, D.A., Lynch, C.J., Cheng, K.M., Phillips, J., Supekar, K., Ryali, S., Uddin, L.Q., and Menon, V. (2013). Underconnectivity between voice-selective cortex and reward circuitry in children with autism. Proc. Natl. Acad. Sci. U.S.A. 110, 12060–12065.

Adolphs, R. (2013). The Biology of Fear. Current Biology 23, R79–R93.

Aitta-Aho, T., Hay, Y.A., Phillips, B.U., Saksida, L.M., Bussey, T.J., Paulsen, O., and Apergis-Schoute, J. (2018). Basal Forebrain and Brainstem Cholinergic Neurons Differentially Impact Amygdala Circuits and Learning-Related Behavior. Curr. Biol. 28, 2557-2569.e4.

Altenmüller, E., Schmidt, S., and Zimmermann, E., eds. (2013). Evolution of Emotional Communication: From Sounds in Nonhuman Mammals to Speech and Music in Man (Oxford, UK: Oxford University Press).

Alves, J.A., Boerner, B.C., and Laplagne, D.A. (2016). Flexible Coupling of Respiration and Vocalizations with Locomotion and Head Movements in the Freely Behaving Rat. Neural Plast. 2016, 4065073.

Amaral, D.G. (2003). The amygdala, social behavior, and danger detection. Ann. N. Y. Acad. Sci. 1000, 337–347.

Ambroggi, F., Ishikawa, A., Fields, H.L., and Nicola, S.M. (2008). Basolateral Amygdala Neurons Facilitate Reward-Seeking Behavior by Exciting Nucleus Accumbens Neurons. Neuron 59, 648–661.

Anderson, D.J. (2016). Circuit modules linking internal states and social behaviour in flies and mice. Nature Reviews Neuroscience 17, 692–704.

Anderson, A.K., and Phelps, E.A. (2001). Lesions of the human amygdala impair enhanced perception of emotionally salient events. Nature 411, 305–309.

Andics, A., McQueen, J.M., Petersson, K.M., Gál, V., Rudas, G., Vidnyánszky, Z., 2010. Neural mechanisms for voice recognition. NeuroImage 52 (4), 1528–1540.

Young, A.M.J., and Rees, K.R. (1998). Dopamine release in the amygdaloid complex of the rat, studied by brain microdialysis. Neuroscience Letters 249, 49–52.


Arias-Carrión, O., and Pöppel, E. (2007). Dopamine, learning, and reward-seeking behavior. Acta Neurobiol Exp (Wars) 67, 481–488.

Asan, E. (1997). Ultrastructural features of tyrosine-hydroxylase-immunoreactive afferents and their targets in the rat amygdala. Cell Tissue Res 288, 449–469.

Asan, E. (1998). The catecholaminergic innervation of the rat amygdala. Adv Anat Embryol Cell Biol 142, 1–118.

Augustsson, H., and Meyerson, B.J. (2004). Exploration and risk assessment: a comparative study of male house mice (Mus musculus musculus) and two laboratory strains. Physiol. Behav. 81, 685–698.

Avery, M.C., and Krichmar, J.L. (2017). Neuromodulatory Systems and Their Interactions: A Review of Models, Theories, and Experiments. Front Neural Circuits 11.

Bach, D.R., Schächinger, H., Neuhoff, J.G., Esposito, F., Salle, F.D., Lehmann, C., ... & Seifritz, E. (2008). Rising sound intensity: an intrinsic warning cue activating the amygdala. Cerebral Cortex, 18(1), 145–150.

Bailey, K.R., and Crawley, J.N. (2009). Anxiety-Related Behaviors in Mice. In Methods of Behavior Analysis in Neuroscience, J.J. Buccafusco, ed. (Boca Raton, FL: CRC Press/Taylor & Francis).

Ball, T., Rahm, B., Eickhoff, S.B., Schulze-Bonhage, A., Speck, O., and Mutschler, I. (2007). Response Properties of Human Amygdala Subregions: Evidence Based on Functional MRI Combined with Probabilistic Anatomical Maps. PLOS ONE 2, e307.

Ballinger, E.C., Ananth, M., Talmage, D.A., and Role, L.W. (2016). Basal Forebrain Cholinergic Circuits and Signaling in Cognition and Cognitive Decline. Neuron 91, 1199– 1218.

Bangasser, D.A., and Wiersielis, K.R. (2018). Sex differences in stress responses: a critical role for corticotropin-releasing factor. Hormones 17, 5–13.

Bargmann, C.I. (2012). Beyond the connectome: how neuromodulators shape neural circuits. Bioessays 34, 458–465.

Baumgartner, T., Lutz, K., Schmidt, C.F., and Jäncke, L. (2006). The emotional power of music: how music enhances the feeling of affective pictures. Brain Res. 1075, 151–164.

Baysinger, A.N., Kent, B.A., and Brown, T.H. (2012). Muscarinic receptors in amygdala control trace fear conditioning. PLoS ONE 7, e45720.

Berlau, D.J., and McGaugh, J.L. (2006). Enhancement of extinction memory consolidation: the role of the noradrenergic and GABAergic systems within the basolateral amygdala. Neurobiol Learn Mem 86, 123–132.

Beyeler, A. (2016). Parsing reward from aversion. Science 354, 558.

Beyeler, A., Namburi, P., Glober, G.F., Simonnet, C., Calhoon, G.G., Conyers, G.F., Luck, R., Wildes, C.P., and Tye, K.M. (2016). Divergent Routing of Positive and Negative Information from the Amygdala during Memory Retrieval. Neuron 90, 348–361.

Beyeler, A., Chang, C.-J., Silvestre, M., Lévêque, C., Namburi, P., Wildes, C.P., and Tye, K.M. (2018). Organization of Valence-Encoding and Projection-Defined Neurons in the Basolateral Amygdala. Cell Reports 22, 905–918.

Bickart, K.C., Dickerson, B.C., and Barrett, L.F. (2014). The amygdala as a hub in brain networks that support social life. Neuropsychologia 63, 235–248.

Bigos, K.L., Pollock, B.G., Aizenstein, H.J., Fisher, P.M., Bies, R.R., and Hariri, A.R. (2008). Acute 5-HT Reuptake Blockade Potentiates Human Amygdala Reactivity. Neuropsychopharmacology 33, 3221–3225.

Blanchard, R.J., and Blanchard, D.C. (1989). Attack and defense in rodents as ethoexperimental models for the study of emotion. Progress in Neuro- Psychopharmacology and Biological Psychiatry 13, S3–S14.

Blanchard, D.C., Griebel, G., and Blanchard, R.J. (2001). Mouse defensive behaviors: pharmacological and behavioral assays for anxiety and panic. Neuroscience & Biobehavioral Reviews 25, 205–218.

Blanchard, D.C., Blanchard, R.J., and Griebel, G. (2005). Defensive responses to predator threat in the rat and mouse. Curr Protoc Neurosci Chapter 8, Unit 8.19.

Blanchard, D.C., Griebel, G., Pobbe, R., and Blanchard, R.J. (2011). Risk assessment as an evolved threat detection and analysis process. Neuroscience & Biobehavioral Reviews 35, 991–998.

Blanchard, R.J., Hebert, M.A., Ferrari, P., Palanza, P., Figueira, R., Blanchard, D.C., and Parmigiani, S. (1998). Defensive behaviors in wild and laboratory (Swiss) mice: the mouse defense test battery. Physiology & Behavior 65, 201–209.

Blivis, D., Haspel, G., Mannes, P.Z., O'Donovan, M.J., and Iadarola, M.J. (2017). Identification of a novel spinal nociceptive-motor gate control for Aδ pain stimuli in rats. eLife 6.

Blood, A.J., and Zatorre, R.J. (2001). Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion. Proc. Natl. Acad. Sci. U.S.A. 98, 11818–11823.

Bocchio, M., McHugh, S.B., Bannerman, D.M., Sharp, T., and Capogna, M. (2016). Serotonin, Amygdala and Fear: Assembling the Puzzle. Front Neural Circuits 10.

Boulanger-Bertolus, J., Rincón-Cortés, M., Sullivan, R.M., and Mouly, A.-M. (2017). Understanding pup affective state through ethologically significant ultrasonic vocalization frequency. Scientific Reports 7, 13483.

Bradley, B., Peter R.H., Jenner, P. (2005). The Neuromodulators, Volume 64 - 1st Edition (San Diego, CA: Elsevier Academic Press).

Brudzynski, S. (2009). Handbook of Mammalian Vocalization, Volume 19 - 1st Edition (Academic Press).

Brudzynski, S.M. (2005). Principles of Rat Communication: Quantitative Parameters of Ultrasonic Calls in Rats. Behav Genet 35, 85–92.

Brudzynski, S.M. (2007). Ultrasonic calls of rats as indicator variables of negative or positive states: Acetylcholine–dopamine interaction and acoustic coding. Behavioural Brain Research 182, 261–273.

Brudzynski, S. M., Silkstone, M., Komadoski, M., Scullion, K., Duffus, S., Burgdorf, J., ... & Panksepp, J. (2011). Effects of intraaccumbens amphetamine on production of 50 kHz vocalizations in three lines of selectively bred Long-Evans rats. Behavioural brain research, 217(1), 32-40.

Brudzynski, S.M. (2013). Ethotransmission: communication of emotional states through ultrasonic vocalization in rats. Current Opinion in Neurobiology 23, 310–317.

Brudzynski, S.M. (2014). The Ascending Mesolimbic Cholinergic System—A Specific Division of the Reticular Activating System Involved in the Initiation of Negative Emotional States. Journal of Molecular Neuroscience 53, 436–445.

Brudzynski, S.M., and Chiu, E.M. (1995). Behavioural responses of laboratory rats to playback of 22 kHz ultrasonic calls. Physiol. Behav. 57, 1039–1044.

Brudzynski, S.M., and Fletcher, N.H. (2010). Chapter 3.3 - Rat ultrasonic vocalization: short-range communication. In Handbook of Behavioral Neuroscience, S.M. Brudzynski, ed. (Elsevier), pp. 69–76.

Buffalari, D. M., & Grace, A. A. (2007). Noradrenergic modulation of basolateral amygdala neuronal activity: opposing influences of α-2 and β receptor activation. Journal of Neuroscience, 27(45), 12358-12366.

Butler, Cresswell, Whittingham, and Quinn (2009). Very short delays prior to escape from potential predators may function efficiently as adaptive risk-assessment periods. Behaviour 146, 795–813.


Cardinal, R.N., Parkinson, J.A., Hall, J., and Everitt, B.J. (2002). Emotion and motivation: the role of the amygdala, ventral striatum, and prefrontal cortex. Neuroscience & Biobehavioral Reviews 26, 321–352.

Carlsen, J., Záborszky, L., and Heimer, L. (1985). Cholinergic projections from the basal forebrain to the basolateral amygdaloid complex: a combined retrograde fluorescent and immunohistochemical study. J. Comp. Neurol. 234, 155–167.

Carson, A. (2012). The Human Illnesses: Neuropsychiatric Disorders and the Nature of the Human Brain. By Peter Williamson & John Allman. Oxford University Press USA. 2011. £45.00 (hb). 304 pp. ISBN: 9780195368567.

Cäsar, C., Zuberbühler, K., Young, R.J., and Byrne, R.W. (2013). Titi monkey call sequences vary with predator location and type. Biology Letters 9, 20130535.

Castellucci, G.A., Calbick, D., and McCormick, D. (2018). The temporal organization of mouse ultrasonic vocalizations. PLoS One 13.

Cervantes Constantino, F., and Simon, J.Z. (2018). Restoration and Efficiency of the Neural Processing of Continuous Speech Are Promoted by Prior Knowledge. Front Syst Neurosci 12.

Chabout, J., Sarkar, A., Dunson, D.B., and Jarvis, E.D. (2015). Male mice song syntax depends on social contexts and influences female preferences. Front. Behav. Neurosci. 9.

Chambers, C., Akram, S., Adam, V., Pelofi, C., Sahani, M., Shamma, S., and Pressnitzer, D. (2017). Prior context in audition informs binding and shapes simple features. Nat Commun 8.

Chen, Q., Panksepp, J.B., and Lahvis, G.P. (2009). Empathy Is Moderated by Genetic Background in Mice. PLoS One 4.

Cheng, K.Y., and Frye, M.A. (2020). Neuromodulation of motion vision. J Comp Physiol A 206, 125–137.

Cheng, M.-F., and Durand, S.E. (2004). Song and the Limbic Brain: A New Function for the Bird’s Own Song. Annals of the New York Academy of Sciences 1016, 611–627.

Choi, J.-S., and Brown, T.H. (2003). Central amygdala lesions block ultrasonic vocalization and freezing as conditional but not unconditional responses. J. Neurosci. 23, 8713–8721.

Ciocchi, S., Herry, C., Grenier, F., Wolff, S.B.E., Letzkus, J.J., Vlachos, I., Ehrlich, I., Sprengel, R., Deisseroth, K., Stadler, M.B., et al. (2010). Encoding of conditioned fear in central amygdala inhibitory circuits. Nature 468, 277–282.


Clemens, A.M., Lenschow, C., Beed, P., Li, L., Sammons, R., Naumann, R.K., Wang, H., Schmitz, D., and Brecht, M. (2019). Estrus-Cycle Regulation of Cortical Inhibition. Current Biology 29, 605-615.e6.

Coimbra, N.C., Paschoalin-Maurin, T., Bassi, G.S., Kanashiro, A., Biagioni, A.F., Felippotti, T.T., Elias-Filho, D.H., Mendes-Gomes, J., Cysne-Coimbra, J.P., Almada, R.C., et al. (2017). Critical neuropsychobiological analysis of panic attack- and anticipatory anxiety-like behaviors in rodents confronted with snakes in polygonal arenas and complex labyrinths: a comparison to the elevated plus- and T-maze behavioral tests. Brazilian Journal of Psychiatry 39, 72–83.

Cole, R.D., Kawasumi, Y., Parikh, V., and Bangasser, D.A. (2016). Corticotropin releasing factor impairs sustained attention in male and female rats. Behav. Brain Res. 296, 30– 34.

Collins, D.R., and Paré, D. (2000). Differential fear conditioning induces reciprocal changes in the sensory responses of lateral amygdala neurons to the CS(+) and CS(-). Learn. Mem. 7, 97–103.

Costafreda, S.G., Brammer, M.J., David, A.S., and Fu, C.H.Y. (2008). Predictors of amygdala activation during the processing of emotional stimuli: A meta-analysis of 385 PET and fMRI studies. Brain Research Reviews 58, 57–70.

Cragg, S.J. (2006). Meaningful silences: how dopamine listens to the ACh pause. Trends in Neurosciences 29, 125–131.

Crosson, B., Radonovich, K., Sadek, J.R., Gökçay, D., Bauer, R.M., Fischler, I.S., Cato, M.A., Maron, L., Auerbach, E.J., Browd, S.R., et al. (1999). Left-hemisphere processing of emotional connotation during word generation. NeuroReport 10, 2449–2455.

Mitsushima, D., Yamada, K., Takase, K., Funabashi, T., and Kimura, F. (2006). Sex differences in the basolateral amygdala: the extracellular levels of serotonin and dopamine, and their responses to restraint stress in rats. European Journal of Neuroscience 24, 3245–3254.

D’Amato, F.R., Scalera, E., Sarli, C., and Moles, A. (2005). Pups Call, Mothers Rush: Does Maternal Responsiveness Affect the Amount of Ultrasonic Vocalizations in Mouse Pups? Behav Genet 35, 103–112.

Dautan, D., et al. (2016). Extrinsic Sources of Cholinergic Innervation of the Striatal Complex: A Whole-Brain Mapping Analysis. Front. Neuroanat. 10.

Davis, M., and Whalen, P.J. (2001). The amygdala: vigilance and emotion. Molecular Psychiatry 6, 13–34.

Daviu, N., Bruchas, M.R., Moghaddam, B., Sandi, C., and Beyeler, A. (2019). Neurobiological links between stress and anxiety. Neurobiology of Stress 11, 100191.


DeBerry, J.J., Robbins, M.T., and Ness, T.J. (2015). The amygdala central nucleus is required for acute stress-induced bladder hyperalgesia in a rat visceral pain model. Brain Res. 1606, 77–85.

De’Franceschi, G., Vivattanasarn, T., Saleem, A.B., and Solomon, S.G. (2016). Vision Guides Selection of Freeze or Flight Defense Strategies in Mice. Current Biology 26, 2150–2154.

Deng, H., Xiao, X., and Wang, Z. (2016). Periaqueductal Gray Neuronal Activities Underlie Different Aspects of Defensive Behaviors. J. Neurosci. 36, 7580–7588.

Devereux, C.L., Fernàndez‐Juricic, E., Krebs, J.R., and Whittingham, M.J. (2008). Habitat affects escape behaviour and alarm calling in Common Starlings Sturnus vulgaris. Ibis 150, 191–198.

DeYoung, C.G. (2013). The neuromodulator of exploration: A unifying theory of the role of dopamine in personality. Front Hum Neurosci 7.

Di Ciano, P., and Everitt, B.J. (2004). Direct interactions between the basolateral amygdala and nucleus accumbens core underlie cocaine-seeking behavior by rats. J. Neurosci. 24, 7167–7173.

Dielenberg, R.A., and McGregor, I.S. (2001). Defensive behavior in rats towards predatory odors: a review. Neuroscience & Biobehavioral Reviews 25, 597–609.

Dietz, M., and Zimmermann, E. (2004). Does call structure in a nocturnal primate change with arousal? Folia Primatologica 75, 645.

Duvarci, S., and Pare, D. (2014). Amygdala microcircuits controlling learned fear. Neuron 82, 966–980.

Acquas, E., and Di Chiara, G. Dopamine–acetylcholine interactions.

Edeline, J.-M., Manunta, Y., and Hennevin, E. (2011). Induction of selective plasticity in the frequency tuning of auditory cortex and auditory thalamus neurons by locus coeruleus stimulation. Hear. Res. 274, 75–84.

Egnor, S.R., and Seagraves, K.M. (2016). The contribution of ultrasonic vocalizations to mouse courtship. Current Opinion in Neurobiology 38, 1–5.

Egorov, A.V., Unsicker, K., and Halbach, O.V.B. und (2006). Muscarinic control of graded persistent activity in lateral amygdala neurons. European Journal of Neuroscience 24, 3183–3194.

Ehret, G. (2013). Sound communication in house mice: emotions in their voices and ears? In Evolution of Emotional Communication: From Sounds in Nonhuman Mammals to Speech and Music in Man, pp. 63–74.

Ethofer, T., Anders, S., Erb, M., Droll, C., Royen, L., Saur, R., ... & Wildgruber, D. (2006). Impact of voice on emotional judgment of faces: An event‐related fMRI study. Human brain mapping, 27(9), 707-714.

Evans, D.A., Stempel, A.V., Vale, R., Ruehle, S., Lefler, Y., and Branco, T. (2018). A synaptic threshold mechanism for computing escape decisions. Nature 558, 590–594.

Evans, D.A., Stempel, A.V., Vale, R., and Branco, T. (2019). Cognitive Control of Escape Behaviour. Trends in Cognitive Sciences 23, 334–348.

Fallow, P.M., Gardner, J.L., and Magrath, R.D. (2011). Sound familiar? Acoustic similarity provokes responses to unfamiliar heterospecific alarm calls. Behav Ecol 22, 401–410.

Favero, M., Varghese, G., and Castro-Alamancos, M.A. (2012). The state of somatosensory cortex during neuromodulation. J Neurophysiol 108, 1010–1024.

Fecteau, S., Belin, P., Joanette, Y., and Armony, J.L. (2007). Amygdala responses to nonlinguistic emotional vocalizations. Neuroimage 36, 480–487.

Flower, T. (2011a). Fork-tailed drongos use deceptive mimicked alarm calls to steal food. Proceedings of the Royal Society B: Biological Sciences 278, 1548–1555.

Flower, T. (2011b). The bird that cries hawk: fork-tailed drongos rob meerkats with false alarms. Proceedings of the Royal Society B: Biological Sciences 1711, 1548–1555.

Forcelli, P.A., Waguespack, H.F., and Malkova, L. (2017). Defensive Vocalizations and Motor Asymmetry Triggered by Disinhibition of the Periaqueductal Gray in Non-human Primates. Front Neurosci 11, 163.

Frühholz, S., Ceravolo, L., & Grandjean, D. (2012). Specific brain networks during explicit and implicit decoding of emotional prosody. Cerebral cortex, 22(5), 1107-1117.

Frühholz, S., Sander, D., & Grandjean, D. M. (2014). Functional neuroimaging of human vocalizations and affective speech. Behavioral and Brain Sciences, 37(6), 554-555.

Frühholz, S., Trost, W., & Kotz, S. A. (2016). The sound of emotions—Towards a unifying neural network perspective of affective sound processing. Neuroscience & Biobehavioral Reviews, 68, 96-110.

Füzesi, T., Daviu, N., Wamsteeker Cusulin, J.I., Bonin, R.P., and Bains, J.S. (2016). Hypothalamic CRH neurons orchestrate complex behaviors after stress. Nature Communications 7, 1–14.

Gabbott, P.L., Warner, T.A., Jays, P.R., Salway, P., and Busby, S.J. (2005). Prefrontal cortex in the rat: projections to subcortical autonomic, motor, and limbic centers. Journal of Comparative Neurology 492(2), 145–177.

Gadziola, M.A., Grimsley, J.M.S., Shanbhag, S.J., and Wenstrup, J.J. (2012a). A novel coding mechanism for social vocalizations in the lateral amygdala. J. Neurophysiol. 107, 1047–1057.

Gadziola, M.A., Grimsley, J.M.S., Faure, P.A., and Wenstrup, J.J. (2012b). Social Vocalizations of Big Brown Bats Vary with Behavioral Context. PLOS ONE 7, e44550.

Gadziola, M.A., Shanbhag, S.J., and Wenstrup, J.J. (2016). Two distinct representations of social vocalizations in the basolateral amygdala. J. Neurophysiol. 115, 868–886.

Garner, J., Weisker, S., Dufour, B., and Mench, J. (2004). Barbering (fur and whisker trimming) by laboratory mice as a model of human trichotillomania and obsessive- compulsive spectrum disorders. Comp Med 54, 216–224.

Gaub, S., Fisher, E., and Ehret, G. (2016). Ultrasonic vocalizations of adult male Foxp2‐ mutant mice: behavioral contexts of arousal and emotion. Genes, Brain and Behavior 15, 243–259.

Genud-Gabai, R., Klavir, O., and Paz, R. (2013). Safety signals in the primate amygdala. J. Neurosci. 33, 17986–17994.

Gholampour, F., Riem, M.M.E., and van den Heuvel, M.I. (2020). Maternal brain in the process of maternal-infant bonding: Review of the literature. Soc Neurosci 1–5.

Gibbs, R.B. (1996). Fluctuations in relative levels of choline acetyltransferase mRNA in different regions of the rat basal forebrain across the estrous cycle: effects of estrogen and progesterone. J. Neurosci. 16, 1049–1055.

Gielow, M.R., and Zaborszky, L. (2017). The Input-Output Relationship of the Cholinergic Basal Forebrain. Cell Reports 18, 1817–1830.

Gil-da-Costa, R., Braun, A., Lopes, M., Hauser, M.D., Carson, R.E., Herscovitch, P., and Martin, A. (2004). Toward an evolutionary perspective on conceptual representation: Species-specific calls activate visual and affective processing systems in the macaque. PNAS 101, 17516–17521.

Gold, R., Butler, P., Revheim, N., Leitman, D.I., Hansen, J.A., Gur, R.C., et al., 2012. Auditory emotion recognition impairments in schizophrenia: relationship to acoustic features and cognition. Am. J. Psychiatr. 169 (4), 424–432.


Goldstein, L.E., Rasmusson, A.M., Bunney, B.S., and Roth, R.H. (1996). Role of the amygdala in the coordination of behavioral, neuroendocrine, and prefrontal cortical monoamine responses to psychological stress in the rat. J. Neurosci. 16, 4787–4798.

Goosens, K.A., Holt, W., and Maren, S. (2000). A role for amygdaloid PKA and PKC in the acquisition of long-term conditional fear in rats. Behav. Brain Res. 114, 145– 152.

Gore, F., Schwartz, E.C., Brangers, B.C., Aladi, S., Stujenske, J.M., Likhtik, E., Russo, M.J., Gordon, J.A., Salzman, C.D., and Axel, R. (2015). Neural Representations of Unconditioned Stimuli in Basolateral Amygdala Mediate Innate and Learned Responses. Cell 162, 134–145.

Gorka, A.X., Knodt, A.R., and Hariri, A.R. (2015). Basal forebrain moderates the magnitude of task-dependent amygdala functional connectivity. Soc Cogn Affect Neurosci 10, 501–507.

Gourbal, B.E.F., Barthelemy, M., Petit, G., and Gabrion, C. (2004). Spectrographic analysis of the ultrasonic vocalisations of adult male and female BALB/c mice. Naturwissenschaften 91, 381–385.

Green, D.B., Shackleton, T.M., Grimsley, J.M.S., Zobay, O., Palmer, A.R., and Wallace, M.N. (2018). Communication calls produced by electrical stimulation of four structures in the guinea pig brain. PLOS ONE 13, e0194091.

Grimsley, J.M.S., Monaghan, J.J.M., and Wenstrup, J.J. (2011). Development of Social Vocalizations in Mice. PLoS ONE 6, e17460.

Grimsley, J.M.S., Hazlett, E.G., and Wenstrup, J.J. (2013). Coding the meaning of sounds: contextual modulation of auditory responses in the basolateral amygdala. J. Neurosci. 33, 17538–17548.

Grimsley, J., Sheth, S., Vallabh, N., Grimsley, C. A., Bhattal, J., Latsko, M., ... & Wenstrup, J. J. (2016). Contextual modulation of vocal behavior in mouse: newly identified 12 kHz “mid-frequency” vocalization emitted during restraint. Frontiers in behavioral neuroscience,10, 38.

Grisendi, T., Reynaud, O., Clarke, S., and Da Costa, S. (2019). Processing pathways for emotional vocalizations. Brain Struct Funct 224, 2487–2504.

Gründemann, J., Bitterman, Y., Lu, T., Krabbe, S., Grewe, B.F., Schnitzer, M.J., and Lüthi, A. (2019). Amygdala ensembles encode behavioral states. Science 364.

Hager, T., Jansen, R.F., Pieneman, A.W., Manivannan, S.N., Golani, I., van der Sluis, S., Smit, A.B., Verhage, M., and Stiedl, O. (2014). Display of individuality in avoidance behavior and risk assessment of inbred mice. Front Behav Neurosci 8.

Hahn, M.E., and Lavooy, M.J. (2005). A review of the methods of studies on infant production and maternal retrieval in small rodents. Behav. Genet. 35, 31–52.

Hamann, S., and Mao, H. (2002). Positive and negative emotional verbal stimuli elicit activity in the left amygdala. Neuroreport 13, 15–19.

Hammerschmidt, K., Radyushkin, K., Ehrenreich, H., and Fischer, J. (2009). Female mice respond to male ultrasonic ‘songs’ with approach behaviour. Biol Lett 5, 589–592.

Hanson, J.L., and Hurley, L.M. (2012). Female Presence and Estrous State Influence Mouse Ultrasonic Courtship Vocalizations. PLOS ONE 7, e40782.

Hanson, J. L., & Hurley, L. M. (2014). Context-dependent fluctuation of serotonin in the auditory midbrain: the influence of sex, reproductive state, and experience. Journal of Experimental Biology, 217(4), 526-535.

Hanson, J. L., & Hurley, L. M. (2016). Serotonin, estrus, and social context influence c-Fos immunoreactivity in the inferior colliculus. Behavioral neuroscience, 130(6), 600.

Wang, H., Liang, S., Burgdorf, J., Wess, J., & Yeomans, J. (2008). Ultrasonic vocalizations induced by sex and amphetamine in M2, M4, M5 muscarinic and D2 dopamine receptor knockout mice. PloS one, 3(4), e1893.

Hariri, A.R., Tessitore, A., Mattay, V.S., Fera, F., and Weinberger, D.R. (2002). The Amygdala Response to Emotional Stimuli: A Comparison of Faces and Scenes. NeuroImage 17, 317–323.

Harris, C.R., and Jenkins, M. (2006). Gender Differences in Risk Assessment: Why do Women Take Fewer Risks than Men? Judgment and Decision Making 1, 16.

Heckman, J., McGuinness, B., Celikel, T., and Englitz, B. (2016). Determinants of the mouse ultrasonic vocal structure and repertoire. Neuroscience & Biobehavioral Reviews 65, 313–325.

Henriques-Alves, A.M., and Queiroz, C.M. (2015). Ethological Evaluation of the Effects of Social Defeat Stress in Mice: Beyond the Social Interaction Ratio. Front Behav Neurosci 9, 364.

Hiraoka, D., Ooishi, Y., Mugitani, R., and Nomura, M. (2019). Differential Effects of Infant Vocalizations on Approach-Avoidance Postural Movements in Mothers. Front. Psychol. 10.

Hoebel, B.G., Avena, N.M., and Rada, P. (2007). Accumbens dopamine-acetylcholine balance in approach and avoidance. Curr Opin Pharmacol 7, 617–627.


Holliday, N., and Jaggers, Z. (2015). Influence of suprasegmental features on perceived ethnicity of American politicians. In ICPhS.

Holly, K.S., Orndorff, C.O., and Murray, T.A. (2016). MATSAP: An automated analysis of stretch-attend posture in rodent behavioral experiments. Sci Rep 6.

Holveck, M.-J., Vieira de Castro, A.C., Lachlan, R.F., ten Cate, C., and Riebel, K. (2008). Accuracy of song syntax learning and consistency signal early condition in zebra finches. Behav Ecol 19, 1267–1281.

Holy, T.E., and Guo, Z. (2005). Ultrasonic Songs of Male Mice. PLOS Biology 3, e386.

Hurley, L. (2019). Neuromodulatory Feedback to the Inferior Colliculus.

Hurley, L.M., and Kalcounis-Rueppell, M.C. (2018). State and Context in Vocal Communication of Rodents. In Rodent Bioacoustics, M.L. Dent, R.R. Fay, and A.N. Popper, eds. (Cham: Springer International Publishing), pp. 191–221.

Hurley, L.M., and Pollak, G.D. (2005). Serotonin modulates responses to species-specific vocalizations in the inferior colliculus. J Comp Physiol A 191, 535–546.

Willuhn, I., Tose, A., Wanat, M.J., Hart, A.S., Hollon, N.G., Phillips, P.E.M., Schwarting, R.K.W., and Wöhr, M. (2014). Phasic dopamine release in the nucleus accumbens in response to pro-social 50 kHz ultrasonic vocalizations in rats. J. Neurosci. 34, 10616–10623.

Janak, P.H., and Tye, K.M. (2015). From circuits to behaviour in the amygdala. Nature 517, 284–292.

Jiang, C., Liu, F., and Wong, P.C.M. (2017). Sensitivity to musical emotion is influenced by tonal structure in congenital amusia. Sci Rep 7.

Jiang, L., Kundu, S., Lederman, J.D., López-Hernández, G.Y., Ballinger, E.C., Wang, S., Talmage, D.A., and Role, L.W. (2016). Cholinergic signaling controls conditioned-fear behaviors and enhances plasticity of cortical-amygdala circuits. Neuron 90, 1057–1070.

Jimenez, S.A., and Maren, S. (2009). Nuclear disconnection within the amygdala reveals a direct pathway to fear. Learn. Mem. 16, 766–768.

Johansen, J.A., Clemens, L.G., and Nunez, A.A. (2008). Characterization of Copulatory behavior in female mice: evidence for paced mating. Physiol Behav 95, 425–429.

Johnstone, T., Reekum, C.M.V., Bänziger, T., Hird, K., Kirsner, K., and Scherer, K.R. (2007). The effects of difficulty and gain versus loss on vocal physiology and acoustics. Psychophysiology 44, 827–837.


Fudge, J.L., and Emiliano, A.B. (2003). The extended amygdala and the dopamine system: another piece of the dopamine puzzle. J Neuropsychiatry Clin Neurosci 15, 306–316.

Jürgens, U., and Ploog, D. (1970). Cerebral representation of vocalization in the squirrel monkey. Exp Brain Res 10.

Kang, O., and Johnson, D. (2018). The roles of suprasegmental features in predicting English oral proficiency with an automated system. Language Assessment Quarterly 15, 150–168.

Keesom, S.M., and Hurley, L.M. (2016). Socially induced serotonergic fluctuations in the male auditory midbrain correlate with female behavior during courtship. J. Neurophysiol. 115, 1786–1796.

Keifer, O.P., Gutman, D.A., Hecht, E.E., Keilholz, S.D., and Ressler, K.J. (2015). A comparative analysis of mouse and human medial geniculate nucleus connectivity: a DTI and anterograde tracing study. Neuroimage 105, 53–66.

Kennedy, A., Asahina, K., Hoopfer, E., Inagaki, H., Jung, Y., Lee, H., Remedios, R., and Anderson, D.J. (2014). Internal States and Behavioral Decision-Making: Toward an Integration of Emotion and Cognition. Cold Spring Harb Symp Quant Biol 79, 199–210.

Kim, J., Lee, S., Park, K., Hong, I., Song, B., Son, G., Park, H., Kim, W.R., Park, E., Choe, H.K., et al. (2007). Amygdala depotentiation and fear extinction. Proc. Natl. Acad. Sci. U.S.A. 104, 20955–20960.

Kim, S.M., Su, C.-Y., and Wang, J.W. (2017). Neuromodulation of Innate Behaviors in Drosophila. Annual Review of Neuroscience 40, 327–348.

Klasen, M., Kenworthy, C. A., Mathiak, K. A., Kircher, T. T., & Mathiak, K. (2011). Supramodal representation of emotions. Journal of Neuroscience, 31(38), 13635-13643.

Knox, D. (2016). The role of basal forebrain cholinergic neurons in fear and extinction memory. Neurobiology of Learning and Memory 133, 39–52.

Knutson, B., Burgdorf, J., and Panksepp, J. (1998). Anticipation of play elicits high- frequency ultrasonic vocalizations in young rats. Journal of Comparative Psychology 112, 65–73.

Koelsch, S., Fritz, T., V Cramon, D.Y., Müller, K., and Friederici, A.D. (2006). Investigating emotion with music: an fMRI study. Hum Brain Mapp 27, 239–250.

Krichmar, J.L. (2008). The Neuromodulatory System: A Framework for Survival and Adaptive Behavior in a Challenging World. Adaptive Behavior 16, 385–399.


Kröner, S., Rosenkranz, J.A., Grace, A.A., and Barrionuevo, G. (2005). Dopamine Modulates Excitability of Basolateral Amygdala Neurons In Vitro. Journal of Neurophysiology 93, 1598–1610.

Kršiak, M., Šulcová, A., Tomašiková, Z., Dlohožková, N., Kosař, E., and Mašek, K. (1981). Drug Effects on Attack, Defense and Escape in Mice. Pharmacology Biochemistry and Behavior 14, 47–52.

Lahvis, G.P., Alleva, E., and Scattoni, M.L. (2011). Translating Mouse Vocalizations: Prosody and Frequency Modulation. Genes Brain Behav 10, 4–16.

Lamont, E.W., and Kokkinidis, L. (1998). Infusion of the dopamine D1 receptor antagonist SCH 23390 into the amygdala blocks fear expression in a potentiated startle paradigm. Brain Res. 795, 128–136.

Lanuza, E., Nader, K., & Ledoux, J. E. (2004). Unconditioned stimulus pathways to the amygdala: effects of posterior thalamic and cortical lesions on fear conditioning. Neuroscience, 125(2), 305-315.

Larson, C.M., Wilcox, G.L., and Fairbanks, C.A. (2019). The Study of Pain in Rats and Mice. Comp Med 69, 555–570.

LeDoux, J. (2003). The emotional brain, fear, and the amygdala. Cell. Mol. Neurobiol. 23, 727–738.

LeDoux, J.E. (2000). Emotion circuits in the brain. Annu. Rev. Neurosci. 23, 155–184.

LeDoux, J.E., Thompson, M.E., Iadecola, C., Tucker, L.W., and Reis, D.J. (1983). Local cerebral blood flow increases during auditory and emotional processing in the conscious rat. Science 221, 576–578.

LeDoux, J. (2007). The amygdala. Curr Biol 17, R868–R874.

LeDoux, J.E., Sakaguchi, A., Iwata, J., and Reis, D.J. (1985). Auditory emotional memories: establishment by projections from the medial geniculate nucleus to the posterior neostriatum and/or dorsal amygdala. Ann. N. Y. Acad. Sci. 444, 463–464.

LeDoux, J.E., Farb, C.R., and Romanski, L.M. (1991). Overlapping projections to the amygdala and striatum from auditory processing areas of the thalamus and cortex. Neurosci. Lett. 134, 139–144.

Lefebvre, E., Granon, S., and Chauveau, F. (2020). Social context increases ultrasonic vocalizations during restraint in adult mice. Anim Cogn 23, 351–359.

Leitman, D.I., Wolf, D.H., Ragland, J.D., Laukka, P., Loughead, J., Valdez, J.N., Javitt, D.C., Turetsky, B.I., and Gur, R.C. (2010). “It’s Not What You Say, But How You Say it”: A Reciprocal Temporo-frontal Network for Affective Prosody. Front Hum Neurosci 4.

Leitman, D.I., Laukka, P., Juslin, P.N., Saccente, E., Butler, P., and Javitt, D.C. (2010). Getting the cue: sensory contributions to auditory emotion recognition impairments in schizophrenia. Schizophr Bull 36, 545–556.

Leitman, D.I., Sehatpour, P., Higgins, B.A., Foxe, J.J., Silipo, G., and Javitt, D.C. (2010). Sensory deficits and distributed hierarchical dysfunction in schizophrenia. Am J Psychiatry 167, 818–827.

Lemasson, A., Remeuf, K., Rossard, A., and Zimmermann, E. (2012). Cross-taxa similarities in affect-induced changes of vocal behavior and voice in arboreal monkeys. PLoS ONE 7, e45106.

Lepistö, T., Kujala, T., Vanhala, R., Alku, P., Huotilainen, M., and Näätänen, R. (2005). The discrimination of and orienting to speech and non-speech sounds in children with autism. Brain Res 1066, 147–157.

Li, X.F., Stutzmann, G.E., and LeDoux, J.E. (1996). Convergent but temporally separated inputs to lateral amygdala neurons from the auditory thalamus and auditory cortex use different postsynaptic receptors: in vivo intracellular and extracellular recordings in fear conditioning pathways. Learn. Mem. 3, 229–242.

Li, T., Horta, M., Mascaro, J.S., Bijanki, K., Arnal, L.H., Adams, M., et al. (2018). Explaining individual variation in paternal brain responses to infant cries. Physiol Behav 193, 43–54.

Li, Y., Mathis, A., Grewe, B.F., Osterhout, J.A., Ahanonu, B., Schnitzer, M.J., Murthy, V.N., and Dulac, C. (2017). Neuronal Representation of Social Information in the Medial Amygdala of Awake Behaving Mice. Cell 171, 1176-1190.e17.

Li, X., Yu, B., Sun, Q., Zhang, Y., Ren, M., Zhang, X., et al. (2018). Generation of a whole-brain atlas for the cholinergic system and mesoscopic projectome analysis of basal forebrain cholinergic neurons. PNAS 115, 415–420.

Liao, D.A., Zhang, Y.S., Cai, L.X., and Ghazanfar, A.A. (2018). Internal states and extrinsic factors both determine monkey vocal production. PNAS 115, 3978–3983.

Liebenthal, E., Silbersweig, D.A., and Stern, E. (2016). The language, tone and prosody of emotions: neural substrates and dynamics of spoken-word emotion perception. Front Neurosci 10, 1–13.

Likhtik, E., and Johansen, J.P. (2019). Neuromodulation in circuits of aversive emotional learning. Nature Neuroscience 22, 1586–1597.

Lin, D., Boyle, M.P., Dollar, P., Lee, H., Lein, E.S., Perona, P., and Anderson, D.J. (2011). Functional identification of an aggression locus in the mouse hypothalamus. Nature 470, 221–226.

Litvin, Y., Blanchard, D.C., and Blanchard, R.J. (2010). Chapter 5.1 - Vocalization as a social signal in defensive behavior. In Handbook of Behavioral Neuroscience, S.M. Brudzynski, ed. (Elsevier), pp. 151–157.

Liu, H.-X., Lopatina, O., Higashida, C., Fujimoto, H., Akther, S., Inzhutova, A., Liang, M., Zhong, J., Tsuji, T., Yoshihara, T., et al. (2013). Displays of paternal mouse pup retrieval following communicative interaction with maternal mates. Nature Communications 4, 1346.

Love, T.M. (2013). Oxytocin, Motivation and the Role of Dopamine. Pharmacol Biochem Behav 49–60.

Lutas, A., Kucukdereli, H., Alturkistani, O., Carty, C., Sugden, A.U., Fernando, K., Diaz, V., Flores-Maldonado, V., and Andermann, M.L. (2019). State-specific gating of salient cues by midbrain dopaminergic input to basal amygdala. Nat Neurosci 22, 1820–1833.

Ma, J., and Kanwal, J.S. (2014). Stimulation of the basal and central amygdala in the mustached bat triggers echolocation and agonistic vocalizations within multimodal output. Front Physiol 5.

Macedo, C.E., Martinez, R.C.R., de Souza Silva, M.A., and Brandão, M.L. (2005). Increases in extracellular levels of 5-HT and dopamine in the basolateral, but not in the central, nucleus of amygdala induced by aversive stimulation of the inferior colliculus. Eur. J. Neurosci. 21, 1131–1138.

Maney, D.L. (2013). The incentive salience of courtship vocalizations: Hormone- mediated ‘wanting’ in the auditory system. Hearing Research 305, 19–30.

Mangieri, L.R., Jiang, Z., Lu, Y., Xu, Y., Cassidy, R.M., Justice, N., Xu, Y., Arenkiel, B.R., and Tong, Q. (2019). Defensive Behaviors Driven by a Hypothalamic-Ventral Midbrain Circuit. ENeuro 6.

Manser, M.B., Seyfarth, R.M., and Cheney, D.L. (2002). Suricate alarm calls signal predator class and urgency. Trends Cogn. Sci. (Regul. Ed.) 6, 55–57.

Mansvelder, H.D., Mertz, M., and Role, L.W. (2009). Nicotinic modulation of synaptic transmission and plasticity in cortico-limbic circuits. Semin. Cell Dev. Biol. 20, 432–440.

Mariappan, S., Bogdanowicz, W., Raghuram, H., Marimuthu, G., and Rajan, K.E. (2016). Structure of distress call: implication for specificity and activation of dopaminergic system. J. Comp. Physiol. A Neuroethol. Sens. Neural. Behav. Physiol. 202, 55–65.

Mark, G.P., Shabani, S., Dobbs, L.K., and Hansen, S.T. (2011). Cholinergic modulation of mesolimbic dopamine function and reward. Physiology & Behavior 104, 76–81.

van Marle, H.J.F., Hermans, E.J., Qin, S., and Fernández, G. (2009). From specificity to sensitivity: how acute stress affects amygdala processing of biologically salient stimuli. Biol. Psychiatry 66, 649–655.

Márquez, R. (1995). Female Choice in the Midwife Toads (Alytes Obstetricans and a. Cisternasii). Behaviour 132, 151–161.

Mascagni, F., Muly, E.C., Rainnie, D.G., and McDonald, A.J. (2009). Immunohistochemical characterization of parvalbumin-containing interneurons in the monkey basolateral amygdala. Neuroscience 158, 1541–1550.

Matsumoto, Y.K., and Okanoya, K. (2016). Phase-Specific Vocalizations of Male Mice at the Initial Encounter during the Courtship Sequence. PLOS ONE 11, e0147102.

Matsumoto, Y.K., and Okanoya, K. (2018). Mice modulate ultrasonic calling bouts according to sociosexual context. Royal Society Open Science 5, 180378.

Matsumoto, J., Nishimaru, H., Takamura, Y., Urakawa, S., Ono, T., and Nishijo, H. (2016a). Amygdalar Auditory Neurons Contribute to Self-Other Distinction during Ultrasonic Social Vocalization in Rats. Front Neurosci 10, 399.

Matsumoto, J., Nishimaru, H., Takamura, Y., Urakawa, S., Ono, T., and Nishijo, H. (2016b). Amygdalar Auditory Neurons Contribute to Self-Other Distinction during Ultrasonic Social Vocalization in Rats. Front Neurosci 10, 399.

Matsumoto, Y.K., Okanoya, K., and Seki, Y. (2012). Effects of amygdala lesions on male mouse ultrasonic vocalizations and copulatory behaviour. Neuroreport 23, 676–680.

McDonald, A.J. (1998). Cortical pathways to the mammalian amygdala. Prog. Neurobiol. 55, 257–332.

McEwen, B.S. (1998). Multiple ovarian hormone effects on brain structure and function. J Gend Specif Med 1, 33–41.

McGaugh, J.L. (2002). Memory consolidation and the amygdala: a systems perspective. Trends Neurosci. 25, 456.

McGaugh, J.L., McIntyre, C.K., and Power, A.E. (2002). Amygdala modulation of memory consolidation: interaction with other brain systems. Neurobiol Learn Mem 78, 539–552.

McLean, A.C., Valenzuela, N., Fai, S., and Bennett, S.A.L. (2012). Performing vaginal lavage, crystal violet staining, and vaginal cytological evaluation for mouse estrous cycle staging identification. J Vis Exp e4389.

Mesulam, M.M., Mufson, E.J., Wainer, B.H., and Levey, A.I. (1983). Central cholinergic pathways in the rat: an overview based on an alternative nomenclature (Ch1-Ch6). Neuroscience 10, 1185–1201.

Metherate, R. (2011). Functional connectivity and cholinergic modulation in auditory cortex. Neurosci Biobehav Rev 35, 2058–2063.

Minces, V., Pinto, L., Dan, Y., and Chiba, A.A. (2017). Cholinergic shaping of neural correlations. Proc Natl Acad Sci U S A 114, 5725–5730.

Moles, A., Costantini, F., Garbugino, L., Zanettini, C., and D’Amato, F.R. (2007). Ultrasonic vocalizations emitted during dyadic interactions in female mice: a possible index of sociability? Behav. Brain Res. 182, 223–230.

Morrow, J., Mosher, C., and Gothard, K. (2019). Multisensory Neurons in the Primate Amygdala. J. Neurosci. 39, 3663–3675.

Mortillaro, M., Mehu, M., and Scherer, K.R. (2013). The evolutionary origin of multimodal synchronization and emotional expression (Oxford University Press).

Moser, V.C. (1999). Neurobehavioral Screening in Rodents. Current Protocols in Toxicology 00, 11.2.1-11.2.16.

Muller, J.F., Mascagni, F., and McDonald, A.J. (2009). Dopaminergic innervation of pyramidal cells in the rat basolateral amygdala. Brain Struct Funct 213, 275–288.

Murley, A.G., and Rowe, J.B. (2018). Neurotransmitter deficits from frontotemporal lobar degeneration. Brain 141, 1263–1285.

Nadim, F., and Bucher, D. (2014). Neuromodulation of Neurons and Synapses. Curr Opin Neurobiol 0, 48–56.

Namburi, P., Beyeler, A., Yorozu, S., Calhoon, G.G., Halbert, S.A., Wichmann, R., Holden, S.S., Mertens, K.L., Anahtar, M., Felix-Ortiz, A.C., et al. (2015). A circuit mechanism for differentiating positive and negative associations. Nature 520, 675–678.

Namburi, P., Al-Hasani, R., Calhoon, G.G., Bruchas, M.R., and Tye, K.M. (2016). Architectural Representation of Valence in the Limbic System. Neuropsychopharmacology 41, 1697–1715.

Naneix, F., Marchand, A.R., Di Scala, G., Pape, J.-R., and Coutureau, E. (2012). Parallel Maturation of Goal-Directed Behavior and Dopaminergic Systems during Adolescence. J Neurosci 32, 16223–16232.

Narins, P.M., and Smith, S.L. (1986). Clinal variation in anuran advertisement calls: basis for acoustic isolation? Behav Ecol Sociobiol 19, 135–141.

Narins, P.M., Feng, A.S., Lin, W., Schnitzler, H.-U., Denzinger, A., Suthers, R.A., and Xu, C. (2004). Old World frog and bird vocalizations contain prominent ultrasonic harmonics. The Journal of the Acoustical Society of America 115, 910–913.

Naumann, R.T., and Kanwal, J.S. (2011). Basolateral amygdala responds robustly to social calls: spiking characteristics of single unit activity. J Neurophysiol 105, 2389–2404.

Neunuebel, J.P., Taylor, A.L., Arthur, B.J., and Egnor, S.R. (2015). Female mice ultrasonically interact with males during courtship displays. ELife 4, e06203.

Newman, J.D., and Bachevalier, J. (1997). Neonatal ablations of the amygdala and inferior temporal cortex alter the vocal response to social separation in rhesus macaques. Brain Res. 758, 180–186.

Nicolakis, D., Marconi, M.A., Zala, S.M., and Penn, D.J. (2020). Ultrasonic vocalizations in house mice depend upon genetic relatedness of mating partners and correlate with subsequent reproductive success. Front. Zool. 17, 10.

Nieh, E.H., Kim, S.-Y., Namburi, P., and Tye, K.M. (2013). Optogenetic dissection of neural circuits underlying emotional valence and motivated behaviors. Brain Research 1511, 73–92.

Niemczura, A.C. (2019). Stress, Emotionality, and Hearing in Social Communication and Tinnitus. Ph.D. Kent State University.

Okanoya, K., and Screven, L.A. (2018). Rodent Vocalizations: Adaptations to Physical, Social, and Sexual Factors. In Rodent Bioacoustics, M.L. Dent, R.R. Fay, and A.N. Popper, eds. (Cham: Springer International Publishing), pp. 13–41.

O’Neill, P.-K., Gore, F., and Salzman, C.D. (2018). Basolateral amygdala circuitry in positive and negative valence. Current Opinion in Neurobiology 49, 175–183.

Orsini, C.A., Yan, C., and Maren, S. (2013). Ensemble coding of context-dependent fear memory in the amygdala. Front. Behav. Neurosci. 7.

Otani, S., Daniel, H., Roisin, M.P., and Crepel, F. (2003). Dopaminergic modulation of long-term synaptic plasticity in rat prefrontal neurons. Cereb Cortex 13, 1251–1256.

Pannese, A., Grandjean, D., and Frühholz, S. (2016). Amygdala and auditory cortex exhibit distinct sensitivity to relevant acoustic features of auditory emotions. Cortex 85, 116–125.

Parr, L.A., and Waller, B.M. (2006). Understanding chimpanzee facial expression: insights into the evolution of communication. Soc Cogn Affect Neurosci 1, 221–228.

Parsana, A.J., Li, N., and Brown, T.H. (2012). Positive and negative ultrasonic social signals elicit opposing firing patterns in rat amygdala. Behav. Brain Res. 226, 77–86.

Paton, J.J., Belova, M.A., Morrison, S.E., and Salzman, C.D. (2006). The primate amygdala represents the positive and negative value of visual stimuli during learning. Nature 439, 865–870.

Payne, C., and Bachevalier, J. (2013). Crossmodal integration of conspecific vocalizations in rhesus macaques. PLoS ONE 8, e81825.

Payne, C., and Bachevalier, J. (2019). Early amygdala damage alters the way rhesus macaques process species-specific audio-visual vocalizations. Behav. Neurosci. 133, 1– 17.

Peterson, D.C., and Wenstrup, J.J. (2012). Selectivity and persistent firing responses to social vocalizations in the basolateral amygdala. Neuroscience 217, 154–171.

Phelps, E.A., and LeDoux, J.E. (2005). Contributions of the Amygdala to Emotion Processing: From Animal Models to Human Behavior. Neuron 48, 175–187.

Picciotto, M.R., Higley, M.J., and Mineur, Y.S. (2012). Acetylcholine as a neuromodulator: cholinergic signaling shapes nervous system function and behavior. Neuron 76, 116–129.

Pidoplichko, V.I., Prager, E.M., Aroniadou-Anderjaska, V., and Braga, M.F.M. (2013). α7- Containing nicotinic acetylcholine receptors on interneurons of the basolateral amygdala and their role in the regulation of the network excitability. J. Neurophysiol. 110, 2358–2369.

Pignatelli, M., and Beyeler, A. (2019). Valence coding in amygdala circuits. Current Opinion in Behavioral Sciences 26, 97–106.

Pitkänen, A., Pikkarainen, M., Nurminen, N., and Ylinen, A. (2000). Reciprocal connections between the amygdala and the hippocampal formation, perirhinal cortex, and postrhinal cortex in rat. A review. Ann. N. Y. Acad. Sci. 911, 369–391.

Pomerantz, S.M., Nunez, A.A., and Jay Bean, N. (1983). Female behavior is affected by male ultrasonic vocalizations in house mice. Physiology & Behavior 31, 91–96.

Porges, S.W., and Lewis, G.F. (2010). The polyvagal hypothesis: common mechanisms mediating autonomic regulation, vocalizations and listening. In Handbook of Behavioral Neuroscience, Vol. 19 (Elsevier), pp. 255–264.

Portfors, C.V., and Perkel, D.J. (2014). The role of ultrasonic vocalizations in mouse communication. Current Opinion in Neurobiology 28, 115–120.

Poulin, J.-F., Caronia, G., Hofer, C., Cui, Q., Helm, B., Ramakrishnan, C., Chan, C.S., Dombeck, D., Deisseroth, K., and Awatramani, R. (2018). Mapping projections of molecularly defined dopamine neuron subtypes using intersectional genetic approaches. Nat Neurosci 21, 1260–1271.

Reinhard, S.M., Rais, M., Afroz, S., Hanania, Y., Pendi, K., Espinoza, K., Rosenthal, R., Binder, D.K., Ethell, I.M., and Razak, K.A. (2019). Reduced perineuronal net expression in Fmr1 KO mice auditory cortex and amygdala is linked to impaired fear-associated memory. Neurobiology of Learning and Memory 164, 107042.

Reinisch, E., Jesse, A., and McQueen, J.M. (2011). Speaking Rate Affects the Perception of Duration as a Suprasegmental Lexical-stress Cue. Language and Speech.

Remedios, R., Logothetis, N.K., and Kayser, C. (2009). Monkey drumming reveals common networks for perceiving vocal and nonvocal communication sounds. PNAS 106, 18010–18015.

Rendall, D. (2003). Acoustic correlates of caller identity and affect intensity in the vowel- like grunt vocalizations of baboons. J. Acoust. Soc. Am. 113, 3390–3402.

Rendall, D., and Owren, M.J. (2010). Chapter 5.4 - Vocalizations as tools for influencing the affect and behavior of others. In Handbook of Behavioral Neuroscience, S.M. Brudzynski, ed. (Elsevier), pp. 177–185.

Rendon, N.M., Keesom, S.M., Amadi, C., Hurley, L.M., and Demas, G.E. (2015). Vocalizations convey sex, seasonal phenotype, and aggression in a seasonal mammal. Physiol. Behav. 152, 143–150.

Rice, M.E. (2019). Closing in on what motivates motivation. Nature 570, 40–42.

Roelfsema, F., van den Berg, G., Frölich, M., Veldhuis, J.D., van Eijk, A., Buurman, M.M., and Etman, B.H. (1993). Sex-dependent alteration in cortisol response to endogenous adrenocorticotropin. J. Clin. Endocrinol. Metab. 77, 234–240.

Roelofs, K. (2017). Freeze for action: neurobiological mechanisms in animal and human freezing. Philos Trans R Soc Lond B Biol Sci 372.

Romanski, L.M., and LeDoux, J.E. (1992). Equipotentiality of thalamo-amygdala and thalamo-cortico-amygdala circuits in auditory fear conditioning. J. Neurosci. 12, 4501– 4509.

Ronald, K.L., Zhang, X., Morrison, M.V., Miller, R., and Hurley, L.M. (2020). Male mice adjust courtship behavior in response to female multimodal signals. PLOS ONE 15, e0229302.

Rosenkranz, J.A., and Grace, A.A. (1999). Modulation of Basolateral Amygdala Neuronal Firing and Afferent Drive by Dopamine Receptor Activation In Vivo. J. Neurosci. 19, 11027–11039.

Rosenblau, G., Kliemann, D., Dziobek, I., and Heekeren, H.R. (2017). Emotional prosody processing in autism spectrum disorder. Soc Cognit Affect Neurosci 12, 224–239.

Sah, P., Faber, E.S.L., Lopez De Armentia, M., and Power, J. (2003). The amygdaloid complex: anatomy and physiology. Physiol. Rev. 83, 803–834.

Sales, G., and Pye, D. (1974). Ultrasound in rodents. In Ultrasonic Communication by Animals (Dordrecht: Springer), pp. 149–201.

Planalp, S. (1998). Communicating Emotion (Cambridge University Press).

Sander, K., and Scheich, H. (2001). Auditory perception of laughing and crying activates human amygdala regardless of attentional state. Brain Res Cogn Brain Res 12, 181–198.

Sander, D., Grafman, J., & Zalla, T. (2003). The human amygdala: an evolved system for relevance detection. Reviews in the Neurosciences, 14(4), 303-316.

Sander, K., & Scheich, H. (2005). Left auditory cortex and amygdala, but right insula dominance for human laughing and crying. Journal of cognitive Neuroscience, 17(10), 1519-1531.

Sander, D., Grandjean, D., Pourtois, G., Schwartz, S., Seghier, M.L., Scherer, K.R., and Vuilleumier, P. (2005). Emotion and attention interactions in social cognition: brain regions involved in processing anger prosody. Neuroimage 28, 848–858.

Sangiamo, D.T., Warren, M.R., and Neunuebel, J.P. (2020). Ultrasonic signals associated with different types of social behavior of mice. Nature Neuroscience 23, 411–422.

Sacchetti, B., Lorenzini, C.A., Baldi, E., Tassoni, G., and Bucherelli, C. (1999). Auditory thalamus, dorsal hippocampus, basolateral amygdala, and perirhinal cortex role in the consolidation of conditioned freezing to context and to acoustic conditioned stimulus in the rat. J Neurosci 19, 9570–9578.

Sayin, S., Boehm, A.C., Kobler, J.M., De Backer, J.-F., and Grunwald Kadow, I.C. (2018). Internal State Dependent Odor Processing and Perception—The Role of Neuromodulation in the Fly Olfactory System. Front. Cell. Neurosci. 12.

Scattoni, M.L., and Branchi, I. (2010). Chapter 3.5 - Vocal repertoire in mouse pups: strain differences. In Handbook of Behavioral Neuroscience, S.M. Brudzynski, ed. (Elsevier), pp. 89–95.

Scattoni, M.L., Crawley, J., and Ricceri, L. (2009). Ultrasonic vocalizations: A tool for behavioural phenotyping of mouse models of neurodevelopmental disorders. Neuroscience & Biobehavioral Reviews 33, 508–515.

Schneider, I., Bertsch, K., Izurieta Hidalgo, N.A., Müller, L.E., Defiebre, N., and Herpertz, S.C. (2019). The Sound and Face of Others: Vocal Priming Effects on Facial Emotion Processing in Posttraumatic Stress Disorder. Psychopathology 52, 283–293.

Schofield, B.R., and Hurley, L. (2018). Circuits for Modulation of Auditory Function. In “The Mammalian Auditory Pathways: Synaptic Organization and Microcircuits”, D.L. Oliver, N.B. Cant, R.R. Fay, and A.N. Popper, eds. (Cham: Springer International Publishing), pp. 235–267.

Schönfeld, L.-M., Zech, M.-P., Schäble, S., Wöhr, M., and Kalenscher, T. (2020). Lesions of the rat basolateral amygdala reduce the behavioral response to ultrasonic vocalizations. Behav. Brain Res. 378, 112274.

Schwarting, R.K.W., and Wöhr, M. (2012). On the relationships between ultrasonic calling and anxiety-related behavior in rats. Braz J Med Biol Res 45, 337–348.

See, R.E., Kruzich, P.J., and Grimm, J.W. (2001). Dopamine, but not glutamate, receptor blockade in the basolateral amygdala attenuates conditioned reward in a rat model of relapse to cocaine-seeking behavior. Psychopharmacology (Berl.) 154, 301–310.

See, R.E., McLaughlin, J., and Fuchs, R.A. (2003). Muscarinic receptor antagonism in the basolateral amygdala blocks acquisition of cocaine-stimulus association in a model of relapse to cocaine-seeking behavior in rats. Neuroscience 117, 477–483.

Seyfarth, R., and Cheney, D. (1990). The assessment by vervet monkeys of their own and another species’ alarm calls. Animal Behaviour 40, 754–764.

Seyfarth, R.M., Cheney, D.L., and Marler, P. (1980). Monkey responses to three different alarm calls: evidence of predator classification and semantic communication. Science 210, 801–803.

Shabel, S.J., and Janak, P.H. (2009). Substantial similarity in amygdala neuronal activity during conditioned appetitive and aversive emotional arousal. PNAS 106, 15031–15036.

Shekhar, A., and Dimicco, J.A. (1987). Defense reaction elicited by injection of GABA antagonists and synthesis inhibitors into the posterior hypothalamus in rats. Neuropharmacology 26, 407–417.

Shepard, K.N., and Liu, R.C. (2011). Experience restores innate female preference for male ultrasonic vocalizations. Genes, Brain and Behavior 10, 28–34.

Sheppard, S.M., Keator, L.M., Breining, B.L., Wright, A.E., Saxena, S., Tippett, D.C., and Hillis, A.E. (2020). Right hemisphere ventral stream for emotional prosody identification: Evidence from acute stroke. Neurology 94, e1013–e1020.

Shughrue, P.J., Scrimo, P.J., and Merchenthaler, I. (2000a). Estrogen binding and estrogen receptor characterization (ERalpha and ERbeta) in the cholinergic neurons of the rat basal forebrain. Neuroscience 96, 41–49.

Shughrue, P.J., Scrimo, P.J., and Merchenthaler, I. (2000b). Estrogen binding and estrogen receptor characterization (ERalpha and ERbeta) in the cholinergic neurons of the rat basal forebrain. Neuroscience 96, 41–49.

Silva, B.A., Gross, C.T., and Gräff, J. (2016). The neural circuits of innate fear: detection, integration, action, and memorization. Learn Mem 23, 544–555.

Sirotin, Y.B., Costa, M.E., and Laplagne, D.A. (2014). Rodent ultrasonic vocalizations are bound to active sniffing behavior. Front Behav Neurosci 8, 399.

Snyder, J.S., Schwiedrzik, C.M., Vitela, A.D., and Melloni, L. (2015). How previous experience shapes perception in different sensory modalities. Front Hum Neurosci 9.

Soltis, J., Blowers, T.E., and Savage, A. (2011). Measuring positive and negative affect in the voiced sounds of African elephants (Loxodonta africana). J. Acoust. Soc. Am. 129, 1059–1066.

Spencer, K.A., Buchanan, K.L., Leitner, S., Goldsmith, A.R., and Catchpole, C.K. (2005). Parasites affect song complexity and neural development in a songbird. Proc. Biol. Sci. 272, 2037–2043.

Städele, C., Keleş, M.F., Mongeau, J.-M., and Frye, M.A. (2020). Non-canonical Receptive Field Properties and Neuromodulation of Feature-Detecting Neurons in Flies. Curr. Biol.

Stewart, A.M., Lewis, G.F., Heilman, K.J., Davila, M.I., Coleman, D.D., Aylward, S.A., and Porges, S.W. (2013). The covariation of acoustic features of infant cries and autonomic state. Physiol. Behav. 120, 203–210.

Stuber, G.D., Sparta, D.R., Stamatakis, A.M., van Leeuwen, W.A., Hardjoprajitno, J.E., Cho, S., Tye, K.M., Kempadoo, K.A., Zhang, F., Deisseroth, K., et al. (2011). Excitatory transmission from the amygdala to nucleus accumbens facilitates reward seeking. Nature 475, 377–380.

Suthers, R.A., Narins, P.M., Lin, W.-Y., Schnitzler, H.-U., Denzinger, A., Xu, C.-H., and Feng, A.S. (2006). Voices of the dead: complex nonlinear vocal signals from the larynx of an ultrasonic frog. Journal of Experimental Biology 209, 4984–4993.

Tang, W., Kochubey, O., Kintscher, M., and Schneggenburger, R. (2019). Dopamine in the basal amygdala signals salient somatosensory events during fear learning. BioRxiv 716589.

Terburg, D., Scheggia, D., Triana del Rio, R., Klumpers, F., Ciobanu, A.C., Morgan, B., Montoya, E.R., Bos, P.A., Giobellina, G., van den Burg, E.H., et al. (2018). The Basolateral Amygdala Is Essential for Rapid Escape: A Human and Rodent Study. Cell 175, 723- 735.e16.

Tingley, D., Alexander, A.S., Kolbu, S., de Sa, V.R., Chiba, A.A., and Nitz, D.A. (2014). Task- phase-specific dynamics of basal forebrain neuronal ensembles. Front. Syst. Neurosci. 8.

Todd, R.M., and Anderson, A.K. (2009). Six degrees of separation: the amygdala regulates social behavior and perception. Nature Neuroscience 12, 1217–1218.

Trainor, B.C. (2011). Stress responses and the mesolimbic dopamine system: Social contexts and sex differences. Hormones and Behavior 60, 457–469.

Tye, K.M. (2018). Neural Circuit Motifs in Valence Processing. Neuron 100, 436–452.

Tye, K.M., Stuber, G.D., de Ridder, B., Bonci, A., and Janak, P.H. (2008). Rapid strengthening of thalamo-amygdala synapses mediates cue–reward learning. Nature 453, 1253–1257.

Unal, C.T., Pare, D., and Zaborszky, L. (2015). Impact of Basal Forebrain Cholinergic Inputs on Basolateral Amygdala Neurons. J Neurosci 35, 853–863.

Vale, R., Evans, D.A., and Branco, T. (2017). Rapid Spatial Learning Controls Instinctive Defensive Behavior in Mice. Current Biology 27, 1342–1349.

Valtcheva, S., and Froemke, R.C. (2019). Neuromodulation of maternal circuits by oxytocin. Cell Tissue Res. 375, 57–68.

Vander Weele, C.M., Siciliano, C.A., Matthews, G.A., Namburi, P., Izadmehr, E.M., Espinel, I.C., Nieh, E.H., Schut, E.H.S., Padilla-Coreano, N., Burgos-Robles, A., et al. (2018). Dopamine enhances signal-to-noise ratio in cortical-brainstem encoding of aversive stimuli. Nature 563, 397–401.

Verma, R., Balhara, Y.P.S., and Gupta, C.S. (2011). Gender differences in stress response: Role of developmental and biological determinants. Ind Psychiatry J 20, 4–10.

Viinikainen, M., Kätsyri, J., and Sams, M. (2012). Representation of perceived sound valence in the human brain. Hum Brain Mapp 33, 2295–2305.

Vogel, A.P., Tsanas, A., and Scattoni, M.L. (2019). Quantifying ultrasonic mouse vocalizations using acoustic analysis in a supervised statistical machine learning framework. Scientific Reports 9, 8100.

van Vugt, B., van Kerkoerle, T., Vartak, D., and Roelfsema, P.R. (2020). The contribution of AMPA and NMDA receptors to persistent firing in the dorsolateral prefrontal cortex in working memory. J Neurosci 40, 2458–2470.

Wang, F., and Pereira, A. (2016). Neuromodulation, Emotional Feelings and Affective Disorders. Mens Sana Monogr 14, 5–29.

Wang, F., Yang, J., Pan, F., Ho, R.C., and Huang, J.H. (2020). Editorial: Neurotransmitters and Emotions. Front Psychol 11.

Wang, H., Liang, S., Burgdorf, J., Wess, J., and Yeomans, J. (2008). Ultrasonic vocalizations induced by sex and amphetamine in M2, M4, M5 muscarinic and D2 dopamine receptor knockout mice. PLoS ONE 3, e1893.

Wang, L., Chen, I.Z., and Lin, D. (2015). Collateral pathways from the ventromedial hypothalamus mediate defensive behaviors. Neuron 85, 1344–1358.

Weiner, B., Hertz, S., Perets, N., and London, M. (2016). Social Ultrasonic Vocalization in Awake Head-Restrained Mouse. Front. Behav. Neurosci. 10.

Wenstrup, J.J., Ghasemahmad, Z., Hazlett, E., Shanbhag, S. (2020) The amygdala – a hub of the social auditory brain. In: The Senses: Volume II Audition, B. Grothe, ed. Elsevier. https://doi.org/10.1016/B978-0-12-809324-5.24194-1.

Wesson, D.W., Verhagen, J.V., and Wachowiak, M. (2009). Why sniff fast? The relationship between sniff frequency, odor discrimination, and receptor neuron activation in the rat. J Neurophysiol 101, 1089–1102.

Wiethoff, S., Wildgruber, D., Grodd, W., and Ethofer, T. (2009). Response and habituation of the amygdala during processing of emotional prosody. Neuroreport 20, 1356–1360.

Willuhn, I., Tose, A., Wanat, M.J., Hart, A.S., Hollon, N.G., Phillips, P.E.M., Schwarting, R.K.W., and Wöhr, M. (2014). Phasic dopamine release in the nucleus accumbens in response to pro-social 50 kHz ultrasonic vocalizations in rats. J. Neurosci. 34, 10616– 10623.

Wise, K.K., Conover, M.R., and Knowlton, F.F. (1999). Response of Coyotes to Avian Distress Calls: Testing the Startle-Predator and Predator-Attraction Hypotheses. Behaviour 136, 935–949.

Wöhr, M., and Schwarting, R.K.W. (2007). Ultrasonic Communication in Rats: Can Playback of 50-kHz Calls Induce Approach Behavior? PLOS ONE 2, e1365.

Wong, J.-M.T., Malec, P.A., Mabrouk, O.S., Ro, J., Dus, M., and Kennedy, R.T. (2016). Benzoyl chloride derivatization with liquid chromatography–mass spectrometry for targeted metabolomics of neurochemicals in biological samples. Journal of Chromatography A 1446, 78–90.

Woolf, N.J., and Butcher, L.L. (1982). Cholinergic projections to the basolateral amygdala: A combined Evans Blue and acetylcholinesterase analysis. Brain Research Bulletin 8, 751–763.

Woolley, S.M.N., and Portfors, C.V. (2013). Conserved mechanisms of vocalization coding in mammalian and songbird auditory midbrain. Hear Res 305.

Wright, J.M., Dobosiewicz, M.R.S., and Clarke, P.B.S. (2013). The role of dopaminergic transmission through D1-like and D2-like receptors in amphetamine-induced rat ultrasonic vocalizations. Psychopharmacology (Berl.) 225, 853–868.

Kubota, Y., Leung, E., and Vincent, S.R. (1992). Ultrastructure of cholinergic neurons in the laterodorsal tegmental nucleus of the rat: interaction with catecholamine fibers. Brain Research Bulletin 29, 479–491.

Yang, M., Augustsson, H., Markham, C.M., Hubbard, D.T., Webster, D., Wall, P.M., Blanchard, R.J., and Blanchard, D.C. (2004). The rat exposure test: a model of mouse defensive behaviors. Physiology & Behavior 81, 465–473.

Ydenberg, R.C., and Dill, L.M. (1986). The Economics of Fleeing from Predators. In Advances in the Study of Behavior, J.S. Rosenblatt, C. Beer, M.-C. Busnel, and P.J.B. Slater, eds. (Academic Press), pp. 229–249.

Yizhar, O., and Klavir, O. (2018). Reciprocal amygdala–prefrontal interactions in learning. Current Opinion in Neurobiology 52, 149–155.

Zaborszky, L., Pang, K., Somogyi, J., Nadasdy, Z., and Kallo, I. (1999). The Basal Forebrain Corticopetal System Revisited. Annals NY Acad Sci 877, 339–367.

Zaborszky, L., van den Pol, A., and Gyengesi, E. (2012). Chapter 28 - The Basal Forebrain Cholinergic Projection System in Mice. In The Mouse Nervous System, C. Watson, G. Paxinos, and L. Puelles, eds. (San Diego: Academic Press), pp. 684–718.

Zald, D.H. (2003). The human amygdala and the emotional evaluation of sensory stimuli. Brain Res. Brain Res. Rev. 41, 88–123.

Zaldivar, A., and Krichmar, J.L. (2013). Interactions between the neuromodulatory systems and the amygdala: exploratory survey using the Allen Mouse Brain Atlas. Brain Struct Funct 218, 1513–1530.

Zhang, X., and Li, B. (2018). Population coding of valence in the basolateral amygdala. Nature Communications 9, 1–14.

Zhang, J., Muller, J.F., and McDonald, A.J. (2013a). Noradrenergic innervation of pyramidal cells in the rat basolateral amygdala. Neuroscience 228, 395–408.

Zhang, W., Schneider, D.M., Belova, M.A., Morrison, S.E., Paton, J.J., and Salzman, C.D. (2013b). Functional Circuits and Anatomical Distribution of Response Properties in the Primate Amygdala. J. Neurosci. 33, 722–733.

Zimmermann, E., Leliveld, L., and Schehka, S. (2013). Toward the evolutionary roots of affective prosody in human acoustic communication: a comparative approach to mammalian voices. In Evolution of Emotional Communication: From Sounds in Nonhuman Mammals to Speech and Music in Man (Oxford University Press), pp. 116–132.

Zuberbühler, K., Jenny, D., and Bshary, R. (1999). The predator deterrence function of primate alarm calls. Ethology 105, 477–490.
