Sound Localization Behavior in Drosophila

Citation Batchelor, Alexandra Victoria. 2019. Sound Localization Behavior in Drosophila. Doctoral dissertation, Harvard University, Graduate School of Arts & Sciences.

Citable link http://nrs.harvard.edu/urn-3:HUL.InstRepos:41121298

Terms of Use This article was downloaded from Harvard University’s DASH repository, and is made available under the terms and conditions applicable to Other Posted Material, as set forth at http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#LAA

Sound localization behavior in Drosophila

A dissertation presented

by

Alexandra Victoria Batchelor

to

The Division of Medical Sciences

in partial fulfillment of the requirements

for the degree of

Doctor of Philosophy

in the subject of

Neurobiology

Harvard University

Cambridge, Massachusetts

September 2018

© 2018 Alexandra Victoria Batchelor

All rights reserved.

Dissertation Advisor: Dr. Rachel I. Wilson

Alexandra Victoria Batchelor

Sound localization behavior in Drosophila

ABSTRACT

Drosophila melanogaster hear with their antennae: sound evokes vibration of the distal antennal segment, and this vibration is transduced by specialized cells. The left and right antennae vibrate preferentially in response to sounds arising from different azimuthal angles.

Therefore, by comparing signals from the two antennae, it should be possible to obtain information about the azimuthal angle of a sound source. However, behavioral evidence of sound localization has not been reported in Drosophila. Here we show that walking Drosophila do indeed turn in response to lateralized sounds. We confirm that this behavior is evoked by vibrations of the distal antennal segment. The rule for turning is different for sounds arriving from different locations: Drosophila turn toward sounds in their front hemifield, but they turn away from sounds in their rear hemifield, and they do not turn at all in response to sounds from 90° or -90°. All these findings can be explained by a simple rule: the fly steers away from the antenna with the larger vibration amplitude. Finally, we show that these behaviors generalize to sound stimuli with diverse spectro-temporal features, and that these behaviors are found in both sexes. Our findings demonstrate the behavioral relevance of the antenna’s directional tuning properties. They also pave the way for investigating the neural implementation of sound localization, as well as the potential roles of sound-guided steering in courtship and exploration.

TABLE OF CONTENTS

Abstract ...... iii
Table of Contents ...... iv
List of Figures ...... v
List of Abbreviations ...... vi
Acknowledgements ...... vii
Chapter 1: Introduction ...... 1
Why study sound localization in Drosophila? ...... 1
A review of sound localization in vertebrates ...... 3
A review of sound localization in invertebrates ...... 10
Background on the Drosophila auditory system ...... 16
References ...... 21
Chapter 2: Sound localization behavior in Drosophila depends on inter-antenna vibration amplitude comparisons ...... 25
Introduction ...... 25
Materials and Methods ...... 29
Results ...... 38
Discussion ...... 62
References ...... 68
Chapter 3: Conclusions and Future Directions ...... 74
A discussion of open questions about sound localization ...... 74
Future behavioral experiments ...... 78
References ...... 81
Appendix A: Supplementary Figures for Chapter 2 ...... 82
References ...... 90

LIST OF FIGURES

Figure 1: Directional tuning of sound-evoked antennal vibrations ...... 28

Figure 2: Lateralized sounds elicit phonotaxis as well as acoustic startle ...... 39

Figure 3: Trial-to-trial variation in phonotaxis behavior...... 42

Figure 4: Phonotaxis requires vibration of the distal antennal segment...... 45

Figure 5: Turning is contralateral to the antenna with larger vibrations...... 48

Figure 6: Lateralized sounds arriving from the back elicit negative phonotaxis...... 51

Figure 7: Sounds from any of the four cardinal directions elicit no phonotaxis...... 55

Figure 8: Phonotaxis generalizes to sounds with diverse spectro-temporal features...... 58

Figure 9: Both males and females display phonotaxis...... 61

Figure S1: Speaker calibrations...... 83

Figure S2: Distribution of forward velocities...... 85

Figure S3: Additional single-trial examples of phonotaxis and acoustic startle behavior...... 86

Figure S4: Trial-to-trial variation in forward and lateral velocity are not strongly correlated. .... 88

LIST OF ABBREVIATIONS

ILD – Interaural level difference

IPD – Interaural phase difference

ITD – Interaural time difference

LSO – Lateral superior olive

VLVp – Ventral nucleus of the lateral lemniscus

MSO – Medial superior olive

NL – Nucleus laminaris

JONs – Johnston’s Organ Neurons

AMMC – Antennal mechanosensory and motor center

WED – Wedge

ACKNOWLEDGEMENTS

This work would not have been possible (and the last five years would have been a lot less fun) without the support I’ve had from friends, family and colleagues.

Thank you to Rachel for working so hard to support my work in the lab. You have taught me many important lessons: from knowing the importance of ‘good enough’ to how much you can learn from small details in your data.

Thank you to the current and former members of the Wilson Lab. It has been such a pleasure to work with all of you. Special thanks to Stephen Holtz for answering thousands of my questions - I will dearly miss our Friday Night Comedy chat and coffee breaks. Thanks to Allison Baker for your friendship and mentorship, to Alexandra Moore for late night conversations, to Asa Barth-Maron and Paola Patella for your thoughtful friendship, to Yvette Fisher for your endless enthusiasm, to Jenny Lu for putting up with my chatter, to Michael Marquis for going out of your way to improve lab operations, to Tatsuo Okubo for lots of great conversations about books, to Sasha Rayshubskiy for making fun of me, and to Helen Yang for your wise advice. Thank you to the past members of the Wilson Lab (Tony Azevedo, Joe Bell, Mehmet Fisek, Betty Hong, Jamie Jeanne, Wendy Liu, Kathy Nagel, Willie Tobin and John Tuthill) for your mentorship when I didn’t know anything.

Thank you to my friends in the Program in Neuroscience. Thank you to Hannah Somhegyi for being an incredible friend and always being there at the right time. Thank you to Stephen Thornquist for your support and encouragement. Comp club was a great learning experience for me, so thank you to the Harvey lab members, Emma Krause and Allie for making it happen. Special thanks to all of the PiN alumni who helped me find a job.

Thank you to all the members of my DACs over the years – Ben de Bivort, Bernardo Sabatini, Mike Crickmore, Nao Uchida, Roz Segal and David Ginty – for taking the time to provide me with scientific advice. Special thanks to Roz and David for making this happen. Thank you to Ben and the other members of my exam committee – Aravi Samuel, Dan Polley, and Steve Flavell – for taking the time to read my dissertation and provide me with feedback.

Thank you to Karen for being there when I needed a break from science - I will miss our conversations. Thank you to Pavel and Ofer for helping me with many technical challenges. Thank you to John (from the machine shop) for making a lot of speaker holders. Thank you to Alan (from facilities) for your friendly greetings in the hallways – even when you’re working at 11pm on a Friday night.

Thank you to the Kennedy Memorial Trust and Boehringer Ingelheim Fonds for funding and providing a wonderful community.

Thank you to the friends I’ve made in Cambridge. Thank you to Jen, Emily and Sonia for being the perfect roommates. Thank you to Dima Ayyash, Cesar Echavarria, Carlo Amadei, Armin Schoech, Ivana Cvijović, Sam Sinai, and Cristina Aggazzotti – for at least one hundred fun dinners.

Thank you to Hayley Fenn and Clara Hungr for so many fun times and long conversations. Thank you to the Nashton folks for putting up with me as a part-time roommate, the bike gang for lots of adventures and the Insight fellows for your support over the last few months.

Thank you to my friends back home – Tor, Hannah, Tom, James, Gaby, Pip, and Jim – for still being there despite my sporadic texting.

Thank you to David Ding for encouraging me to be adventurous, for making me think hard, and making me laugh.

Thank you to Helen Bowns and Stann, Mark and Emma Chonofsky for being my American family. Stann helped me feel so at home when I first moved to the U.S. – I’ll never forget that.

Thank you to my extended family for lots of fun times when we get together.

Thank you to my wonderful Grandma and Grandpa for your love and for being so interested in what I’m doing. I always think of you, Grandpa, when I’m writing because I know you’ll always read my work. The hardest part of the last five years was saying goodbye to Grandma. I miss her every day and probably always will.

Thank you to my Mum and Dad for your unconditional love and for working so hard to help me and Will in every way that you can. It’s hard to express how much I appreciate everything you’ve given us.

Finally, thank you to my little brother, Will. The worst part about living in the U.S. for the last five years was not getting to spend more time with you.

In Memory of

My Wonderful Grandma

Doris Wright

CHAPTER 1: INTRODUCTION

This thesis describes the work I have done to understand sound localization behavior in Drosophila melanogaster. In this introduction, I first discuss why this is a worthwhile topic of study. I then review the sound localization literature in vertebrates and invertebrates. Finally, I review the Drosophila audition literature. Throughout this review, I aim to highlight the open questions that this study addresses or may help to address in the future.

Why study sound localization in Drosophila?

The ability to localize sounds is evolutionarily important

Many animals use sounds to localize prey or avoid predators. Localizing sounds can also help to separate sounds coming from different sources – which can be useful when trying to have a conversation with someone at a cocktail party (Grothe et al., 2010). Given the benefits that animals gain from being able to localize sounds, it seems worthwhile to understand the diversity of algorithms that animals use to localize sounds and also how these algorithms are implemented. This understanding may also help us develop sound localization algorithms for technology such as virtual assistants.

Locating a sound involves canonical computations

One of the main ways that animals locate sounds is by comparing inputs from their two ears. Comparing two or more values is a common computation that brains perform. For example, animals determine the color of an object by comparing the activity of photoreceptors that detect different wavelengths of light (Solomon and Lennie, 2007). They determine the identity and concentration of an odor by comparing the activity of one type of olfactory receptor to the activity of other types (Olsen et al., 2010). And they may select actions by comparing the expected value of different outcomes (Rangel et al., 2008). The ubiquity of comparisons makes understanding their implementation a worthwhile pursuit, and by studying sound localization we may be able to answer some of the general open questions regarding comparisons. These questions include: how does comparison of two or more values work over a wide range of values? What is the range of possible mechanisms that can implement a comparison? Are these mechanisms specialized for particular types of comparison? What is the role of each brain hemisphere in comparisons that involve both the right and left sides of the body?

Why study sound localization in Drosophila?

The physics of sound means that many animals must use the same acoustic cues to localize sounds and must perform the same computation to calculate sound location from these cues. This means that by studying sound localization in multiple animals we can start to understand the range of possible mechanisms that can implement a particular computation.

It may also be easier to answer open questions in sound localization in the fruit fly than in other species. This is because in the fruit fly there are currently large collections of genetic lines that can be used to stereotypically label small populations of neurons (Gohl et al., 2011; Jenett et al., 2012; Tirian and Dickson, 2017). This allows us to record and manipulate the activity of the same neurons across flies. Additionally, a complete electron microscopy volume of the fruit fly brain has now been acquired (Zheng et al., 2018). And a complete connectome of this brain is likely to be available in the next 5 to 10 years. These tools will make it easier to find and study the neural circuits that implement sound localization.

However, before we can use these tools to study the neural implementation of sound localization and comparisons in general, we must first understand whether and how Drosophila localize sounds. This is the goal of this thesis.

A review of sound localization in vertebrates

Acoustic cues for sound localization

There are several parameters of a sound wave that animals could measure. An animal could measure the air pressure changes associated with a sound, the air pressure gradient, or the velocity of the air particles. The majority of work on sound localization has been done on mammals and birds. These species have tympanal ears that detect sound pressure. Tympanal ears consist of a membrane, or eardrum, which lies over an air-filled cavity. When a sound arrives at the ear, pressure changes on the outer side of the membrane cause the membrane to vibrate, and these vibrations are propagated to mechanosensory cells whose activity is dependent on parameters of the vibration such as frequency and level. These signals then propagate to the brain where inputs from the two ears are compared.

As a sound moves past the head, diffraction causes the sound to be attenuated so that the pressure oscillations at the ear contralateral to the sound source have a smaller amplitude than oscillations at the ipsilateral ear. This creates an interaural level difference (ILD). Sounds at the ipsilateral ear arrive earlier and have an ongoing phase lead. This is known as an interaural phase difference (IPD). We will use the term interaural time difference (ITD) to refer to both onset time differences and IPDs.

By placing tiny microphones in the ear canals of mammals or birds, it has been shown that these cues vary with the angle of the sound source (Middlebrooks, 2015). And so, by measuring the value of these cues, mammals and birds could in principle infer the location of a sound. ILD, onset time differences and IPD can be independently varied by playing sounds through headphones. Experiments using this technique have shown that mammals and birds use ILDs to locate high frequency sounds and IPDs to locate low frequency sounds (Grothe et al., 2010; Macpherson and Middlebrooks, 2002). ILDs cannot be used to locate low frequency sounds because at low frequencies there is little attenuation of the sound by diffraction around the head and so the ILDs are very small. IPDs cannot be used at very high frequencies because neurons cannot phase lock to very high frequency sounds; i.e. at high frequencies the instantaneous activity of any neuron cannot depend on the phase of a sound wave. Phase locking is necessary to be able to detect phase differences in sounds arriving at the two ears. Whether and for which frequencies mammals and birds use onset times to locate sounds seems to have received much less attention (Bernstein, 2001).

Most mammals and birds use ILDs and ITDs to locate sounds in the azimuthal plane (left vs. right). Barn owls are an exception. They have asymmetrical ears: the left ear points downwards and the right ear points upwards. This means that level differences at the two ears depend on the elevation (up vs. down) of the sound while timing differences depend on the azimuth of the sound. So, owls use ILDs to locate a sound in elevation and ITDs to locate a sound in azimuth (Knudsen and Konishi, 1979).

This thesis focuses on the use of binaural cues for sound localization, where a comparison of the parameters of the sound arriving at each ear is needed to localize a sound. However, it is worth noting that monaural cues can also be used to localize sounds. Spectral cues are an example of monaural cues: the head and pinnae differentially filter broadband sounds depending on the location of the sound source. Filtering introduces peaks and notches in the sound spectrum, and so it is possible to locate a sound by analyzing the spectrum of the sound (Middlebrooks, 2015). Monaural spectral cues are less reliable than binaural cues for locating sounds in the azimuthal plane, partly because the spectrum of the sound at the source can vary too. So, people with two functional ears place more weight on binaural cues when locating sounds in the azimuthal plane (Macpherson and Middlebrooks, 2002; Middlebrooks, 2015). However, people who are congenitally deaf in one ear can sometimes locate sounds, and it is assumed that they are using spectral cues to do so (Slattery and Middlebrooks, 1994; Middlebrooks, 2015). And people with two functional ears do use monaural cues to distinguish sound sources that are located at the same azimuthal angle but differ in their elevation, and also to resolve front-back ambiguity, because there are locations in the front and back hemifields where binaural cues are identical (Middlebrooks, 2015).

In the next section we will continue our discussion of binaural cues and discuss how comparisons of the sounds arriving at the two ears are implemented.

Neural mechanisms for implementing comparisons

The circuits that compute the ILD and ITD to infer sound location are similar in birds and mammals, despite the fact that they are likely to have evolved independently (Grothe et al., 2010). Computing the ILD or ITD requires ‘comparator neurons’ that compare inputs from the left and right ears. In both mammals and birds, ILDs and ITDs are processed in mostly separate parallel pathways and there are comparator neurons in each of these pathways.

Processing of ILDs

The lateral superior olive (LSO) in mammals and the ventral nucleus of the lateral lemniscus (VLVp) in birds are the first brain areas in the auditory pathway which are highly sensitive to ILDs. These brain areas receive excitatory input from the ipsilateral ear and inhibitory input from the contralateral ear. Postsynaptic neurons integrate these inputs so their firing rate is a sigmoidal function of ILD. Different neurons in the population receive different levels of inhibition so they are most sensitive to different ILDs (Konishi, 1993). Since ILDs vary with the location of a sound source, this makes different neurons most sensitive to sounds coming from different locations.
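This integration scheme can be made concrete with a minimal numerical sketch (Python; the parameter values are hypothetical and this is not a biophysical model of the LSO or VLVp). Each model neuron's firing rate is a sigmoidal function of ILD, and different amounts of contralateral inhibition shift the sigmoid so that different neurons are most sensitive to different ILDs:

import math

def comparator_rate(ild_db, inhibition_offset_db, slope=1.0, max_rate=100.0):
    """Firing rate (spikes/s) as a sigmoidal function of ILD (ipsi minus contra, dB)."""
    drive = slope * (ild_db - inhibition_offset_db)  # net excitation minus inhibition
    return max_rate / (1.0 + math.exp(-drive))

# Neurons receiving different levels of inhibition tile a range of ILDs,
# so the population as a whole can represent sound source location.
population_offsets_db = [-10, -5, 0, 5, 10]  # hypothetical offsets
for ild in (-15, 0, 15):
    rates = [comparator_rate(ild, off) for off in population_offsets_db]
    print(f"ILD = {ild:+3d} dB -> rates: {[round(r) for r in rates]}")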

Processing of ITDs

ITDs are first processed in the medial superior olive (MSO) in mammals and the nucleus laminaris (NL) in owls. These areas receive excitatory inputs from both ears. These inputs have spikes that are phase-locked to the sound stimulus. MSO and NL neurons are thought to act as coincidence detectors: they are most likely to fire when their inputs from the two sides are coincident. The prevailing model is that different neurons in the MSO and NL populations are sensitive to different ITDs because their inputs from the left and right ears have different delays. For example, imagine a sound source that is 90º to an animal’s right. Sounds from this source might arrive at the right ear about 700µs before they arrive at the left ear (i.e. ITD = 700µs). If a neuron in the MSO always receives its input from the right ear 700µs after it receives its input from the left ear, incoming spikes from the two ears will be coincident and the MSO neuron is likely to fire. If the ITD is not 700µs, incoming spikes will not be coincident and the neuron will not fire. In this way, the neurons in the MSO and NL become tuned to different regions of space (Grothe et al., 2010; Konishi, 1993; Konishi, 2006).
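This delay-line scheme is straightforward to express computationally. Below is a toy sketch (Python; the stimulus parameters are hypothetical) of a bank of coincidence detectors, each applying a different internal delay to the input from one ear. With phase-locked inputs, the detector whose internal delay compensates the true ITD receives coincident inputs and responds most strongly, which amounts to finding the peak of a cross-correlation:

import numpy as np

fs = 100_000                      # sample rate (Hz)
t = np.arange(0, 0.05, 1 / fs)    # 50 ms of signal
freq = 500                        # a low frequency that neurons can phase-lock to
itd = 300e-6                      # true ITD: sound leads at the left ear by 300 us

left = np.sin(2 * np.pi * freq * t)
right = np.sin(2 * np.pi * freq * (t - itd))

candidate_delays = np.arange(-700e-6, 701e-6, 100e-6)  # internal delays (s)
responses = []
for d in candidate_delays:
    shift = int(round(d * fs))
    # Delay the left-ear input by d, then measure coincidence (correlation).
    delayed_left = np.roll(left, shift)
    responses.append(np.dot(delayed_left, right))

best = candidate_delays[int(np.argmax(responses))]
print(f"Detector with internal delay {best * 1e6:.0f} us responds most (true ITD: 300 us)")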

There is some debate about how delays are created. In the “Jeffress model”, developed by the 20th century psychologist Lloyd Jeffress, delays are created by the inputs to the MSO and NL having different length axons. There is empirical support for this model in owls (Carr and Konishi, 1990) but in mammals the mechanism for creating delays is hotly debated (Grothe et al., 2010). One alternative mechanism is asymmetric precisely-timed inhibition that delays input to MSO neurons, but only for input from one ear. If inhibitory input arrives slightly before excitatory input and the resulting IPSP is only transient, then the EPSP can be delayed without being completely blocked. This model has received recent support (Brand et al., 2002; Pecka et al., 2008) and may explain why the MSO receives very fast and precisely timed inhibitory input.

In support of another alternative model, recent work has shown that the peak spike rate in MSO neurons is not necessarily achieved by the EPSPs, evoked by inputs to the left and right ears, arriving exactly coincidentally. Instead, maximal spike rate is reached, on average, when ipsilateral EPSPs precede contralateral EPSPs (Franken et al., 2015). The authors provide evidence that intrinsic conductances create an internal delay within MSO neurons, allowing non-coincident EPSPs to evoke maximal spiking (Franken et al., 2015).

Interestingly, the MSO and LSO receive their inhibitory input from the same cells. This means that the inhibitory input to the LSO is also very fast and temporally precise. The speed of inhibitory inputs may allow inhibition and excitation to arrive simultaneously despite the fact that there is an extra neuron in the inhibitory pathway (Grothe et al., 2010). The temporal precision is likely to be useful in the processing of ITDs for amplitude modulated sounds (Joris et al., 2004).

Comparison of amplitude modulated sounds

As mentioned earlier, ITDs cannot be used to locate sound sources emitting high frequency pure tones because neurons cannot phase lock to these high frequencies. However, if a high frequency sound is amplitude-modulated at a low enough frequency, neurons can phase lock to the modulation wave. And indeed, psychophysical experiments confirm that humans can use ITDs to locate high frequency sounds as long as those sounds are amplitude modulated (Grothe et al., 2010; Joris et al., 2004). In mammals, the ITD for amplitude modulated sounds is computed in both the MSO and the LSO. Computation of the ITD in the MSO works by coincidence detection in the same way as it does for purely low frequency sounds. In the LSO, ITD is computed by subtraction, as opposed to a multiplicative mechanism like coincidence detection (Joris et al., 2004; Tollin and Yin, 2005). This relies on the fact that LSO neurons receive excitatory input from one ear and inhibitory input from the other. When inputs from the two ears are perfectly in-phase, the two inputs will cancel each other to some extent and the spike rate of LSO neurons will be relatively low. When the inputs are maximally out-of-phase, inhibition will not cancel excitation and the spike rate of LSO neurons will be relatively high. Note that the tuning of LSO neurons to ITD is opposite to that of MSO neurons: LSO neurons fire when their inputs are maximally out-of-phase, while MSO neurons fire when their inputs are in-phase. The possible benefits, if any, of this opponent code have not yet been explored.
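The contrast between the subtractive and multiplicative schemes can be illustrated with a short numerical sketch (Python; all values are hypothetical and the "rates" are arbitrary units). Subtracting one ear's rectified envelope from the other's gives a high output when the envelopes are out of phase, while multiplying them, as in coincidence detection, gives a high output when they are in phase, reproducing the opponent tuning just described:

import numpy as np

t = np.linspace(0, 0.1, 10_000)
mod_freq = 100  # amplitude-modulation rate (Hz)

def envelope(phase):
    # Half-wave-rectified modulation envelope (stands in for phase-locked drive).
    return np.maximum(0, np.sin(2 * np.pi * mod_freq * t + phase))

for label, phase_diff in [("in phase", 0.0), ("out of phase", np.pi)]:
    exc = envelope(0.0)            # excitatory drive from one ear
    other = envelope(phase_diff)   # drive from the other ear
    lso_rate = np.mean(np.maximum(0, exc - other))  # subtraction, then rectify
    mso_rate = np.mean(exc * other)                 # multiplication (coincidence)
    print(f"{label:>12}: LSO-like rate = {lso_rate:.3f}, MSO-like rate = {mso_rate:.3f}")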

What is the role of each brain hemisphere?

Our current knowledge of the ILD and ITD processing pathways suggests that the pathways are bilaterally symmetrical; there are the same brain areas and circuits in both hemispheres. Why have two copies of the same circuitry? One possibility is that each hemisphere processes ILDs and ITDs corresponding to one hemifield of space. This seems to be the case for ITD processing in the barn owl: the left hemisphere mainly processes ITDs corresponding to sound sources on the right side of the body (Grothe et al., 2010). Alternatively, both hemispheres may process ILDs and ITDs corresponding to sounds coming from any region of space. This seems to be the case for ITD and ILD processing in mammals (Grothe et al., 2010).

Computing the same ITDs and ILDs in both hemispheres seems somewhat redundant. One reason to have redundant processing is to enable the removal of noise. Stochastic fluctuations in neural activity reduce the ability of a neuron to encode sound location. If noise is correlated across hemispheres and comparator neurons have opposite tuning, noise can be removed by having mutual inhibition across hemispheres. It is not known whether there are interhemispheric interactions between binaural processing regions in the mammalian brain. This is a difficult problem to tackle because the populations of neurons processing binaural cues are large, so it is difficult to find interactions between neurons involved in processing sounds from similar locations.
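The noise-removal idea can be illustrated with a short statistical sketch (Python; this demonstrates the principle only and is not a model of any measured circuit). Two opponent channels carry equal-and-opposite signal plus noise that is largely shared across hemispheres; subtracting one channel from the other, which mutual inhibition approximates, cancels the shared noise while doubling the signal:

import numpy as np

rng = np.random.default_rng(0)
n_trials = 10_000
signal = 1.0                                  # encodes sound location

shared_noise = rng.normal(0, 1.0, n_trials)   # correlated across hemispheres
left = +signal + shared_noise + rng.normal(0, 0.1, n_trials)
right = -signal + shared_noise + rng.normal(0, 0.1, n_trials)  # opposite tuning

print(f"single-channel noise (std): {left.std():.2f}")
print(f"after subtraction, noise: {(left - right).std():.2f} on a signal of {2 * signal}")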

How are comparisons made over a wide range of auditory stimuli?

Sounds vary not only in their source location, but also in their frequency spectrum, intensity, and temporal pattern (Grothe et al., 2010). How do animals locate sounds in a way that is invariant to other parameters of the sound? One solution is to have many different ‘frequency channels’, with each channel containing many neurons which are collectively dedicated to encoding ITDs at a particular frequency. Within each frequency channel, different neurons have different preferred ITDs. Sound location can be determined by which neuron in the population is active. This seems to be the case in owls (Grothe et al., 2010). In mammals, the best delay of MSO neurons, i.e. the ITD that evokes the peak action potential rate, varies with frequency (Day and Semple, 2011). So, frequency invariance must be achieved downstream of MSO neurons. In the LSO, the ILD that a neuron responds to can depend on other parameters, such as the intensity of the sound at the source (Tsai et al., 2010). It has been hypothesized that sound source location is determined by comparing the activity of the same LSO neurons in each hemisphere (Grothe et al., 2010). This hypothesis has not yet been tested.

Open questions

In summary, the main open questions in mammalian sound localization are: how do neurons acquire sensitivity to binaural timing differences? What is the role of inhibitory inputs in binaural comparisons? How much are ITD and ILD processing pathways really segregated? How does the brain handle binaural comparisons over a wide range of stimulus frequencies and intensities? And, finally, what is the role of interhemispheric interactions: do they reduce noise, or do they play a role in producing invariance?

A review of sound localization in invertebrates

Sound localization has also been studied in several invertebrate species, including several Orthopteran species (crickets, bushcrickets, grasshoppers and locusts) and the fly Ormia ochracea. Orthopterans use sound to locate mates; either a male or female sings a song, depending on the species, and an individual of the opposite sex recognizes the song and locates the source. Ormia is a parasitoid fly which lays its larvae in a live cricket host. It finds these hosts by locating the source of their song.

Acoustic cues for sound localization

Like mammals and birds, these species have two tympanal ears and so they could also locate sounds using binaural cues. However, their small size means that even the largest ILDs and ITDs are very small and there is little variation in ILDs and ITDs with sound source location. The maximum ITD is ~30µs in Orthopterans and only ~1.5µs in Ormia, compared with ~700µs in humans (Grothe et al., 2010; Mason et al., 2001; Robert, 2005).

Although binaural differences are necessarily small in small animals, insects are nevertheless able to use binaural cues to lateralize or localize sounds. Localization is defined here as a turning behavior where the angle of the turn depends linearly on the angle of the sound source (the “target angle”). Lateralization is defined here as a turning behavior where the turning direction is matched to the target direction, but the turning angle is relatively invariant to the target angle. Notably, lateralization can still be used to drive accurate navigation, because lateralization behavior will cause the animal to zig-zag towards the sound source.
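This zig-zag argument can be checked with a minimal simulation (Python; the fixed turn size and step length are arbitrary assumptions, not measured values). The simulated animal senses only which side of its midline the source is on and turns a fixed amount toward that side, yet it still closes in on the source:

import math

source = (0.0, 100.0)
x, y, heading = 0.0, 0.0, math.radians(60)  # start heading well off-target
turn = math.radians(20)                     # fixed turn magnitude (lateralization)
step = 2.0

for _ in range(60):
    bearing = math.atan2(source[0] - x, source[1] - y)      # angle to source
    error = (bearing - heading + math.pi) % (2 * math.pi) - math.pi
    heading += turn if error > 0 else -turn                 # direction only, not magnitude
    x += step * math.sin(heading)
    y += step * math.cos(heading)

dist = math.hypot(source[0] - x, source[1] - y)
print(f"final distance to source after 60 steps: {dist:.1f} (started at 100.0)")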

Localization or lateralization behaviors can be quite accurate in invertebrates. The parasitoid fly Ormia can localize sound sources that are only 2° apart, which corresponds to an ITD of only 50ns (Mason et al., 2001). Crickets also accurately localize sounds (Hennig et al., 2004). Grasshoppers do not localize sounds, but they are quite accurate at lateralization (Helversen and Helversen, 1997). When the target is only 10° away from the midline, grasshoppers turn towards the correct side almost 100% of the time (Hennig et al., 2004).

Mechanical pre-processing enhances interaural differences

Insects are able to accurately localize or lateralize sounds, despite the small size of ILDs and ITDs, because they have anatomical adaptations that amplify differences in binaural cues. One of these adaptations turns the auditory receiver into a pressure difference receiver by having an air tube that connects the cavities behind the two tympanal membranes. Sounds can reach each tympanal membrane either directly from the outside, or indirectly by travelling across the opposite tympanal membrane and through the air tube. The sound that travels indirectly is attenuated and delayed, and so this mechanism accentuates ILDs and ITDs. In addition, interference of the sound waves on each side of the membrane may further emphasize differences (Robert, 2005). Another variant of a pressure difference receiver is a tube that connects each air cavity to the outside environment, which provides a second pathway for sound to reach the inside of the tympanum. In Ormia, the tympana are connected not by an air tube, but rather by a mechanical linkage. This linkage amplifies maximum ITDs from 1.5µs to 50-60µs and ILDs from virtually 0dB to 3-12dB (Robert, 2005).

Which cues are used for sound localization/lateralization?

One downside of studying sound localization in Orthopterans or Ormia is that it is not possible to independently stimulate the ears in behaving animals. This makes it difficult to determine to what extent these animals use ILDs vs. ITDs to locate sounds. This problem has been somewhat overcome by placing speakers 90° to the left and right of the animal so that each speaker mostly stimulates only the ipsilateral ear. These experiments have shown that grasshoppers lateralize sounds with 75% accuracy with ITDs of only 0.4ms and ILDs of 0.6dB (Hedwig and Pollack, 2008; Helversen and Rheinlaender, 1988). Crickets are less sensitive to ITDs (they only start turning for ITDs > 4ms) but exhibit reliable turning for an ILD of 1-2dB (Hedwig and Pollack, 2008; Hedwig and Poulet, 2005). These ILDs and ITDs are larger than normal acoustic binaural differences. However, the ILDs are within the range expected after mechanical pre-processing (Hedwig and Pollack, 2008). These ITDs are outside the range expected even after mechanical pre-processing.

These results suggest that Orthopterans use ILDs rather than ITDs to localize/lateralize sounds. Nonetheless, spike timing differences may still be important for decoding sound location because they can be created by ILDs. In grasshoppers, receptor neurons spike with a shorter latency for higher intensity sounds. Latency differences of up to 6 ms are possible via this mechanism (Mörchen et al., 1978).

Neural mechanisms for implementing comparisons

In Orthopterans, much progress has been made in understanding the neural mechanisms for comparing binaural cues. This is due in part to the fact that Orthopterans use fewer neurons than vertebrates to localize or lateralize sounds. Another reason is that it is easier in Orthopterans than in vertebrates to record and manipulate the activity of individual neurons involved in localizing sounds in behaving animals (Schildberger and Hörner, 1988).

Processing of ILDs

Increasing the intensity of a sound increases the spike rate and decreases spike latency in auditory receptor afferents (Mörchen et al., 1978). Thus, ILDs can be determined by comparing either the spike rate or the latency of left and right afferents.

An example of using spike rate to encode ILD comes from a pair of comparator neurons, known as AN1 neurons, which are present in many insects and receive ipsilateral excitation and contralateral inhibition1. The spike rate of AN1 neurons depends on ILD in a graded manner, so that AN1 neurons fire most in response to sounds coming from the ipsilateral side (Horseman and Huber, 1994). AN1 neurons have been shown to have a causal role in sound localization behavior. Normally, crickets walking on a spherical treadmill turn towards a speaker playing cricket song. However, hyperpolarizing the AN1 neuron ipsilateral to the speaker, using an intracellular electrode, causes crickets to turn away from the speaker (Schildberger and Hörner, 1988). The inhibitory input to AN1 comes from the contralateral side, from a neuron called ON1. Inactivating ON1 by laser ablation or hyperpolarization removes the inhibitory input to AN1 and reduces the tendency of crickets to walk towards a sound source ipsilateral to that ON1 (Faulkes and Pollack, 2000; Schildberger and Hörner, 1988; Selverston et al., 1985).

1 The AN1 cell body happens to be on the contralateral side compared with its dendrites and axon. AN1 receives excitation from the side ipsilateral to its dendrites and axon and inhibition from the side contralateral to them.

An example of using spike timing to encode ILD comes from a study by Rheinlaender and Mörchen (1979). They found a pair of interneurons in the grasshopper that compare spike timing. These interneurons were known to receive ipsilateral excitation and contralateral inhibition, like neurons in the mammalian LSO. Two piezoelectric actuators were used to independently stimulate each ear. Delivering sounds that were identical in intensity at the two sides but differed in timing of onset showed that the interneurons are very sensitive to small ITDs. If the contralateral stimulus preceded the ipsilateral stimulus by as little as 2ms, ipsilateral interneuron spiking was completely inhibited. On the other hand, if the ipsilateral stimulus preceded the contralateral stimulus by 2ms, the ipsilateral interneuron spiked at almost maximal rate. This all-or-nothing spiking is consistent with the fact that grasshoppers lateralize rather than localize sounds.

Currently less is known about the circuitry mediating sound localization in Ormia. So far the only neurons to have been recorded from are the auditory afferents (Mason et al., 2001; Oshinsky and Hoy, 2002). As in Orthopterans, the latency to spike in afferents is dependent on sound intensity. Unlike Orthopterans, Ormia afferents fire only one spike in response to an auditory stimulus, irrespective of intensity. Therefore sound location is encoded purely by the relative timing of spikes in left and right auditory afferents. Afferent neurons have several adaptations that make them very effective at transmitting this temporal code: they have a low spontaneous firing rate and low spike jitter (Mason et al., 2001; Oshinsky and Hoy, 2002; Robert, 2005). It will be interesting to see whether the comparator neurons in Ormia, if they exist, are sensitive to timing through a coincidence detector mechanism like in mammals, or alternatively through having precisely timed excitation and inhibition like in grasshoppers.

What is the role of each brain hemisphere? Are there interactions between the two brain hemispheres?

Like in mammals, the circuits processing ILDs in Orthopterans are bilaterally symmetric, and neurons in each hemisphere seem to process sounds coming from any region of space. Interestingly, the ON1 neurons in each hemisphere mutually inhibit each other (Selverston et al., 1985). This interaction is thought to enhance differences in the AN1 neurons that receive inhibition from ON1 neurons and that send information about ILD to the brain. However, since ON1 neurons have opposite tuning, it may also play a role in removing correlated noise between ON1 neurons and thus preventing this noise from being transmitted to AN1 neurons. It has been suggested that there are yet more comparators in the brain that compare the activity of the two AN1 neurons. This was suggested because hyperpolarization of AN1 seems to only cause errors in phonotaxis if AN1 is hyperpolarized beyond the level of activity expected in the opposite AN1 (Schildberger and Hörner, 1988).

How are comparisons made over a wide range of auditory stimuli?

Orthopterans and Ormia also display phonotaxis to bat calls, which have a much higher frequency than cricket song. However, this phonotaxis is negative rather than positive, which makes sense given that these species are predated by bats. It is currently unknown whether the same circuitry is used for sound localization in both cases.

In crickets, the accuracy of sound localization behavior depends on sound intensity. Specifically, the angle of the cricket’s turn grows with sound intensity (Schildberger and Hörner, 1988). This is unexpected because, in the field, the intensity of a mate’s calling song will vary widely depending on its distance away. So crickets must still be able to locate mates despite variations in intensity. Crickets might solve this problem in the field by combining ILD cues with spectral cues: the high frequency components of song are quieter relative to the low frequency components as distance from the song source increases (Römer, 1987). It should be noted that the available experimental data were collected in tethered crickets running on a spherical treadmill, where spectral cues did not vary with intensity, and so this strategy was not available to the animal (Schildberger and Hörner, 1988).

Some open questions

Some authors have documented a large amount of diversity in comparator neurons in Orthopterans. For example, comparators can differ widely in the relative timing of their excitatory and inhibitory inputs (Römer et al., 1981). The source and significance of this diversity is currently unknown. Another question is why there are several layers of comparison in Orthopterans. These layers are thought to enhance directionality, but it is odd that the local inhibitory neurons (ON1 neurons) are actually more directionally selective than the projection neurons that send signals to the brain (AN1 neurons) (Hennig et al., 2004). A related question is whether there are neurons in the Orthopteran brain that compare the activity of the two AN1 neurons. This is a difficult question to answer because it is currently not possible to label genetically identified populations of neurons in Orthopteran brains.

Background on the Drosophila auditory system

The auditory behaviors of the fruit fly have been studied for over 50 years (Bennet-Clark and Ewing, 1969; Mayr, 1950; Shorey, 1962). During courtship, male flies sing to female flies by flapping a single wing (Bennet-Clark and Ewing, 1967; Bennet-Clark and Ewing, 1969; Ewing, 1978). A pair of flies are much less likely to copulate if either the male is muted (by cutting off its wings) or the female is deafened (Bennet-Clark and Ewing, 1967; Bennet-Clark and Ewing, 1969; von Schilcher, 1976). The male’s song has two components: pulse and sine. The power in both these components is concentrated below 250Hz (Murthy, 2010). It is thought that the pulse component is the most important for courtship (Talyn and Dowse, 2004).

Fruit flies hear using their antennae (Ewing, 1978). The antenna is a flagellar ear, as opposed to a tympanal ear. Flagellar ears use a feathery or hair-like structure, rather than a membrane, to detect sounds. In fruit flies, the antenna has three segments: a1, a2 and a3. Segment a3 is solidly attached to a feathery structure called the arista. Sound causes the arista and a3 to vibrate relative to a2. Segment a2 contains mechanosensory neurons, called Johnston’s Organ Neurons (JONs), which are attached at one end to the tip of a3, which protrudes into a2. Vibration of a3 relative to a2 causes JONs to alternately stretch and relax. JONs transduce this mechanical signal into an electrical signal, which then propagates along their axons to the brain (Göpfert and Robert, 2001; Göpfert and Robert, 2002).

The antenna is a particle velocity detector

Unlike tympanal ears which detect the pressure changes associated with a sound, the antenna detects particle velocity changes (Bennet-Clark, 1971). The antennae cannot detect pressure changes because the aristae are so small that the pressure differences between the front and back of an arista are negligible (Fletcher, 1978). Larger pressure differences would be required to generate a force large enough to move the antenna. However, the viscous forces, generated by air particles flowing forwards and backwards past the arista during a sound, are large enough to move the arista (Fletcher, 1978). The force on the arista at any moment in time depends on the velocity of the air particles and so the arista is a particle velocity detector. The particle velocity component of a sound decays much more rapidly with the distance from the sound source than the pressure component2. Thus the ability of flies to hear sounds decays rapidly as they move away from the source, and so flies are said to have ‘near field’ . Coupled with the fact that the male wing produces small particle velocity fluctuations, this explains why male flies stand very close (<5mm) to females when singing courtship song (Bennet-Clark, 1971).

All higher flies (suborder Brachycera, which includes Drosophila) have antennae with a structure very similar to that of Drosophila. These antennae move in response to sound and have very similar frequency tuning (Robert and Göpfert, 2002). Their resonant frequency is around 400Hz, although this resonant frequency is known to increase with sound intensity in Drosophila (Göpfert and Robert, 2002; Robert and Göpfert, 2002). There is no prior work demonstrating that any higher fly can localize sounds using its antennae3.

2 Particle velocity and displacement can be dissociated because not all particle displacements produced by a sound source produce pressure variation (Tautz, 1979).

3 Ormia also belongs to the Brachycera, and so as well as its two tympanal ears it has two flagellar ears. The tympanal ears detect high frequency sounds in the far field and the flagellar ears detect low frequency sounds in the near field (Robert and Göpfert, 2002; Robert et al., 1992). Previous research has only investigated how Ormia localizes high frequency sounds using its tympanal ears.

Aristal movement is intrinsically directional

Recent work in Drosophila demonstrates that there are binaural acoustic cues available that flies could use to localize sounds (Morley et al., 2012). If the fruit fly’s antennae were pressure detectors, there would be only very tiny binaural cues. Drosophila are even smaller than Ormia, so ITDs would be less than 1µs and ILDs would be negligible. However, the antennae are particle velocity detectors. Particle velocity is a vector quantity (it has both magnitude and direction), while pressure is scalar (it has no direction). The amplitude of vibration of the arista in response to a sound depends on both the magnitude and direction of the air particle velocity. Particles moving at a faster speed will cause the arista to vibrate with a larger amplitude. Particles moving perpendicular to the axis of the arista will evoke the largest vibrations of the arista, all else being equal. This means the left antenna vibrates most in response to sounds coming from the front right and back left, and the right antenna vibrates most in response to sounds coming from the front left and back right.

Flies must compare inputs from their two antennae in order to localize sounds

Despite each antenna having intrinsic directionality, it should not be possible for flies to locate sounds by just measuring the vibrations of one antenna. This is because a loud sound from the antenna’s non-preferred direction can produce the same amplitude of vibrations as a quiet sound from the preferred direction. Fortunately, because the antennae have different preferred directions, the location of the sound can be determined by comparing the amplitude of vibrations at the two antennae. For example, a loud sound coming from the left antenna’s non-preferred direction will produce large vibrations at the right antenna since the direction of the sound will be the right antenna’s preferred direction. In contrast, the quiet sound coming from the left antenna’s preferred direction will produce only small vibrations at the right antenna.
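A small sketch (Python) makes this ambiguity argument concrete. It assumes, purely for illustration, that an antenna's vibration amplitude scales as sound intensity times the absolute cosine of the angle between the sound direction and that antenna's preferred azimuth, with preferred azimuths placed at roughly ±45° (positive azimuths toward the fly's right). A single antenna then confounds intensity with direction, but the ratio of the two antennae's amplitudes is intensity-invariant:

import math

LEFT_PREF, RIGHT_PREF = +45.0, -45.0  # preferred azimuths (deg); assumed values

def vibration_amplitude(intensity, azimuth_deg, preferred_deg):
    return intensity * abs(math.cos(math.radians(azimuth_deg - preferred_deg)))

# A loud sound from well off the left antenna's preferred axis (-30 deg)...
loud_off_axis = vibration_amplitude(2.0, -30.0, LEFT_PREF)
# ...produces the same left-antenna amplitude as a quiet sound from its
# preferred direction:
quiet_pref = vibration_amplitude(0.518, LEFT_PREF, LEFT_PREF)
print(f"left antenna alone: {loud_off_axis:.3f} vs {quiet_pref:.3f}")  # ~equal: ambiguous

def amplitude_ratio(intensity, azimuth_deg):
    left = vibration_amplitude(intensity, azimuth_deg, LEFT_PREF)
    right = vibration_amplitude(intensity, azimuth_deg, RIGHT_PREF)
    return left / right

# The ratio is the same at any intensity, so it unambiguously reflects direction.
print(amplitude_ratio(1.0, 30.0), amplitude_ratio(10.0, 30.0))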

Boundary layer effects create differences in peak air particle velocity at the antennae

Comparing the amplitudes of vibration of the intrinsically directional antennae may be sufficient for flies to localize sounds. However, recently it was shown that there are several other mechanisms that may further enhance a fly’s ability to localize sounds. First, because the aristae exist in the boundary layer of the fly, the presence of the fly’s head dramatically changes the peak particle speed at each arista. The change in peak particle speed depends on the direction of the sound source relative to the fly and complements the intrinsic directionality of the antennae. Peak speed is increased for sounds coming from an antenna’s preferred direction, and decreased for sounds coming from the non-preferred direction (Morley et al., 2012). Note that without the boundary layer effects, the velocity at both antennae would be identical and sound direction could only be determined because each antenna is intrinsically tuned to a different direction of particle motion. Thus, boundary layer effects further accentuate differences in the amplitude of movements of the two antennae, making it easier to decode sound location by comparing their relative movements.

The phase of antennal movements also encodes information about sound source location

The phase of antennal movements depends on the azimuthal angle of a sound source. For example, when a speaker is moved from the front-right diagonal to the back-left diagonal, the direction (phase) of all antennal movements is inverted. In this case, the relative amplitude of movement of the two antennae is identical (Morley et al., 2012), and so phase could be used to disambiguate sound sources in these two locations. Similarly, when sounds arrive from the cardinal directions (front, back, left, and right), both antennae move with equal amplitude. What distinguishes these four directions is the phase of antennal movements. In the same way as for sound sources on the diagonals, the phase of an antenna’s movement is inverted for stimuli 180° apart. For stimuli 90° apart, the phase difference between the two antennae is different. When stimuli arrive from the front or back, the antennae oscillate out of phase with each other (i.e. when one antenna moves to the left, the other antenna moves to the right). When stimuli arrive from the left or right, the antennae oscillate in phase. So again, phase information could be used to disambiguate these four locations.
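As a toy illustration (Python; the sign convention is an assumption, with positive displacement meaning movement toward the fly's left), the in-phase versus anti-phase relationships described above can be read out from the sign of the correlation between the two antennal displacement signals, even though the amplitudes are identical:

import numpy as np

t = np.linspace(0, 0.01, 1000)
vibration = np.sin(2 * np.pi * 250 * t)  # 250 Hz, within the courtship-song range

stimuli = {
    "front": (vibration, -vibration),   # front/back: antennae in anti-phase
    "left":  (vibration,  vibration),   # left/right: antennae in phase
}

for direction, (left_ant, right_ant) in stimuli.items():
    # The sign of the correlation distinguishes in-phase from anti-phase
    # movement, even though the two amplitudes are equal.
    in_phase = np.dot(left_ant, right_ant) > 0
    print(f"{direction}: antennae move {'in phase' if in_phase else 'in anti-phase'}")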

Notably, we know that this phase information is encoded by neurons in the Drosophila brain. The membrane voltage of B1 neurons oscillates at the same frequency as a sound stimulus for sound stimuli within a certain frequency range (Azevedo and Wilson, 2017). There are two opponent populations of B1 neurons that oscillate out of phase with each other. One population depolarizes when the ipsilateral antenna moves to the left while the other population hyperpolarizes (Azevedo and Wilson, 2017). Thus Drosophila could determine the phase of movement of a single antenna by seeing which B1 neurons are active at sound onset. And Drosophila could determine the relative phase of movement of the two antennae by comparing the phases of membrane voltage oscillations of B1 neurons that get input from the left antenna with B1 neurons that get input from the right antenna.

In the next chapter we characterize sound localization behavior in Drosophila and investigate the relative contribution of amplitude vs. phase cues.

References

Azevedo, A. W. and Wilson, R. I. (2017). Active Mechanisms of Vibration Encoding and Frequency Filtering in Central Mechanosensory Neurons. Neuron 96, 446-460.e9.

Bennet-Clark, H. C. (1971). Acoustics of Insect Song. Nature 234, 255–259.

Bennet-Clark, H. C. and Ewing, A. W. (1967). Stimuli provided by Courtship of Male Drosophila melanogaster. Nature 215, 669–671.

Bennet-Clark, H. C. and Ewing, A. W. (1969). Pulse interval as a critical parameter in the courtship song of Drosophila melanogaster. Animal Behaviour 17, 755–759.

Bernstein, L. R. (2001). Auditory processing of interaural timing information: New insights. J. Neurosci. Res. 66, 1035–1046.

Brand, A., Behrend, O., Marquardt, T., McAlpine, D. and Grothe, B. (2002). Precise inhibition is essential for microsecond interaural time difference coding. Nature 417, 543–547.

Carr, C. E. and Konishi, M. (1990). A circuit for detection of interaural time differences in the brain stem of the barn owl. J. Neurosci. 10, 3227–3246.

Day, M. L. and Semple, M. N. (2011). Frequency-dependent interaural delays in the medial superior olive: implications for interaural cochlear delays. J. Neurophysiol. 106, 1985–1999.

Ewing, A. W. (1978). The antenna of Drosophila as a ‘love song’ receptor. Physiological Entomology 3, 33–36.

Faulkes, Z. and Pollack, G. S. (2000). Effects of inhibitory timing on contrast enhancement in auditory circuits in crickets (Teleogryllus oceanicus). J. Neurophysiol. 84, 1247–1255.

Fletcher, N. H. (1978). Acoustical response of hair receptors in insects. J. Comp. Physiol. 127, 185–189.

Franken, T. P., Roberts, M. T., Wei, L., Golding, N. L. and Joris, P. X. (2015). In vivo coincidence detection in mammalian sound localization generates phase delays. Nature Neuroscience 18, 444–452.

Gohl, D. M., Silies, M. A., Gao, X. J., Bhalerao, S., Luongo, F. J., Lin, C.-C., Potter, C. J. and Clandinin, T. R. (2011). A versatile in vivo system for directed dissection of gene expression patterns. Nat. Methods 8, 231–237.

Göpfert, M. C. and Robert, D. (2001). Biomechanics. Turning the key on Drosophila audition. Nature 411, 908.

Göpfert, M. C. and Robert, D. (2002). The mechanical basis of Drosophila audition. J. Exp. Biol. 205, 1199–1208.

Grothe, B., Pecka, M. and McAlpine, D. (2010). Mechanisms of sound localization in mammals. Physiol. Rev. 90, 983–1012.

Hedwig, B. and Pollack, G. S. (2008). 3.31 - Invertebrate Auditory Pathways. In The Senses: A Comprehensive Reference, pp. 525–564. New York: Academic Press.

Hedwig, B. and Poulet, J. F. A. (2005). Mechanisms underlying phonotactic steering in the cricket Gryllus bimaculatus revealed with a fast trackball system. J. Exp. Biol. 208, 915–927.

Helversen, D. von and Helversen, O. von (1997). Recognition of sex in the acoustic communication of the grasshopper Chorthippus biguttulus (Orthoptera, Acrididae). J Comp Physiol A 180, 373–386.

Helversen, D. von and Rheinlaender, J. (1988). Interaural intensity and time discrimination in an unrestraint grasshopper: a tentative behavioural approach. J. Comp. Physiol. 162, 333–340.

Hennig, R. M., Franz, A. and Stumpner, A. (2004). Processing of auditory information in insects. Microsc. Res. Tech. 63, 351–374.

Horseman, G. and Huber, F. (1994). Sound localisation in crickets. J Comp Physiol A 175, 389– 398.

Jenett, A., Rubin, G. M., Ngo, T.-T. B., Shepherd, D., Murphy, C., Dionne, H., Pfeiffer, B. D., Cavallaro, A., Hall, D., Jeter, J., et al. (2012). A GAL4-Driver Line Resource for Drosophila Neurobiology. Cell Reports 2, 991–1001.

Joris, P. X., Schreiner, C. E. and Rees, A. (2004). Neural processing of amplitude-modulated sounds. Physiol. Rev. 84, 541–577.

Knudsen, E. I. and Konishi, M. (1979). Mechanisms of sound localization in the barn owl (Tyto alba). J. Comp. Physiol. 133, 13–21.

Konishi, M. (1993). Neuroethology of sound localization in the owl. J Comp Physiol A 173, 3–7.

Konishi, M. (2006). Listening with two ears. Scientific American 16, 28–35.

Macpherson, E. A. and Middlebrooks, J. C. (2002). Listener weighting of cues for lateral angle: The duplex theory of sound localization revisited. The Journal of the Acoustical Society of America 111, 2219–2236.

Mason, A. C., Oshinsky, M. L. and Hoy, R. R. (2001). Hyperacute directional hearing in a microscale auditory system. Nature 410, 686–690.

Mayr, E. (1950). The Role of the Antennae in the Mating Behavior of Female Drosophila. Evolution 4, 149.

Middlebrooks, J. C. (2015). Sound localization. Handb Clin Neurol 129, 99–116.

Mörchen, A., Rheinlaender, J. and Schwartzkopff, J. (1978). Latency shift in insect auditory nerve fibers. Naturwissenschaften 65, 656–657.

Morley, E. L., Steinmann, T., Casas, J. and Robert, D. (2012). Directional cues in Drosophila melanogaster audition: structure of acoustic flow and inter-antennal velocity differences. Journal of Experimental Biology 215, 2405–2413.

Murthy, M. (2010). Unraveling the auditory system of Drosophila. Current Opinion in Neurobiology 20, 281–287.

Olsen, S. R., Bhandawat, V. and Wilson, R. I. (2010). Divisive Normalization in Olfactory Population Codes. Neuron 66, 287–299.

Oshinsky, M. L. and Hoy, R. R. (2002). Physiology of the auditory afferents in an acoustic parasitoid fly. J. Neurosci. 22, 7254–7263.

Pecka, M., Brand, A., Behrend, O. and Grothe, B. (2008). Interaural time difference processing in the mammalian medial superior olive: the role of glycinergic inhibition. J. Neurosci. 28, 6914–6925.

Rangel, A., Camerer, C. and Montague, P. R. (2008). A framework for studying the neurobiology of value-based decision making. Nat Rev Neurosci 9, 545–556.

Robert, D. (2005). Directional Hearing in Insects. In Sound Source Localization (ed. Popper, A. N. and Fay, R. R.), pp. 6–35. New York: Springer.

Robert, D. and Göpfert, M. C. (2002). Acoustic sensitivity of fly antennae. J. Insect Physiol. 48, 189–196.

Robert, D., Amoroso, J. and Hoy, R. R. (1992). The evolutionary convergence of hearing in a parasitoid fly and its cricket host. Science 258, 1135–1137.

Römer, H. (1987). Representation of auditory distance within a central neuropil of the bushcricket Mygalopsis marki. J. Comp. Physiol. 161, 33–42.

Römer, H., Rheinlaender, J. and Dronse, R. (1981). Intracellular studies on auditory processing in the metathoracic ganglion of the locust. J. Comp. Physiol. 144, 305–312.

Schildberger, K. and Hörner, M. (1988). The function of auditory neurons in cricket phonotaxis. J. Comp. Physiol. 163, 621–631.

Selverston, A. I., Kleindienst, H. U. and Huber, F. (1985). Synaptic connectivity between cricket auditory interneurons as studied by selective photoinactivation. J. Neurosci. 5, 1283– 1292.

Shorey, H. H. (1962). Nature of the Sound Produced by Drosophila melanogaster during Courtship. Science 137, 677–678.

Slattery III, W. H. and Middlebrooks, J. C. (1994). Monaural sound localization: Acute versus chronic impairment. Hear. Res. 75, 38–46.

Solomon, S. G. and Lennie, P. (2007). The machinery of colour vision. Nat Rev Neurosci 8, 276– 286.

Talyn, B. C. and Dowse, H. B. (2004). The role of courtship song in sexual selection and species recognition by female Drosophila melanogaster. Animal Behaviour 68, 1165–1180.

Tautz, J. (1979). Reception of particle oscillation in a medium — an unorthodox sensory capacity. Naturwissenschaften 66, 452–461.

Tirian, L. and Dickson, B. (2017). The VT GAL4, LexA, and split-GAL4 driver line collections for targeted expression in the Drosophila nervous system. bioRxiv 198648.

Tollin, D. J. and Yin, T. C. T. (2005). Interaural phase and level difference sensitivity in low- frequency neurons in the lateral superior olive. J. Neurosci. 25, 10648–10657.

Tsai, J. J., Koka, K. and Tollin, D. J. (2010). Varying Overall Sound Intensity to the Two Ears Impacts Interaural Level Difference Discrimination Thresholds by Single Neurons in the Lateral Superior Olive. J Neurophysiol 103, 875–886.

von Schilcher, F. (1976). The role of auditory stimuli in the courtship of Drosophila melanogaster. Animal Behaviour 24, 18–26.

Zheng, Z., Lauritzen, J. S., Perlman, E., Robinson, C. G., Nichols, M., Milkie, D., Torrens, O., Price, J., Fisher, C. B., Sharifi, N., et al. (2018). A Complete Electron Microscopy Volume of the Brain of Adult Drosophila melanogaster. Cell 174, 730-743.e22.


CHAPTER 2: SOUND LOCALIZATION BEHAVIOR IN DROSOPHILA DEPENDS ON INTER-ANTENNA VIBRATION AMPLITUDE COMPARISONS

Attributions: This chapter is a modified version of a manuscript, which has been accepted by The Journal of Experimental Biology, with the following authors: Batchelor, A.V. and Wilson, R.I.

Introduction

Sound localization is a basic function of auditory systems. In organisms with tympanal ears, sound localization depends primarily on inter-aural differences in the amplitude of eardrum vibrations, as well as inter-aural differences in the timing of those vibrations (Ashida and Carr, 2011; Grothe et al., 2010; Middlebrooks, 2015). Lord Rayleigh (1907) was the first to realize that amplitude differences are mainly used for localizing high-frequency sounds, whereas timing differences are mainly used for localizing low-frequency sounds.

For insects with tympanal ears, sound localization can be a heroic achievement, because the insect's body is small, and so interaural differences are small (Michelsen, 1992; Robert, 2005; Robert and Hoy, 1998). Some insects, such as the tiny fly Ormia ochracea, have specialized tympanal ears that allow them to detect inter-aural timing differences as small as 50 ns (Mason et al., 2001; Miles et al., 1995; Robert et al., 1998). Specializations for directional hearing can also be found at the level of the insect central nervous system, as demonstrated by electrophysiological studies in crickets, locusts, and katydids (Atkins and Pollack, 1987; Brodfuehrer and Hoy, 1990; Horseman and Huber, 1994a; Horseman and Huber, 1994b; Marsat and Pollack, 2005; Molina and Stumpner, 2005; Rheinlaender and Römer, 1980; Schildberger and Hörner, 1988; Selverston et al., 1985). In insects, the most well-studied behavioral evidence of sound localization ability is phonotaxis, defined as sound-guided locomotion (Atkins et al., 1984; Bailey and Thomson, 1977; Hedwig and Poulet, 2004; Mason et al., 2001; Schildberger and Hörner, 1988; Schildberger and Kleindienst, 1989; Schmitz et al., 1982).

Because Drosophila melanogaster are small (even tinier than Ormia ochracea), sound localization might seem impossible. However, Drosophila have evolved a non-tympanal auditory organ which is well-suited to directional hearing. Protruding from the distal antennal segment (a3) is a hairy planar branching structure called the arista (Fig. 1A). The arista is rigidly coupled to a3, so when air particles push the arista, a3 rotates freely (around its long axis) relative to the proximal antenna (Göpfert and Robert, 2002). Sound waves are composed of air particle velocity oscillations as well as pressure oscillations (Kinsler and Frey, 1962), and it is the air particle velocity component of sound which drives sound-locked antennal vibrations (Göpfert and Robert, 2002).

The directional tuning of the Drosophila auditory organ arises from two factors. First, the movement of the arista-a3 structure is intrinsically most sensitive to air particle movement perpendicular to the plane of the arista (Morley et al., 2012). The two antennae are intrinsically tuned to different air movement directions, because the two aristae are oriented at different azimuthal angles (Fig. 1B). Second, boundary layer effects distort the flow of air particles around the head. Specifically, the shape of the head creates high air particle velocities at the arista contralateral to the sound source, with comparatively lower particle velocities at the ipsilateral arista (Fig. 1C) (Morley et al., 2012). These boundary layer effects reinforce the left-right asymmetry in antennal vibration amplitudes when a sound source is lateralized. Taken together, the intrinsic directionality of the antennae (Fig. 1B) and these boundary layer effects (Fig. 1C) can produce large inter-antennal differences in vibration amplitudes when a sound source is lateralized (Fig. 1D). Specialized neurons in Johnston's organ transduce these vibrations, with larger-amplitude vibrations producing larger neural responses (Effertz et al., 2011; Kamikouchi et al., 2009; Lehnert et al., 2013; Patella and Wilson, 2018).

In short, there are good reasons why Drosophila should be capable of sound localization. However, this prediction has not been tested. Most studies of auditory behavior in Drosophila have focused on the effects of auditory stimuli on locomotor speed. For example, when a courting male sings to a receptive walking female, it causes her to gradually slow down, with more singing producing a higher probability of slowing (Bussell et al., 2014; Clemens et al., 2015; Coen et al., 2014). Drosophila also transiently suppress locomotion and other movements in response to nonspecific sounds; these behaviors are termed acoustic startle responses (Lehnert et al., 2013; Menda et al., 2011). No studies have described evidence of sound localization ability in Drosophila.

Here we report that walking Drosophila turn in response to lateralized sounds. They turn toward sounds in their front hemifield (positive phonotaxis), but they turn away from sounds in their rear hemifield (negative phonotaxis), and they do not turn at all in response to sounds originating from 90° or -90°. All these results can be explained by a simple heuristic: Drosophila compare vibration amplitudes at the two antennae, and they turn away from the antenna with larger-amplitude vibrations. Although this heuristic is simple, we argue that it can produce potentially adaptive outcomes during courtship and exploration.

Figure 1: Directional tuning of sound-evoked antennal vibrations

(A) Segments of the Drosophila antenna. The arista is rigidly coupled to a3. The arista and a3 rotate (relative to a2) in response to sound. The dashed line is the axis of rotation.

(B) Dorsal view of the head. The aristae (red and black lines) are oriented approximately 45° from the midline. The rotation of the arista-a3 structure (about the long axis of a3) is intrinsically most sensitive to air particle movement perpendicular to the plane of the arista.

(C) Schematized air speed heatmap for a sound source positioned in the horizontal plane at an azimuthal angle of 45°. Air speed is highest in the vicinity of the contralateral antenna, due to boundary layer effects (adapted from data in Morley et al., 2012).

(D) The amplitude of antennal vibrations is plotted in polar coordinates as a function of the azimuthal angle of the sound source (adapted from data in Morley et al., 2012). This mechanical tuning profile reflects the intrinsic directionality of each arista, plus boundary layer effects.


Materials and Methods

Fly strains and culture conditions

Experiments were performed using cultures of Drosophila melanogaster (Meigen) established from 200 wild-caught individuals (Frye and Dickinson, 2004). Flies were cultured in 175 ml plastic bottles on custom food consisting of: 83.40% water, 7.42% molasses solids, 5.50% cornmeal, 2.31% inactivated yeast, 0.51% agar, 0.32% ethanol, 0.28% propionic acid, 0.19% phosphoric acid, 0.08% Tegosept (Archon Scientific, Durham, NC, USA). Culture bottles were started with five female and three male flies, and these parental flies were left in the bottle until the first progeny eclosed. Bottles were stored in a 25°C incubator with a 12 hour / 12 hour light/dark (L/D) cycle and 50-70% humidity. Progeny were collected on the day of eclosion (0 days old) on CO2 pads and housed (grouped by sex) in vials at 25°C. Females were used for all experiments except Fig. 9A-D. Flies were aged 2 days (Figs. 2-5) or 1 day (Figs. 6-9). A few flies in Figs. 4-5 were aged 3 days, and a few flies in Figs. 6-9 were aged 2 days.

Antennal gluing

Gluing was performed the day before an experiment. First, a fly was cold-anaesthetized and moved to a metal block cooled by ice water and covered with a damp low-lint wipe. The fly was immobilized with two glass slides placed at its sides. A small drop of glue (KOA-300, Poly-Lite, York, PA, USA) was mouth-pipetted onto the antenna(e). To immobilize the a1-a2 joint, we placed flies ventral-side down on the metal block, and we used glue to attach a2 to the head. We ensured the glue did not change the antenna's resting position. To immobilize the a2-a3 joint, we placed flies dorsal side down, and the glue drop was placed on the medial side of the antenna over the joint. The glue was cured with 3-5 seconds of UV light (LED-100 or LED-200, Electro-Lite, Bethel, CT, USA, held ~1 cm from the fly). "Sham-glued" flies underwent the same steps except that no glue was placed anywhere on the fly.

Tethering

Flies were tethered immediately before an experiment. Flies were immobilized on a cool block as described above. A third glass slide was placed, like a bridge, over the two lateral slides and the abdomen. A drop of glue was placed on the end of a tungsten wire tether, and the wire was lowered onto the thorax with a micromanipulator. The glue was UV-cured as described above. Next, bilateral drops of glue were used to attach the posterior-lateral eye to the thorax and again cured as above.

Tethered walking experiments

The room had a temperature ranging from 21.3°C to 22.9°C with a mean of 22.3°C. The humidity ranged from 21-51% with a mean of 30%. Most experiments were started 0-6 hours before the L→D transition of the fly's L/D cycle, but occasionally experiments were started up to 8 hours before or 5 hours after this time. The fly was lowered onto the spherical treadmill using a micromanipulator attached to the tether. Three cameras with zoom lenses (anterior, dorsal, and lateral views) were used to align the center of the fly's thorax with the center of the ball and to adjust the fly's height from the ball. The cameras were one of two USB 2.0 models: FMVU-03MTM-CS or FMVU-13S2C-CS (FLIR Integrated Imaging Solutions Inc., Richmond, BC, Canada). The lenses were also one of two models: MLM3X-MP (Computar, Cary, NC, USA) or JZ1169M mold (SPACECOM, Whittier, CA, USA). The MLM3X-MP lens was mounted to the camera with a 5 mm spacer. The JZ1169M mold was mounted with a 10 mm spacer to increase magnification. The fly was then left to habituate for ~30 minutes. The fine alignment of the fly was sometimes adjusted during this period to reduce systematic biases in walking direction.


During an experiment, stimuli were presented in a block design: the order of stimuli within the block was random, and within a block each stimulus condition was presented the same number of times. Stimuli with the same waveform but delivered from a different sound source location were treated as different stimulus conditions. The block size used was either two times (Figs. 2, 6-8) or four times (Figs. 2, 3, 4, 5, and 9) the number of different stimulus conditions used in an experiment. Some experiments were run with a 'no stimulus' condition (all except Fig. 6 and part of Fig. 7), but the data for the no stimulus condition are only shown in Fig. 5 and Figs. 2-4. Each trial began with two seconds of silence, followed by the stimulus, and concluded with another two seconds of silence. Between each trial, there was a variable interval of ~10-20 seconds.
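A minimal sketch of this block randomization in R (for illustration only; the condition labels and block count are placeholders, and the original experiment-control code is not reproduced here):

# Each stimulus condition appears the same number of times per block, in random order.
conditions <- c("pip_45", "pip_-45", "no_stimulus")   # illustrative labels
multiplier <- 2                                       # 2 or 4, as described above
one_block <- function() sample(rep(conditions, multiplier))
session <- unlist(replicate(20, one_block(), simplify = FALSE))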

Each experiment was run for as long as possible (maximum = 9.5 hours). For each fly and each stimulus, at least 56 "accepted" trials were acquired; accepted trials were defined as trials in which the resultant velocity exceeded a threshold of 10 mm/s without saturating (see below). The maximum number of accepted trials per stimulus was 884 and the mean was 296. Mean forward velocity during the pre-stimulus period was generally ≥10 mm/s. Experiments were performed on 59 flies and 49 were included in analyses; 4 experiments were stopped because the resultant velocity never consistently reached threshold (10 mm/s); 3 were stopped because the ball of the spherical treadmill got stuck before sufficient trials were acquired; 3 were excluded post hoc because the flies did not consistently run at or above 10 mm/s for any portion of the experiment.

Spherical treadmill apparatus

A hollow plastic ball (1/4" diameter) was held in a plenum chamber, supported by a cushion of air under positive pressure, and an optical sensor was positioned below the ball. Spacers 1/8" thick were placed between the sensor lens and the plenum so that the reference surface of the lens was ~1/8" from the ball surface. Data were acquired using an ADNS-9800 High-Performance LaserStream™ Gaming Sensor (Avago, San Jose, CA, USA) and breakout board (JACK Enterprises, Cookeville, TN, USA). An Arduino Due read data from the sensor and sent a digital output to a USB-6343 DAQ (National Instruments). The arduino-clockwork library (https://github.com/UniTN-Mechatronics/arduino-clockwork) was used to ensure data was read every 10 ms. The sensor outputs x and y velocities with a resolution of 0.31 mm/s (the Configuration_1 register was set to 8200 counts per inch). The Arduino sent an 8-bit signal to the DAQ for each axis, so velocity saturated at ±39.37 mm/s. For later experiments, to reduce saturation, the output velocity range of the Arduino was shifted so that the output range was -15.5 mm/s to 63.24 mm/s. The sensor was factory-calibrated.
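As a check on these numbers, the saturation limit follows directly from the 8-bit encoding (a minimal R sketch; the function name is illustrative):

# One signed 8-bit count = 0.31 mm/s, so the span of -127..127 counts
# is plus or minus 39.37 mm/s, i.e. the saturation limit quoted above.
counts_to_velocity <- function(counts, resolution = 0.31) counts * resolution
counts_to_velocity(c(-127, 127))   # -39.37  39.37 (mm/s)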

Yaw velocity was not measured because the sensor was placed under the ball; the sensor only measured forward velocity (pitch) and lateral velocity (roll). Orienting behaviors can be measured by monitoring roll (Gaudry et al., 2013) because roll and yaw are correlated; however, because we do not measure yaw, we will underestimate the magnitude of turns.

The apparatus and speakers were all contained in a sound-absorbing, light-proof box (Lehnert et al., 2013). The floor consisted of a smooth surface with an optomechanical breadboard. There were no light sources except for the laser used by the ball motion sensor (λ = 832-865 nm), which is outside the visible range for flies (Salcedo et al., 1999).

Design of sound stimuli

Every figure contains data obtained with a stimulus consisting of 10 pips with a carrier frequency of 225 Hz. Each pip lasted for ~15 ms (adjusted slightly to be a multiple of half a wavelength).

The duration between pip onsets was 34 ms. The amplitude envelope was cosine-shaped with a wavelength equal to the duration of the pip.
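A minimal sketch of this pip-train construction, written here in R for concreteness (the original stimuli were synthesized in MATLAB, as noted below):

fs <- 40000                                          # sample rate (Hz)
f_carrier <- 225                                     # carrier frequency (Hz)
half_cycle <- 1 / (2 * f_carrier)
pip_dur <- round(0.015 / half_cycle) * half_cycle    # ~15 ms, a multiple of a half-wavelength
t <- seq(0, pip_dur, by = 1 / fs)
envelope <- 0.5 * (1 - cos(2 * pi * t / pip_dur))    # cosine envelope, wavelength = pip duration
pip <- envelope * sin(2 * pi * f_carrier * t)
gap <- numeric(round((0.034 - pip_dur) * fs))        # 34 ms between pip onsets
train <- rep(c(pip, gap), 10)                        # 10 pips, ~322 ms total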


For the experiments in Fig. 8, additional stimuli were used. Pips were identical to those in other figures except the carrier frequency was 100, 140, 300, or 800 Hz, in addition to 225 Hz. Sustained tone stimuli were delivered at the same frequencies; these had the same total duration as the pip trains (0.322 seconds) and were also modulated by a cosine-shaped envelope, with a wavelength equaling the duration of a pip from the pip stimulus, so that the tones and pips had the same on- and offset profile. All stimuli were synthesized using MATLAB 2017a and sampled at 40 kHz.

Sound delivery and sound intensity measurements

Sound stimuli were delivered from four speakers (ScanSpeak Discovery 10F/4424G00, 89.5 mm diameter) placed 22 cm from the fly, centered on the horizontal plane of the fly. Stimuli were only delivered from one speaker at a time. These speakers were able to produce the frequencies we used with minimal distortion (Fig. S1). Speakers were driven by either a Crown D-45 amplifier (HARMAN Professional Solutions, Northridge, CA, USA) or a SLA-1 amplifier (Applied Research and Technology, Niagara Falls, NY, USA). During calibration, each speaker was driven by the same amplifier and channel that was used during experiments.

All stimuli were calibrated to produce a peak particle velocity of 1.25 mm/s at the fly (88 dB SVL), verified for all speakers and all carrier frequencies (Fig. S1). This value was chosen because it is close to the intensity of male song experienced by females during courtship (Bennet-Clark, 1971; Morley et al., 2018). Sound intensity at the fly's location was measured using a particle velocity microphone (Knowles Electronics NR-23158) and pre-amplifier (Stanford Research Systems SR560) as described by Lehnert et al. (2013). The pre-amplifier amplified (500× gain) and band-pass filtered the signal (6 dB/octave roll-off, 3 Hz and 30 kHz cutoffs). Sound intensity was measured in the same box where behavioral experiments were performed. The particle velocity microphone was placed in the same position, relative to the speaker, that the fly occupied during behavioral experiments. The front face of the particle velocity microphone was parallel to the front face of the speaker. Data from 10 trials were averaged and the pre-stimulus mean was subtracted. The data were then integrated and high-pass filtered with a 10 Hz cutoff. Peak particle velocity was estimated by taking the mean of the peak from several sound cycles. Finally, the command voltage waveform for each stimulus was adjusted (by rescaling the digital command waveform) until the measured peak particle velocity was within 10% of 1.25 mm/s for all stimuli. This was necessary to compensate for the frequency characteristics of the speakers.
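The quoted 88 dB figure is consistent with the 1.25 mm/s calibration target, assuming the conventional particle velocity reference of 5 × 10^-8 m/s (the particle velocity associated with 20 µPa in air):

20 * log10(1.25e-3 / 5e-8)   # ~88 dB SVL, matching the value quoted above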

Data analysis

Data were analyzed offline using custom routines in MATLAB 2016b. Raw velocities were processed by first converting each of the 8-bit binary vectors, output by the Arduino, to signed integers in units of mm/s. A mode filter was used to remove errors caused by asynchronous updating of the Arduino’s digital output channels. Velocities were integrated to obtain x and y displacements. The x and y displacements were set to 0 at the start of the sound stimulus. The data were then downsampled from 40 kHz to 100 Hz.
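A condensed sketch of this pipeline in R (the original routines were written in MATLAB; raw_counts, stim_onset_idx, and the filter window are illustrative placeholders):

fs_raw <- 40000; fs_out <- 100
# Mode filter: replace each sample with the most common value in a short
# window, suppressing glitches from asynchronous channel updates.
mode_filter <- function(x, k = 5) {
  modal <- function(w) as.numeric(names(which.max(table(w))))
  zoo::rollapply(x, k, modal, fill = "extend")
}
v <- mode_filter(raw_counts * 0.31)          # counts -> mm/s
disp <- cumsum(v) / fs_raw                   # integrate velocity -> displacement (mm)
disp <- disp - disp[stim_onset_idx]          # zero displacement at stimulus onset
disp <- disp[seq(1, length(disp), by = fs_raw / fs_out)]   # downsample 40 kHz -> 100 Hz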

Trials were excluded from further analysis if the mean pre-stimulus resultant velocity was below threshold (10 mm/s; Fig. S2). For most flies, 7 to 32% of trials were excluded for this reason. For two flies, ~70% of trials were excluded because these flies ran consistently at the beginning of the experiment but stopped running consistently later in the experiment. Trials were also excluded if the velocity exceeded the maximum output from the Arduino: 0.1 to 14% of trials (mean = 4%) were excluded for this reason. In the figures, only a portion of the two-second pre-stimulus period is plotted; movement outside the plotted period affects the mean pre-stimulus resultant velocity that was used to select trials.
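In R terms, the trial-selection rule amounts to (velocity vectors and index sets are illustrative):

resultant <- sqrt(v_forward^2 + v_lateral^2)        # mm/s, per sample
accept <- mean(resultant[pre_stim_idx]) >= 10 &&    # 10 mm/s pre-stimulus threshold
          !any(saturated_samples)                   # also drop trials that hit the output limit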


Measurements were corrected so that the mean pre-stimulus running direction was straight ahead. Specifically, for each trial, the median x and y displacements during the pre-stimulus period of the surrounding 50 trials were calculated to obtain a 'median trajectory'. The mean x-y displacement of this 'median trajectory' was then calculated, and the angle between this trajectory and a straight-ahead trajectory was measured. This angle was then used to rotate the x and y displacements for that trial. This same angle was also used to rotate the x and y velocities for that trial.
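A sketch of this correction in R, taking straight ahead as the +y axis (variable names are illustrative):

# Median pre-stimulus trajectory across the surrounding 50 trials
# (rows = time samples, columns = trials):
med_x <- apply(pre_x_surrounding, 1, median)
med_y <- apply(pre_y_surrounding, 1, median)
theta <- atan2(mean(med_x), mean(med_y))   # angle between median trajectory and straight ahead
# Rotate the trial's displacements (and, identically, its velocities):
x_rot <- x * cos(theta) - y * sin(theta)
y_rot <- x * sin(theta) + y * cos(theta)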

To summarize each experiment in stripchart format, we measured lateral velocity and the decrease in forward velocity at two specific time points. Namely, lateral velocity was measured at stimulus offset. The decrease in forward velocity was computed as the forward velocity just before stimulus onset, minus the forward velocity 120 ms after stimulus onset. In trials where no stimulus was delivered, these values were measured at the equivalent time point within the trial epoch.
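For the standard 322 ms stimulus, with t = 0 at stimulus onset, these two metrics reduce to (index names are illustrative; in practice the nearest sample is taken):

lateral_at_offset <- v_lateral[t == 0.322]                        # lateral velocity at stimulus offset
forward_decrease  <- v_forward[t == 0] - v_forward[t == 0.120]    # drop over the first 120 ms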

Statistical testing

Statistical analyses were performed in MATLAB 2016b and R Version 3.5.1. For Fig. 2E, Fig. 5C and Fig. 7F, paired two-sided t-tests were performed. For Fig. 7E, a two-sided one-sample t-test was performed on the combined data for the two cardinal speakers (90° and -90°).

For Fig. 4C-D, a Welch’s ANOVA for unequal variances was performed using the oneway function in the lattice package (version 0.20-35) in R. This test showed there were significant differences between the three conditions for both lateral and forward velocity (p < 0.005 in both cases). Given the significant difference, the Games-Howell post-hoc test was run using the userfriendlyscience package (version 0.7.1) in R.
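For reference, a Welch's ANOVA of this form can equivalently be expressed with base R's stats functions (a sketch, not the package call used above; my_data and its columns are illustrative):

oneway.test(velocity ~ condition, data = my_data, var.equal = FALSE)   # Welch's ANOVA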

For Fig. 6C-D and Fig. 7H, a linear mixed model was used to model the data with the speaker angle as a fixed effect and fly identity as a random effect (to account for the repeated-measures design). The linear mixed model was implemented with the lme function in the nlme package (version 3.1-13). To test whether the speaker angle had an effect on velocity, we compared this model to a baseline model with the angle fixed effect removed. The two models are written as follows in R:

baseline_model <- lme(velocity ~ 1, random = ~1 | fly, data = my_data, method = "ML")

augmented_model <- lme(velocity ~ angle, random = ~1 | fly, data = my_data, method = "ML")

We performed a likelihood ratio test to test whether the augmented model was significantly better than the baseline model (implemented with R's built-in anova function). Given that the augmented model was significantly better, we performed post-hoc Tukey tests to determine which speaker angles had significantly different effects (implemented with the multcomp package, version 1.4-8).
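In R, this model comparison and the post-hoc contrasts are typically invoked as follows (a sketch consistent with the packages named above; angle is treated as a factor):

anova(baseline_model, augmented_model)                          # likelihood ratio test
library(multcomp)
summary(glht(augmented_model, linfct = mcp(angle = "Tukey")))   # post-hoc Tukey contrasts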

To analyze the data in Fig. 7G, we used the same linear mixed model approach except that an extra term was added to both the baseline model and the augmented model. This extra term allows the estimated variances to differ across speaker angles. The models with this additional term are written in R as:

baseline_model <- lme(velocity ~ 1, random = ~1 | fly, weights = varIdent(form = ~1 | angle), data = my_data, method = "ML")

augmented_model <- lme(velocity ~ angle, random = ~1 | fly, weights = varIdent(form = ~1 | angle), data = my_data, method = "ML")


We did not perform statistics on Fig. 8 or Fig. 9 because it would be difficult to distinguish subtle differences given the small sample sizes used.


Results

Lateralized sounds elicit phonotaxis as well as acoustic startle

To determine whether walking Drosophila alter their locomotor behavior in response to sounds, we placed tethered flies on a spherical treadmill (Fig. 2A), and we delivered sounds from azimuthal angles of 45° and -45° (Fig. 2B). In each trial, we delivered a sound waveform from one of the two speakers; sounds consisted of 10 pips at 225 Hz, with an inter-pip interval of 34 ms (Fig. 2C).

This stimulus was designed to approximate the pulse song of male Drosophila. Conspecific song is a sound likely to be encountered by both females (which we focused on initially) and males (which we tested in separate experiments described later). The sound intensity was 1.25 mm/s peak particle velocity (88 dB SVL) at the fly's location, which is comparable to the intensity of natural courtship song according to classic theoretical predictions (80-95 dB SVL, Bennet-Clark, 1971) and recent measurements (88-99 dB SVL, median sound levels for sine and pulse song respectively; Morley et al., 2018).

In a cohort of 19 female flies, all individuals turned toward these sound stimuli (Fig. 2D-F).

Turning was detectable as early as the second or third pip, and it persisted throughout the pip train.

After the offset of the pip train, some flies simply returned to walking straight, while others executed a compensatory turn that partially cancelled their deviation from their initial path. For example, in a fly where sound from the right had evoked rightward lateral deviation, sound offset often elicited a lateral deviation to the left (Fig. 2G). This compensatory behavior was observed in some but not all flies.


Figure 2: Lateralized sounds elicit phonotaxis as well as acoustic startle (A) A female fly is tethered to a pin and positioned on a spherical treadmill. An optical sensor monitors the forward and lateral velocity of the treadmill; these values are inverted to obtain the fly’s fictive velocity.

(B) Sound stimuli are delivered from speakers placed at different azimuthal angles. On a given trial, speakers are activated individually (not together).

(C) A sound stimulus similar to the pulse component of Drosophila courtship song. It consists of 10 pips with a carrier frequency of 225 Hz and an inter-pip interval of 34 ms (total duration 322 ms).

(D) Lateral and forward velocity over time. Each trace shows data for one fly averaged across trials (n=19 flies). Flies transiently decrease their forward velocity (“acoustic startle”, Lehnert et al., 2013) and then turn toward the sound source.

(E) Lateral velocity (measured at stimulus offset) is significantly different when the speakers are positioned at 45° versus -45° (p = 8 × 10^-6, t-test). Each dot is one fly, averaged across trials. Lines are mean ± SEM across flies.

(F) Trial-averaged paths (x and y displacements), one per fly.

(G) Two paths from (F), with sound offset indicated. These examples show how some flies make compensatory turns shortly after sound offset.



Behavioral responses were variable from trial to trial (Fig. 3A-B, Fig. S3). On some trials, flies did not turn, and on rare occasions they even turned in the "wrong" direction. Overall, however, lateral velocities were clearly shifted in the direction of the stimulus.

In addition to turning in response to sound, flies also tended to stop walking briefly after sound onset (Fig. 3B). In trial-averaged data, this appears as a decrease in forward velocity (Fig. 2D). This decrease in forward velocity can be dissociated from turning (Fig. S4), and so it is not a mere by-product of turning. On many individual trials, flies briefly stopped walking just after sound onset, and then resumed walking, often turning toward the sound as walking resumed (Fig. 3B). We interpret the initial pause as an "acoustic startle" behavior (Lehnert et al., 2013; Menda et al., 2011). The subsequent turn we call "phonotaxis".


Figure 3: Trial-to-trial variation in phonotaxis behavior. (A) Distribution of single-trial lateral velocity values (at stimulus offset), for five example flies. Histograms are normalized so they have the same area.

(B) Randomly selected examples of one typical fly's responses to sounds from the left (-45°). Periodic fluctuations in lateral velocity correspond to individual strides (Gaudry et al., 2013). Thick lines are the trial-averaged data for this fly.


Phonotaxis requires vibration of the distal antennal segment

Next, we examined the role of antennal movement in phonotaxis. The antenna has two mobile joints, a distal joint (a3-a2) and a proximal joint (a2-a1). The distal joint vibrates freely in response to sound and transmits these vibrations to Johnston’s organ neurons (Göpfert and Robert, 2002).

The proximal joint does not vibrate in response to sound; however, muscular control of the proximal joint can indirectly affect sound-induced vibrations of the distal joint. For example, a flying fly can use its antennal muscles to position an antenna so that it is more sensitive to the sound of the ipsilateral wing, thereby increasing the vibrational response of the antenna to that wing’s beating rhythm (Mamiya et al., 2011). More relevant to our experiments, a fly can also use its antennal muscles to change the angle of the antenna relative to the sound source (Mamiya et al., 2011), and this could alter sound-evoked antennal vibrations even if the antenna is not positioned appreciably closer to the sound source.

Therefore, we sought to test how both joints contribute to the auditory behaviors we are studying.

We divided sibling flies into three groups. In the first two groups, we used drops of glue to bilaterally immobilize the proximal joint or the distal joint, respectively (Fig. 4A). In the last group, we handled and cold-anesthetized the flies just as in the other two groups, but we did not immobilize the antennae ("sham-glued").

We found that eliminating voluntary movements of the antennae had no effect on sound-induced turning (Figs. 4A-C). It also had no effect on acoustic startle (Figs. 4A and 4D). Thus, neither behavior requires muscular control of the antennae.

By contrast, eliminating sound-evoked vibrations of the distal antennal segment completely abolished sound-evoked turning (Figs. 4A-C). It also abolished acoustic startle (Figs. 4A and 4D).

These results indicate that both behaviors are responses to sound-evoked vibrations of the distal antennal segment, and not responses to sound-evoked vibrations of the spherical treadmill (given that insects also have vibration sensors in their legs and/or tarsi; Fabre et al., 2012; Michelsen, 1992).


Figure 4: Phonotaxis requires vibration of the distal antennal segment. (A) Left: immobilizing the proximal antennal joint (the a2-a1 joint) does not eliminate phonotaxis or acoustic startle. Right: immobilizing the distal antennal joint (the a3-a2 joint) completely eliminates both behaviors. Speakers were positioned at 45° and -45°. In each plot, each trace shows data for one fly (left: n=5 flies; right: n=4 flies) averaged across trials. Blue dots in schematics represent glue drops. With regard to baseline forward velocity, note that both groups of manipulated flies are similar to unmanipulated flies (Fig. 2D).

(B) Trial-averaged paths (x and y displacements) for each fly.

(C) Lateral velocity toward the speaker, averaging data from the two speaker positions. Lateral velocity is significantly different when the distal joint is immobilized compared with either immobilizing the proximal joint or “sham glued” controls (p < 0.05 in both cases, Games-Howell test; sham data are a subset of the data from Figs. 2-3). When the proximal joint is immobilized, lateral velocity is not significantly different from “sham glued” controls. Each dot is one fly. Lines are mean ± SEM across flies.

(D) Decrease in forward velocity, averaging data from the two speaker positions. The decrease in forward velocity is significantly different when the distal joint is immobilized compared with either immobilizing the proximal joint or "sham glued" controls (p < 0.05 in both cases, Games-Howell test; sham data are a subset of the data from Figs. 2-3). When the proximal joint is immobilized, the decrease in forward velocity is not significantly different from "sham glued" controls.



Turning is contralateral to the antenna with larger vibrations

When a sound is in the front hemifield, it is the contralateral antenna that vibrates more (Morley et al., 2012). For example, a speaker at 45° should produce larger vibrations in the left antenna; conversely, a speaker at -45° should produce larger vibrations in the right antenna (Fig. 1D). We therefore hypothesized that the nervous system compares vibration amplitudes at the two antennae, and steers away from the antenna with the larger amplitude.

To test this idea, we asked what happens when the speaker is placed directly in front of the fly, but only one antenna is allowed to vibrate. We eliminated vibrations in the other antenna by immobilizing the distal antennal joint. Under these conditions, we found that every fly steered away from the intact antenna (Fig. 5A-C). This result supports the hypothesis that, in order to turn toward a sound, the fly turns away from the antenna with the larger vibration amplitude.

As an aside, we note that this rule – turning away from the antenna with the larger vibration amplitude – also occurs in flying Drosophila (Mamiya et al., 2011). However, in that case, the sound source is not an object in the external environment, but the fly’s own wing. When the antennae “hear” that the two wings are beating with asymmetric amplitudes, this drives a reflex that amplifies the wingbeat amplitude on the side where it is already larger. The proposed function of this reflex is to reinforce the fly’s own ongoing turning maneuver in flight.


Figure 5: Turning is contralateral to the antenna with larger vibrations. (A) Unilaterally deafening flies (by immobilizing the distal antennal joint) causes flies to turn away from the intact antenna when a sound is delivered from a speaker in front of the fly. In each plot, each trace shows trial-averaged data for one fly (n=5 flies per manipulation). Forward velocity also transiently decreases at stimulus onset (“acoustic startle”).

(B) Trial-averaged paths (x and y displacements) for each fly.

(C) Lateral velocity away from the intact antenna, combining data from the two manipulations (left and right deafening). These values are significantly different from trials in which no stimulus was delivered (p = 0.002, t-test). Each dot is one fly. Lines are mean ± SEM across flies.


Lateralized sounds arriving from the back elicit negative phonotaxis

We next asked what happens when sounds originate from behind the fly. We placed two speakers in the back hemifield, at 135° and -135°. These are the two positions in the back hemifield where auditory sensitivity is highest (Morley et al., 2012), and they are the two positions predominantly occupied by the wing of a singing male in the coordinate frame of a courted female (Morley et al., 2018). For comparison, we also placed two speakers in the front hemifield (at 45° and -45°).

We found that sounds arriving from front-right and back-left elicited indistinguishable right turns (45° and -135°, Fig. 6A-C). Conversely, sounds arriving from front-left and back-right elicited indistinguishable left turns (-45° and 135°). In other words, sounds in the front hemifield elicited positive phonotaxis, while sounds in the back hemifield elicited negative phonotaxis. All four stimuli elicited similar acoustic startle responses (Fig. 6A,D), suggesting that all four stimuli had similar perceived intensity.

This pattern of phonotaxis fits with the antenna's vibration amplitude tuning (Morley et al., 2012; Morley et al., 2018). Antennal vibration amplitudes do not change when the sound source moves from 45° to -135° (Fig. 6E). The finding that these two speaker positions elicit the same phonotaxis behavior is therefore evidence that phonotaxis depends on vibration amplitude cues alone.

It should be noted that vibration amplitude cues are not the only cues available for phonotaxis. When the speaker position moves from 45° to -135°, the direction (phase) of all antennal movements should be inverted (Fig. 6F), and in principle, this could have inverted the fly's behavior. Specifically, we might imagine a rule whereby the fly steers toward the first detectable vibration in either antenna. If the first detectable vibration was rightward for a speaker positioned at 45°, then the first detectable vibration would be leftward for a speaker positioned at -135° playing the same sound waveform. Our results imply that Drosophila phonotaxis is not guided by this type of vibration-direction rule. Phonotaxis can be most parsimoniously explained by vibration amplitude cues alone. That said, Drosophila might rely on vibration-direction cues in other contexts, given that there are neurons in the brain that keep track of vibration phase information (Azevedo and Wilson, 2017).


Figure 6: Lateralized sounds arriving from the back elicit negative phonotaxis. (A) Trial-averaged velocity responses to speakers placed on the diagonals (45°, -135°, -45°, 135°), grouped by fly. Flies turn right for the speakers at 45° and -135°, while they turn left for the speakers at -45° and 135°. All stimuli elicit acoustic startle. (Figs. 2-3 show data from the same flies, but for 45° and -45° stimuli alone.)

(B) Trial-averaged paths (x and y displacements) for each stimulus condition, grouped by fly.

(C) Lateral velocity. Each dot is one fly. Black bars are mean ± SEM across flies. Responses are significantly different for stimulus source locations that are 90° apart (p < 1 × 10^-6 for all comparisons between stimuli 90° apart, Tukey test), and not significantly different for stimulus source locations that are 180° apart (p > 0.9 for all comparisons between stimuli 180° apart, Tukey test).

(D) The decrease in forward velocity does not differ significantly across stimulus source locations (p = 0.668, likelihood ratio test).

(E) The amplitude of antennal vibrations is plotted in polar coordinates as a function of the azimuthal angle of the sound source (adapted from data in Morley et al., 2012).

(F) When the speaker position moves from 45° to -135°, the direction of all antennal movements will be inverted. Our results indicate that this inversion has no effect on phonotaxis.



Sounds from any of the four cardinal directions elicit no phonotaxis

Next, we tried delivering the same sounds from speakers at 90° or -90°. We found that both of these speaker locations elicited no phonotaxis (Fig. 7A,B). Importantly, flies were not deaf to these speaker locations, because both stimuli elicited acoustic startle behavior that was similar to the acoustic startle elicited by speakers at 45° and -45°, measured in the same flies.

We noticed that the speakers at 90° and -90° often evoked small turns, but a given fly typically made small turns in the same direction in both cases. For example, flies 1 and 2 made right turns to both the 90° stimulus and the -90° stimulus; these flies were also biased rightward in general (i.e. the turn towards the 45° stimulus was larger than the turn towards the -45° stimulus) (Fig. 7A,B). It seems likely that the small turns in response to the 90° and -90° stimuli were due to some idiosyncratic "handedness" in each fly, either biological handedness (Buchanan et al., 2015) or else a slight artifactual asymmetry in the way the fly interacted with the spherical treadmill apparatus. When a fly resumed walking after an acoustic startle response, this handedness evidently produced a small nonspecific bias in its walking behavior. The key point here is that no fly turned in opposite directions in response to the 90° and -90° stimuli – thus, turning was not guided by the position of the stimulus, meaning it was not phonotaxis.

In a separate set of flies, we compared responses to speakers at 0°, 45°, 90°, and 180°. As expected, flies always turned toward the 45° speaker. By contrast, 0°, 90°, and 180° did not elicit consistent turning. All three of the latter stimuli elicited either straight walking, or small idiosyncratic turns, with a given fly typically making these idiosyncratic turns in a reliable direction (Fig. 7C,D).

In summary, we found no phonotactic response to stimuli arriving from any of the cardinal directions (90°, -90°, 0°, and 180°); however, all these stimuli elicited an acoustic startle response, confirming that they are all audible (Fig. 7E-H). Why don't flies phonotax in response to sounds arriving from 90° or -90°? A speaker at 90° elicits equal left-right vibration amplitudes; the same is true for any stimulus arriving from a cardinal direction. What distinguishes these four stimuli is the direction (phase) of antennal vibrations. For example, speakers at 0° and 180° cause the antennae to move toward the midline at the same phase of the sound cycle. By contrast, speakers at 90° or -90° cause the antennae to move toward the midline at opposite phases of the sound cycle (Fig. 7I). Which antenna initially moves towards the midline will depend on whether the speaker is at 90° or -90°. Our results indicate that none of these differences matter for the behaviors we measured in our experiments. All that seems to matter is the amplitude of antennal vibration, and if left-right amplitudes are equal, there is no systematic tendency to turn relative to the sound source location.


Figure 7: Sounds from any of the four cardinal directions elicit no phonotaxis. (A) Trial-averaged velocity responses. The stimuli from the cardinal directions (90° and -90°) elicit no systematic turning related to speaker position. However, these sounds sometimes elicit turns in a fly-specific direction (flies 1 and 2 turn right in response to 90° and -90°, whereas fly 3 turns left). Note that all five flies in this cohort turn right in response to the 45° stimulus and left in response to the -45° stimulus. (Figs. 2-3 show data from the same flies, but for 45° and -45° stimuli alone.)

(B) Trial-averaged paths (x and y displacements) for each stimulus condition, grouped by fly.

(C-D) In separate flies, we confirmed that a 90° stimulus elicits behavior that is not systematically different from 0° or 180°. As before, individual flies make idiosyncratic turns in response to 90°, but this is generally similar to their response to 0° and 180°.

(E-F) Lateral velocity toward the speaker, and decrease in forward velocity, for the flies in (A-B). Data are combined for the two diagonal speakers (45° and -45°), and also for the two cardinal speakers (90° and -90°). Responses to the latter are not significantly different from zero (p = 0.932, t-test). The decrease in forward velocity for the diagonal and cardinal speakers is significantly different (p = 0.0314, t-test), although the effect size is small. Each dot is one fly. Lines are mean ± SEM across flies.

(G-H) Lateral velocity, and decrease in forward velocity, for the flies in (C-D). The 45° stimulus is significantly different from all others (p < 0.005 in all cases, Tukey test). Lateral velocities for 0°, 90° and 180° are not significantly different (p > 0.15 for all possible comparisons, Tukey test). Decreases in forward velocity are not significantly different (p = 0.848, likelihood ratio test).

(I) Schematized antennal phase relationships. Speakers at 0° and 180° cause the antennae to move toward the midline in phase, whereas speakers at 90° or -90° cause the antennae to move in opposite phases. Displacements are exaggerated for clarity.



Phonotaxis generalizes to sounds with diverse spectro-temporal features

Thus far, we have used a train of sound pips with a fixed carrier frequency (225 Hz). We initially selected this frequency because it is close to the dominant frequencies in Drosophila pulse song (Murthy, 2010). However, phonotaxis might have relevance for other situations, beyond courtship.

This idea motivated us to test a wider range of sound carrier frequencies (100, 140, 225, 300, and 800 Hz; Fig. 8A). In the same experiments, we also tried varying the temporal structure of the sound stimulus: in addition to delivering pips, we delivered sustained tones (322 ms in duration, the same duration as the pip trains; Fig. 8A). We used a particle velocity microphone to verify that all stimuli had the same intensity at the fly's location. All stimuli were delivered from two speaker positions, 45° and -45°.

We observed phonotaxis behavior in response to both pip trains and sustained tones, at every carrier frequency we tested (Fig. 8B-D). Every carrier frequency also elicited acoustic startle behavior (Fig. 8B,E). In general, behavioral responses were similar for all carrier frequencies. The one exception was 800 Hz, which evoked weaker responses.


Figure 8: Phonotaxis generalizes to sounds with diverse spectro-temporal features.

(A) Example stimuli: pip stimuli with a 225 Hz or 800 Hz carrier frequency, and sustained tone stimuli with a 225 Hz or 800 Hz carrier frequency.

(B) Velocity responses to pip stimuli (left) and sustained tone stimuli (right) from speakers placed at 45° and -45°. Data were averaged across trials for each fly, and then averaged across flies (n=5 flies). See (C) for color codes. The data from the 225 Hz pip stimuli are also shown in Figs. 2-3.

(C) Paths (x and y displacements) for each stimulus condition. Data were averaged across trials for each fly, and then averaged across flies.

(D) Lateral velocity toward speaker, averaging data from the two speaker positions. Each dot is one fly. Lines are mean ± SEM across flies.

(E) Decrease in forward velocity.



Both males and females display phonotaxis

Courtship is one potential natural situation where phonotaxis would be relevant. In the context of courtship, females listen to male song (Hall, 1994), but males also potentially listen to the songs of nearby males (Boekhoff-Falk and Eberl, 2014; Tauber and Eberl, 2002). This motivated us to compare the behavior of males and females.

We returned to our standard sound stimulus (10 pips at 225 Hz, with an inter-pip interval of 34 ms), and we again positioned speakers at 45° and -45°. We found phonotaxis was similar in males and females (Fig. 9A-C), as was acoustic startle behavior (Fig. 9A,D). Thus, these behaviors are not sex-specific.


Figure 9: Both males and females display phonotaxis. (A) Trial-averaged velocity responses to sounds from 45° and -45° for males (left, n=5 flies) and females (right, n=5 flies, a subset of the data from Fig. 6 which is also shown in Figs. 2-3).

(B) Trial-averaged paths (x and y displacements) for each fly.

(C) Lateral velocity toward the speaker, averaging data from the two speaker positions. Each dot is one fly. Lines are mean ± SEM across flies.

(D) Decrease in forward velocity, averaging data from the two speaker positions.


Discussion

Hearing with one ear

If one ear is transiently plugged in a normal human subject, the subject will consistently mislocalize sounds to the side of the intact ear (Middlebrooks, 2015). The same occurs in crickets and grasshoppers (Moiseff et al., 1978; Ronacher et al., 1986). Crickets and grasshoppers, like humans, have tympanal ears. The tympanum closer to the sound vibrates with larger amplitude and/or leading phase. Thus, in order to orient toward a sound, humans and crickets should turn toward the ear with the larger (and/or leading) response. When one ear is blocked, this rule produces turning toward the intact ear.

By contrast, in unilaterally-deafened Drosophila, we observed the opposite reaction: flies turned away from the intact side. This tells us that Drosophila use a flipped rule: they turn away from the auditory organ with the larger response. This rule makes sense for Drosophila, because they have flagellar rather than tympanal auditory organs, where each auditory organ is optimally stimulated by the contralateral front hemifield (Morley et al., 2012; Morley et al., 2018). In short, the flip in auditory mechanics likely explains the flipped outcome of the unilateral deafening experiment.

Ambiguities in binaural cues

Even when both ears are functional, it is still possible to find systematic errors in sound localization. For example, in vertebrates, every azimuthal location in the front hemifield maps onto another location in the back hemifield that elicits the same inter-aural cues. This can cause front-back ambiguities in perception when the stimulus is a low-frequency pure tone (Rayleigh, 1876; Schnupp et al., 2011).


In Drosophila, we would not expect to find front-back ambiguity, because the relevant cues are not symmetric in the front and back. Instead, we would predict a different type of ambiguity. Each antenna has a vibration amplitude tuning curve which is symmetrical about both azimuthal diagonals. Thus, any pair of vibration amplitudes maps onto a set of azimuthal locations which are reflections across the diagonals. For example, the same pair of vibration amplitudes maps to 45° and also to its reflection across the diagonals (-135°), and accordingly we found indistinguishable behavioral responses to these two stimulus locations. Similarly, 0° reflects to 90° (across one diagonal), -90° (across the other diagonal), and 180° (across both diagonals), and again we observed the same behavioral responses to all four of these speaker locations. These findings support the conclusion that antennal vibration amplitudes are the physical cues that specify phonotaxis behavior.
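These reflections can be written down explicitly: reflecting an azimuthal angle θ across the +45° diagonal gives 90° − θ, reflecting across the −45° diagonal gives −90° − θ, and applying both gives θ + 180°. A short R check of the ambiguity sets described above (angles in degrees, wrapped to (−180°, 180°]):

reflections <- function(theta) {
  wrap <- function(a) { a <- a %% 360; ifelse(a > 180, a - 360, a) }
  wrap(c(90 - theta, -90 - theta, theta + 180))
}
reflections(45)   # 45 -135 -135 : 45 deg shares its amplitude cues only with -135 deg
reflections(0)    # 90  -90  180 : 0 deg shares its cues with 90, -90, and 180 deg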

Fine discrimination of nearby sound source locations

High-acuity discrimination of nearby sound source positions has been well-documented in crickets. These insects can localize sound sources with an azimuthal precision close to 10° (Bailey and Thomson, 1977; Latimer and Lewis, 1986; Pollack, 1982). In this regard, the fly Ormia ochracea is a particular virtuoso: walking Ormia can orient toward sound sources with a precision as fine as 1° (Mason et al., 2001).

In the future, it will be interesting to investigate whether Drosophila can also discriminate between sound source locations separated by these small angles. However, successful phonotaxis should not require a fly to precisely identify a sound source location. When a walking insect encounters a lateralized sound, it can simply turn toward the sound until it is no longer lateralized (Bailey and Stephen, 1984). As long as the fly can detect small deviations from the midline in sound source position, it does not need to precisely localize the sound in order to approach it.


Sound versus wind

Drosophila sense the particle velocity component of a sound wave – i.e., the movement of air that accompanies each sound cycle (Göpfert and Robert, 2002; Robert and Hoy, 2007). Wind is also simply the movement of air. However, there are three key differences between sound and wind. First, air particle velocities are lower in sound: a female fly experiences an air speed on the order of 0.1 cm/s as she listens to a male courtship song (Bennet-Clark, 1971; Morley et al., 2018), whereas air speeds more than 100× larger are typical of atmospheric conditions in natural environments where Drosophila are active (Budick and Dickinson, 2006). Second, wind is a spectrally-broadband non-harmonic stimulus, whereas sound stimuli are narrowband and typically harmonic (Robert and Hoy, 2007). Third, steady wind will produce a large sustained displacement of the antennae, whereas sound produces zero net displacement of the antennae. A corollary of this last point is that wind generates substantial bulk displacement of air particles, whereas sound does not.

These differences between sound and wind are evidently decisive, because walking Drosophila treat wind and sound differently. Here we show that walking Drosophila make systematic turns in response to sounds. However, walking Drosophila do not make systematic turns in response to wind, except when odor is present (Álvarez-Salvado et al., 2018; Bell and Wilson, 2016; Steck et al., 2012). Thus, the fly’s behavior clearly discriminates sound from wind. Johnston’s organ neurons also discriminate sound from wind (Yorozu et al., 2009), although it should be noted that many neurons participate in encoding both kinds of stimuli (Mamiya and Dickinson, 2015; Patella and Wilson, 2018).


Phonotaxis in courtship

During courtship, phonotaxis could help a male locate a female. Specifically, a male may turn toward the song of a competing male who is standing near a female (Boekhoff-Falk and Eberl, 2014). A male may also turn toward the sounds that females make during courtship (Ejima and Griffith, 2008; Ewing and Bennet-Clark, 1968).

On the other hand, phonotaxis may cause females to turn in response to male song. The female is generally facing away from a pursuing male (Hall, 1994; Morley et al., 2018). We find that the generic behavioral response to a sound in the back hemifield is to turn away from the sound. Turning away would fit the observed trend toward "female coyness" during courtship: even virgin females typically display continual mild rejection behaviors in response to male pursuit (Hall, 1994). A male will often initiate song dozens of times before copulation begins (Zhang et al., 2016). Female coyness ensures that females mate only with males who are fit enough to maintain pursuit in the face of mild rejection. Ultimately, song tends to cause virgin females to slow down, but only if the male sings for a long time (Talyn and Dowse, 2004; Coen et al., 2014). In short, negative phonotaxis to sound sources in the back hemifield may not simply be a "perceptual error": it may be an adaptive trait causing females to select fitter males.

Phonotaxis in exploration

Drosophila have a set of basic rules for exploring arbitrary visual objects. For example, one rule is to prioritize close visual objects over distant ones (Götz, 1994; Schuster, 1996; Schuster et al., 2002). Another rule is to orient toward visual objects in the front hemifield, while ignoring visual objects in the back hemifield (Horn and Wehner, 1975), or turning away from objects in the back hemifield (Mronz and Strauss, 2008). If objects behind the fly were not deprioritized, then the fly could become permanently "captured" by any object it approached (Bülthoff et al., 1982). Thus, the "front-not-back" rule promotes visual exploration, because it allows the fly to avoid recapture by an unrewarding object it has just turned away from.

Our results suggest a similarity between visually-guided walking and sound-guided walking. Namely, we show that flies turn toward sound objects in the front hemifield, but they turn away from sound objects in the back hemifield. Thus, vision and hearing both use a simple "front-not-back" rule. The potential utility of this rule is the same in both cases: it allows the fly to avoid being recaptured by an unrewarding object it has just turned away from. This again makes the point that negative phonotaxis to sound sources behind the fly may not simply be an "error", because it may be adaptive in some situations.

Neural basis of phonotaxis

In crickets, inter-aural comparisons begin at the level of cells postsynaptic to peripheral auditory afferents. These cells receive antagonistic input from the two ears (Selverston et al., 1985). By analogy, we might imagine that inter-antennal comparisons could occur at the very first stage of auditory processing in the Drosophila brain. Indeed, the first auditory relay in the Drosophila brain contains many interhemispheric projections. This relay is called the antennal mechanosensory and motor center, or AMMC (Matsuo et al., 2016).

However, a recent pan-neuronal calcium imaging study showed that the AMMC is unresponsive to vibration of the contralateral antenna; rather, AMMC vibration responses are strictly unilateral (Patella and Wilson, 2018). By contrast, vibration responses in the brain's secondary auditory center (the wedge) are driven by both ipsi- and contralateral antennae. This result suggests that inter-antennal vibration comparisons might begin within the brain's secondary auditory center.


An interesting – and complicating – consideration is that the mechanical resonant frequency of the antennae depends on stimulus intensity (Göpfert and Robert, 2002). Recall that rotating the azimuthal angle of a sound source generally produces anticorrelated changes in the effective sound intensity at the two antennae (Figure 1D; Morley et al., 2012). Therefore, rotating the azimuthal angle of a sound source should generally produce anticorrelated changes in the frequency tuning of the two antennae (Morley et al., 2018). Future work will be needed to understand how this might affect the neural implementation of inter-antennal vibration comparisons.

From phonotaxis to navigation

Ultimately, sound localization cues must be integrated with other sensory cues that provide spatial guidance for walking flies. These guidance cues include visual objects (Horn and Wehner, 1975; Robie et al., 2010; Schuster et al., 2002), global visual motion signals (Götz and Wenking, 1973; Katsov and Clandinin, 2008; Strauss et al., 1997), tactile guidance cues (Ramdya et al., 2015), wind direction cues (Bell and Wilson, 2016; Steck et al., 2012), and instantaneous samples of olfactory spatial gradients (Borst and Heisenberg, 1982; Gaudry et al., 2013).

Meanwhile, sensory guidance cues must also be integrated with the fly's internal representation of its heading direction state (Seelig and Jayaraman, 2015). Ultimately, steering decisions must be governed by flexible "policies" dictating the current preferred heading direction, depending on idiothetic coordinates (Kim and Dickinson, 2017; Neuser et al., 2008; Strauss and Pichler, 1998) and the prioritization of guidance cues (Bülthoff et al., 1982; Robie et al., 2017; Schuster et al., 2002). Describing the contributions of individual sensory guidance cues is a step toward understanding navigation as a whole.


References

Álvarez-Salvado, E., Licata, A., Connor, E. G., McHugh, M. K., King, B. M. N., Stavropoulos, N., Crimaldi, J. P. and Nagel, K. I. (2018). Elementary sensory-motor transformations underlying olfactory navigation in walking fruit-flies. bioRxiv, doi: http://dx.doi.org/10.1101/307660.

Ashida, G. and Carr, C. E. (2011). Sound localization: Jeffress and beyond. Curr. Opin. Neurobiol. 21, 745-51.

Atkins, G., Ligman, S., Burghardt, F. and Stout, J. F. (1984). Changes in phonotaxis by the female cricket Acheta domesticus L. after killing identified acoustic interneurons. J. Comp. Physiol. [A]. 154, 795-804.

Atkins, G. and Pollack, G. S. (1987). Response properties of prothoracic, interganglionic, sound-activated interneurons in the cricket Teleogryllus oceanicus. J. Comp. Physiol. [A]. 161, 681-693.

Azevedo, A. W. and Wilson, R. I. (2017). Active mechanisms of vibration encoding and frequency filtering in central mechanosensory neurons. Neuron 96, 446-460.e9.

Bailey, W. J. and Stephen, R. O. (1984). Auditory acuity in the orientation behaviour of the bushcricket Pachysagella australis Walker (Orthoptera, Tettigoniidae, Saginae). Anim. Behav. 32, 816-829.

Bailey, W. J. and Thomson, P. (1977). Acoustic orientation in the cricket Teleogryllus oceanicus (Le Guillou). J. Exp. Biol. 67, 61-75.

Bell, J. S. and Wilson, R. I. (2016). Behavior reveals selective summation and max pooling among olfactory processing channels. Neuron 91, 425-38.

Bennet-Clark, H. C. (1971). Acoustics of insect song. Nature 234, 255-259.

Boekhoff-Falk, G. and Eberl, D. F. (2014). The Drosophila auditory system. WIREs Dev. Biol. 3, 179-91.

Borst, A. and Heisenberg, M. (1982). Osmotropotaxis in Drosophila melanogaster. J. Comp. Physiol. [A]. 147, 479-484.

Brodfuehrer, P. D. and Hoy, R. R. (1990). Ultrasound sensitive neurons in the cricket brain. J. Comp. Physiol. [A]. 166, 651-62.

Buchanan, S. M., Kain, J. S. and de Bivort, B. L. (2015). Neuronal control of locomotor handedness in Drosophila. Proc. Natl. Acad. Sci. USA 112, 6700-5.

Budick, S. A. and Dickinson, M. H. (2006). Free-flight responses of Drosophila melanogaster to attractive odors. J. Exp. Biol. 209, 3001-17.

Bülthoff, H., Götz, K. and Herre, M. (1982). Recurrent inversion of visual orientation in the walking fly, Drosophila melanogaster. J. Comp. Physiol. [A]. 148, 471-481.


Bussell, J. J., Yapici, N., Zhang, S. X., Dickson, B. J. and Vosshall, L. B. (2014). Abdominal-B neurons control Drosophila virgin female receptivity. Curr. Biol. 24, 1584-1595.

Clemens, J., Girardin, C. C., Coen, P., Guan, X.-J., Dickson, B. J. and Murthy, M. (2015). Connecting neural codes with behavior in the auditory system of Drosophila. Neuron 89, 629-44.

Coen, P., Clemens, J., Weinstein, A. J., Pacheco, D. A., Deng, Y. and Murthy, M. (2014). Dynamic sensory cues shape song structure in Drosophila. Nature 507, 233-7.

Effertz, T., Wiek, R. and Göpfert, M. C. (2011). NompC TRP channel is essential for Drosophila sound receptor function. Curr. Biol. 21, 592-7.

Ejima, A. and Griffith, L. C. (2008). Courtship initiation is stimulated by acoustic signals in Drosophila melanogaster. PLoS One 3, e3246.

Ewing, A. W. and Bennet-Clark, H. C. (1968). The courtship songs of Drosophila. Behaviour 31, 288-301.

Fabre, C. C., Hedwig, B., Conduit, G., Lawrence, P. A., Goodwin, S. F. and Casal, J. (2012). Substrate-borne vibratory communication during courtship in Drosophila melanogaster. Curr. Biol. 22, 2180-5.

Frye, M. A. and Dickinson, M. H. (2004). Motor output reflects the linear superposition of visual and olfactory inputs in Drosophila. J. Exp. Biol. 207, 123-31.

Gaudry, Q., Hong, E. J., Kain, J., de Bivort, B. and Wilson, R. I. (2013). Asymmetric neurotransmitter release at primary afferent enables rapid odor lateralization in Drosophila. Nature 493, 424-8.

Göpfert, M. C. and Robert, D. (2002). The mechanical basis of Drosophila audition. J. Exp. Biol. 205, 1199-208.

Götz, K. (1994). Exploratory strategies in Drosophila. In Neural basis of behavioral adaptations, Fortschritte der Zoologie (eds. K. Schildberger and N. Elsner), pp. 47-59. Stuttgart: Fischer.

Götz, K. G. and Wenking, H. (1973). Visual control of locomotion in the walking fruitfly Drosophila. J. Comp. Physiol. [A]. 85, 235-266.

Grothe, B., Pecka, M. and McAlpine, D. (2010). Mechanisms of sound localization in mammals. Physiol. Rev. 90, 983-1012.

Hall, J. C. (1994). The mating of a fly. Science 264, 1702-14.

Hedwig, B. and Poulet, J. F. (2004). Complex auditory behaviour emerges from simple reactive steering. Nature 430, 781-5.

Horn, E. and Wehner, R. (1975). The mechanism of visual pattern fixation in the walking fly, Drosophila melanogaster. J. Comp. Physiol. [A]. 101, 39-56.


Horseman, G. and Huber, F. (1994a). Sound localisation in crickets. I. Contralateral inhibition of an ascending auditory interneuron (AN1) in the cricket Gryllus bimaculatus. J. Comp. Physiol. [A]. 175, 389-398.

Horseman, G. and Huber, F. (1994b). Sound localization in crickets. II. Modeling the role of a simple neural network in the prothoracic ganglion. J. Comp. Physiol. [A]. 175, 399-413.

Kamikouchi, A., Inagaki, H. K., Effertz, T., Hendrich, O., Fiala, A., Göpfert, M. C. and Ito, K. (2009). The neural basis of Drosophila gravity-sensing and hearing. Nature 458, 165-71.

Katsov, A. Y. and Clandinin, T. R. (2008). Motion processing streams in Drosophila are behaviorally specialized. Neuron 59, 322-335.

Kim, I. S. and Dickinson, M. H. (2017). Idiothetic path integration in the fruit fly Drosophila melanogaster. Curr. Biol. 27, 2227-2238.e3.

Kinsler, L. E. and Frey, A. R. (1962). Fundamentals of acoustics. New York: Wiley.

Latimer, W. and Lewis, D. B. (1986). Song harmonic content as a parameter determining acoustic orientation behavior in the cricket Teleogryllus oceanicus (Le Guillou). J. Comp. Physiol. [A]. 158, 583-591.

Lehnert, B. P., Baker, A. E., Gaudry, Q., Chiang, A. S. and Wilson, R. I. (2013). Distinct roles of TRP channels in auditory transduction and amplification in Drosophila. Neuron 77, 115-28.

Mamiya, A. and Dickinson, M. H. (2015). Antennal mechanosensory neurons mediate wing motor reflexes in flying Drosophila. J. Neurosci. 35, 7977-91.

Mamiya, A., Straw, A. D., Tomasson, E. and Dickinson, M. H. (2011). Active and passive antennal movements during visually guided steering in flying Drosophila. J. Neurosci. 31, 6900-14.

Marsat, G. and Pollack, G. S. (2005). Effect of the temporal pattern of contralateral inhibition on sound localization cues. J. Neurosci. 25, 6137-44.

Mason, A. C., Oshinsky, M. L. and Hoy, R. R. (2001). Hyperacute directional hearing in a microscale auditory system. Nature 410, 686-90.

Matsuo, E., Seki, H., Asai, T., Morimoto, T., Miyakawa, H., Ito, K. and Kamikouchi, A. (2016). Organization of projection neurons and local neurons of the primary auditory center in the fruit fly Drosophila melanogaster. J Comp. Neurol. 524, 1099-164.

Menda, G., Bar, H. Y., Arthur, B. J., Rivlin, P. K., Wyttenbach, R. A., Strawderman, R. L. and Hoy, R. R. (2011). Classical conditioning through auditory stimuli in Drosophila: methods and models. J. Exp. Biol. 214, 2864-70.

Michelsen, A. (1992). Hearing and sound communication in small animals: evolutionary adaptations to the laws of physics. In The Evolutionary Biology of Hearing (eds. D. B. Webster, R. R. Fay and A. N. Popper), pp. 61-77. New York: Springer-Verlag.


Middlebrooks, J. C. (2015). Sound localization. Handb. Clin. Neurol. 129, 99-116.

Miles, R. N., Robert, D. and Hoy, R. R. (1995). Mechanically coupled ears for directional hearing in the parasitoid fly Ormia ochracea. J. Acoust. Soc. Am. 98, 3059-70.

Moiseff, A., Pollack, G. S. and Hoy, R. R. (1978). Steering responses of flying crickets to sound and ultrasound: Mate attraction and predator avoidance. Proc. Natl. Acad. Sci. USA 75, 4052-6.

Molina, J. and Stumpner, A. (2005). Effects of pharmacological treatment and photoinactivation on the directional responses of an insect neuron. J. Exp. Zool. A Comp. Exp. Biol. 303, 1085-103.

Morley, E. L., Steinmann, T., Casas, J. and Robert, D. (2012). Directional cues in Drosophila melanogaster audition: structure of acoustic flow and inter-antennal velocity differences. J. Exp. Biol. 215, 2405-13.

Morley, E. L., Jonsson, T. and Robert, D. (2018). Auditory sensitivity, spatial dynamics, and amplitude of courtship song in Drosophila melanogaster. J. Acoust. Soc. Am. 144, 734-739.

Mronz, M. and Strauss, R. (2008). Visual motion integration controls attractiveness of objects in walking flies and a mobile robot. In 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3559-3564. Nice, France: IEEE.

Murthy, M. (2010). Unraveling the auditory system of Drosophila. Curr. Opin. Neurobiol. 20, 281-7.

Neuser, K., Triphan, T., Mronz, M., Poeck, B. and Strauss, R. (2008). Analysis of a spatial orientation memory in Drosophila. Nature 453, 1244-7.

Patella, P. and Wilson, R. I. (2018). Functional maps of mechanosensory features in the Drosophila brain. Curr. Biol. 28, 1189-1203.e5.

Pollack, G. (1982). Sexual differences in cricket calling song recognition. J. Comp. Physiol. [A]. 146, 217-221.

Ramdya, P., Lichocki, P., Cruchet, S., Frisch, L., Tse, W., Floreano, D. and Benton, R. (2015). Mechanosensory interactions drive collective behaviour in Drosophila. Nature 519, 233-6.

Rayleigh, L. (1876). On our perception of the direction of a source of sound. Proceedings of the Musical Association 2nd sess., 75-84.

Rayleigh, L. (1907). On our perception of sound direction. Philos. Mag. 13, 214-32.

Rheinlaender, J. and Römer, H. (1980). Bilateral coding of sound direction in the CNS of the bushcricket Tettigonia viridissima L. (Orthoptera, Tettigoniidae). J. Comp. Physiol. [A]. 140, 101-111.


Robert, D. (2005). Directional hearing in insects. In Sound Source Localization, Springer Handbook of Auditory Research, vol. 25 (eds. A. N. Popper and R. R. Fay), pp. 6-35. New York: Springer.

Robert, D. and Hoy, R. R. (1998). The evolutionary innovation of tympanal hearing in Diptera. In Comparative Hearing: Insects (eds. R. R. Hoy, A. N. Popper and R. R. Fay), pp. 197-227. New York: Springer-Verlag.

Robert, D. and Hoy, R. R. (2007). Auditory systems in insects. In Invertebrate Neurobiology, (ed. G. North), pp. 155-184. Cold Spring Harbor, NY: Cold Spring Harbor Laboratory Press.

Robert, D., Miles, R. N. and Hoy, R. R. (1998). Tympanal mechanics in the parasitoid fly Ormia ochracea: intertympanal coupling during mechanical vibration. J. Comp. Physiol. [A]. 183, 443-452.

Robie, A. A., Hirokawa, J., Edwards, A. W., Umayam, L. A., Lee, A., Phillips, M. L., Card, G. M., Korff, W., Rubin, G. M., Simpson, J. H. et al. (2017). Mapping the neural substrates of behavior. Cell 170, 393-406.e28.

Robie, A. A., Straw, A. D. and Dickinson, M. H. (2010). Object preference by walking fruit flies, Drosophila melanogaster, is mediated by vision and graviperception. J. Exp. Biol. 213, 2494-506.

Ronacher, B., von Helversen, D. and von Helversen, O. (1986). Routes and stations in the processing of auditory directional information in the CNS of a grasshopper, as revealed by surgical experiments. J. Comp. Physiol. [A]. 158, 363-374.

Salcedo, E., Huber, A., Henrich, S., Chadwell, L. V., Chou, W. H., Paulsen, R. and Britt, S. G. (1999). Blue- and green-absorbing visual pigments of Drosophila: ectopic expression and physiological characterization of the R8 photoreceptor cell-specific Rh5 and Rh6 rhodopsins. J. Neurosci. 19, 10716-26.

Schildberger, K. and Hörner, M. (1988). The function of auditory neurons in cricket phonotaxis. I. Influence of hyperpolarization of identified neurons on sound localization. J. Comp. Physiol. [A]. 163, 621-631.

Schildberger, K. and Kleindienst, H.-U. (1989). Sound localization in intact and one-eared crickets. J. Comp. Physiol. [A]. 165, 615-626.

Schmitz, B., Scharstein, H. and Wendler, G. (1982). Phonotaxis in Gryllus campestris L. (Orthoptera, Gryllidae). J. Comp. Physiol. [A]. 148, 431-444.

Schnupp, J., Nelken, I. and King, A. (2011). Neural basis of sound localization. In Auditory neuroscience: making sense of sound, pp. 177-221. Cambridge, MA: MIT Press.

Schuster, S. (1996). Objektbezogene Suchstrategien bei der Fliege Drosophila. Ph.D. thesis. University of Tübingen.

Schuster, S., Strauss, R. and Götz, K. G. (2002). Virtual-reality techniques resolve the visual cues used by fruit flies to evaluate object distances. Curr. Biol. 12, 1591-4.

Seelig, J. D. and Jayaraman, V. (2015). Neural dynamics for landmark orientation and angular path integration. Nature 521, 186-91.

Selverston, A. I., Kleindienst, H. U. and Huber, F. (1985). Synaptic connectivity between cricket auditory interneurons as studied by selective photoinactivation. J. Neurosci. 5, 1283-92.

Steck, K., Veit, D., Grandy, R., Badia, S. B., Mathews, Z., Verschure, P., Hansson, B. S. and Knaden, M. (2012). A high-throughput behavioral paradigm for Drosophila olfaction - The Flywalk. Sci. Rep. 2, 361.

Strauss, R. and Pichler, J. (1998). Persistence of orientation toward a temporarily invisible landmark in Drosophila melanogaster. J. Comp. Physiol. [A]. 182, 411-23.

Strauss, R., Schuster, S. and Götz, K. G. (1997). Processing of artificial visual feedback in the walking fruit fly Drosophila melanogaster. J. Exp. Biol. 200, 1281-96.

Talyn, B. C. and Dowse, H. B. (2004). The role of courtship song in sexual selection and species recognition by female Drosophila melanogaster. Anim. Behav. 68, 1165-1180.

Tauber, E. and Eberl, D. F. (2002). The effect of male competition on the courtship song of Drosophila melanogaster. J. Insect Behav. 15, 109-120.

Yorozu, S., Wong, A., Fischer, B. J., Dankert, H., Kernan, M. J., Kamikouchi, A., Ito, K. and Anderson, D. J. (2009). Distinct sensory representations of wind and near-field sound in the Drosophila brain. Nature 458, 201-5.

Zhang, S. X., Rogulja, D. and Crickmore, M. A. (2016). Dopaminergic circuitry underlying mating drive. Neuron 91, 168-81.


CHAPTER 3: CONCLUSIONS AND FUTURE DIRECTIONS

In the introduction to this thesis I discussed some of the open questions concerning the algorithms that animals use to localize sounds and the neural circuits that implement these algorithms. In this discussion, I will consider how we might address some of those open questions using Drosophila as a model system, now that we have some understanding of sound localization behavior in Drosophila. I will also discuss some follow-up behavioral experiments that could expand our understanding of sound localization in Drosophila.

A discussion of open questions about sound localization

What is the range of possible mechanisms for implementing sound localization?

Our behavioral data suggest that Drosophila determine the azimuthal angle of a sound source by comparing only the amplitude of movement of the two antennae. We did not find evidence that Drosophila use phase cues to locate sound sources, but it is possible that phase cues are used in contexts other than those tested in this study. The comparison of the amplitudes of movement of the two antennae could be implemented in several different ways. One factor that influences how comparisons are implemented is how the amplitude of antennal movement is encoded in neural activity. The amplitude of antennal movement could be encoded in the rate or latency of spiking within particular JONs, or in the population activity of JONs. The encoding by JONs may then be transformed before comparisons are made: for example, a spike rate code in JONs might be transformed into a latency code in second-order neurons. The subsequent comparison could then be made via spike rate comparisons, spike timing comparisons, or a more complex pattern-matching algorithm.


Finding neurons that perform comparisons

How might we find ‘comparator neurons’ that perform comparisons between the two antennae in order to encode the azimuthal angle of a sound source? There are several possible approaches.

One approach would be to perform a behavioral genetics screen in which neurons are activated or inactivated and the effect of that manipulation on phonotaxis is assessed. The problem with this approach is that it is hard to distinguish manipulations that simply deafen the fly or disrupt proper motor control from manipulations that affect comparator neurons.

A second approach, which would be possible once the full connectome of the fly is available, would be to identify possible comparator neurons based on their connectivity. We would look for neurons that receive input, possibly indirectly, from both antennae. However, this approach has two drawbacks. First, it is often difficult to match up neurons found using EM reconstruction with genetically identified neurons imaged with light microscopy. Second, this approach is likely to return many neurons that turn out not to be comparator neurons. To demonstrate this, consider the AMMC (the brain area that receives input from JONs). There are many commissural connections between the AMMC regions in each hemisphere, so by looking at the connectome we would identify possible comparator neurons in the AMMC (Chiang et al., 2011; Kamikouchi et al., 2006). However, a recent study using pan-neuronal imaging has shown that activity in the AMMC correlates only with movement of the ipsilateral antenna (Patella and Wilson, 2018). Note that the activity seen in the AMMC in that part of the study did not come from JONs, because the calcium indicator was expressed in all neurons except JONs. The same study found that activity in the WED (a brain region postsynaptic to the AMMC) did correlate with movement of both the ipsilateral and the contralateral antenna (Patella and Wilson, 2018). Thus, the WED may be the first site where there is substantial integration of information from left and right JONs.


A third approach is to perform a physiology screen using either calcium imaging or electrophysiology. We expect comparator neurons to encode the azimuthal angle of a sound source at least to some extent independently of other parameters of the stimulus, such as intensity. Note that this is not the case for JONs. JON activity is dependent on the azimuthal angle of a sound source in the same way that the movement of the antenna is. A quiet sound from a preferred direction (i.e. perpendicular to the long axis of the antenna) can evoke the same activity in JONs as a loud sound coming from a non-preferred direction, as long as these sounds evoke the same amplitude of movement of the ipsilateral antenna. Comparator neurons avoid this intensity dependence by comparing the movement of the two antennae. Therefore, one way to look for comparator neurons is to look for neurons that have similar tuning to the azimuthal angle of a sound source across a range of intensities. We can identify these neurons by recording their activity while delivering sounds from a range of angles with a range of intensities.
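
A minimal sketch of how such a screen might be analyzed (Python/NumPy, with hypothetical response arrays and an illustrative threshold): a comparator should produce nearly the same azimuth tuning curve, not merely the same tuning shape, across intensities, whereas a JON-like unit inherits the antenna's intensity dependence.

    import numpy as np

    def is_candidate_comparator(responses, max_dev=0.2):
        """responses: shape (n_intensities, n_azimuths), one azimuth
        tuning curve per tested intensity. Flag the neuron if the rows
        deviate little from their mean, relative to the overall response
        range (the threshold is illustrative)."""
        responses = np.asarray(responses, dtype=float)
        span = responses.max() - responses.min() + 1e-12
        dev = np.abs(responses - responses.mean(axis=0)).max()
        return bool(dev / span < max_dev)

    azimuths = np.deg2rad(np.arange(-180, 181, 45))
    intensities = np.array([0.5, 1.0, 2.0])
    # A JON-like unit scales with intensity...
    jon_like = intensities[:, None] * np.abs(np.cos(azimuths - np.pi / 4))
    # ...whereas an idealized comparator reports azimuth alone.
    comparator_like = np.tile(np.cos(azimuths), (len(intensities), 1))
    print(is_candidate_comparator(jon_like))         # False
    print(is_candidate_comparator(comparator_like))  # True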

Investigating the neural mechanisms of comparisons

If comparator neurons are found, it will be interesting to investigate the mechanism by which comparisons are implemented. For example, if the amplitude of antennal movement is encoded via a rate code, comparator neurons might integrate excitation from one antenna and inhibition from the other. Note that for the encoding of azimuthal angle by this neuron to be intensity-independent, the inhibitory input must have a divisive effect.
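
A worked toy example of this point (Python, with arbitrary rates): if excitation and inhibition both scale linearly with overall intensity, a subtractive comparator's output grows with intensity, whereas a divisive comparator reads out the amplitude ratio, which depends only on azimuth.

    def comparator(a_ipsi, a_contra, intensity, mode):
        """Toy rate-coded comparator; inputs scale linearly with
        intensity. 'subtractive' computes E - I; 'divisive' normalizes
        E by the total drive, cancelling the common intensity factor."""
        e = intensity * a_ipsi    # excitation from one antenna
        i = intensity * a_contra  # inhibition from the other antenna
        if mode == "subtractive":
            return max(e - i, 0.0)
        return e / (e + i + 1e-12)

    for intensity in [1.0, 4.0]:
        print(intensity,
              comparator(0.9, 0.3, intensity, "subtractive"),
              round(comparator(0.9, 0.3, intensity, "divisive"), 3))
    # Subtractive output grows fourfold (0.6 -> 2.4) when intensity
    # quadruples; divisive output stays at 0.75 at both intensities.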

On the other hand, if the magnitude of antennal movement is encoded with a latency code, comparator neurons might receive temporally precise excitation and inhibition from the two antennae. If the delay length for the two antennae were identical, excitation would only precede inhibition if the amplitude of movement of the 'excitatory' antenna was larger than that of the 'inhibitory' antenna, so that JONs spiked earlier on the excitatory side. In this way, sound location could be encoded by whether excitation was able to 'avoid' inhibition.
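
The 'race' version of this idea can be sketched as follows (Python; the latency function and delays are hypothetical): spike latency decreases with antennal vibration amplitude, and the comparator fires only if excitation arrives before inhibition.

    import numpy as np

    def spike_latency_ms(amplitude):
        """Hypothetical latency code: larger antennal vibrations evoke
        earlier spikes (constants are illustrative)."""
        return 5.0 + 10.0 * np.exp(-3.0 * amplitude)

    def comparator_fires(a_excitatory, a_inhibitory, delay_ms=2.0):
        """With identical conduction delays on both paths, the race is
        decided purely by which antenna vibrates more strongly."""
        t_exc = spike_latency_ms(a_excitatory) + delay_ms
        t_inh = spike_latency_ms(a_inhibitory) + delay_ms
        return t_exc < t_inh

    print(comparator_fires(0.9, 0.3))  # True: excitation wins the race
    print(comparator_fires(0.3, 0.9))  # False: inhibition arrives first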

These are just two examples of the mechanisms by which comparisons might be implemented. It will be interesting to see whether comparisons for Drosophila phonotaxis are implemented via the same mechanisms as those that have been studied in other species.

Implicit vs. explicit comparison

It is also possible that there are no neurons that explicitly compare the input from the two antennae. Instead, comparisons might be performed implicitly: vibration of each antenna might increase the stride length of the legs on the same side of the body as that antenna. Differences in the magnitude of antennal vibrations would then produce different stride lengths, and the fly would turn. For example, when a sound comes from the right, the left antenna vibrates most, the left legs make longer strides, and the fly turns right. In essence, the right-left comparison would be implemented by the body's mechanics rather than by the nervous system. However, given that comparator neurons have been found in all animals known to localize sounds, it seems more likely that comparator neurons exist.
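
A minimal sketch of this implicit scheme (Python; gains and units are arbitrary):

    def turn_from_strides(a_left, a_right, base_stride=2.0, gain=1.0):
        """Each antenna's vibration lengthens strides on its own side;
        a left-right stride asymmetry swings the body toward the side
        with the shorter strides. Returns a signed turn tendency
        (> 0 means a right turn)."""
        stride_left = base_stride + gain * a_left
        stride_right = base_stride + gain * a_right
        return stride_left - stride_right

    # Sound from the right: the left antenna vibrates most, the left
    # legs take longer strides, and the fly turns right.
    print(turn_from_strides(a_left=0.9, a_right=0.3))  # 0.6 -> right turn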

How does comparison of two or more values work over a wide range of values?

We have shown that flies can locate sounds with diverse spectro-temporal content. Different JONs are tuned to different frequencies within the range of frequencies that we tested (Clemens et al., 2018; Ishikawa et al., 2017; Kamikouchi et al., 2009; Matsuo et al., 2014; Patella and Wilson, 2018; Yorozu et al., 2009). This diversity in frequency tuning is preserved in downstream auditory areas: the AMMC and the WED. There are several ways that flies might make comparisons between the two antennae across a range of frequencies. One possibility is that there are neurons that perform comparisons for each frequency band. The WED is divided into orderly frequency-specific strips, with each strip receiving input from both antennae (Patella and Wilson, 2018), so inter-antennal comparison within frequency bands could occur there. Another possibility is that there is pooling across frequency bands before comparison.
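
The two schemes can be written side by side (a schematic sketch with hypothetical per-band amplitudes). Note that they coincide when both the per-band comparison and the pooling are linear, and diverge only when the per-band comparison is nonlinear:

    import numpy as np

    # Hypothetical vibration amplitudes in three frequency bands.
    left = np.array([0.8, 0.5, 0.2])
    right = np.array([0.4, 0.3, 0.1])

    # Scheme 1: band-wise comparison (e.g. within frequency-specific
    # WED strips), then combine the per-band differences.
    per_band = left - right
    print(per_band.sum())            # 0.7

    # Scheme 2: pool across bands on each side first, compare once.
    print(left.sum() - right.sum())  # 0.7, identical for linear pooling

    # With a nonlinear per-band comparison (e.g. sign-only voting),
    # the two schemes are no longer equivalent.
    print(np.sign(per_band).sum())   # 3.0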

The situation is more complicated for the broadband stimuli that are common in nature. The resonant frequency of the antenna changes with sound intensity (Göpfert and Robert, 2002). Therefore, if there is a broadband stimulus and the two antennae are vibrating at different amplitudes, we expect the frequency spectrum of the vibration of the two antennae to be different, and we expect different sub-populations of JONs to be active (Morley, 2011). In this case, it may be easier to pool across frequencies before comparison.

What is the role of each brain hemisphere in comparisons that involve both the left and right sides of the body?

One of the open questions raised in the introduction was: what is the contribution of each brain hemisphere to sound localization? Does each brain hemisphere locate sounds in different regions of space? Or are the same comparisons performed in both hemispheres, and if so, why? This question is difficult to tackle in vertebrates, and in invertebrates other than Drosophila, because we cannot easily label genetically identifiable neurons in those species. However, recent work in the Wilson lab has shown that in Drosophila it is possible to study the contribution of the same neuron in each hemisphere to behavior (S. Rayshubskiy & R. I. Wilson, unpublished observations). And so, in the near future, it may be possible to better understand the role of each hemisphere in sound localization.

Future behavioral experiments

Phonotaxis in closed loop

For all of the experiments described in this thesis, the flies were in open loop: a fly's behavior did not affect the stimulus. As a fly made a turn, the speaker remained in the same location, and so it presumably appears to the fly as if the speaker is moving. We could also perform similar experiments in closed loop. In this case, we would record the yaw velocity of the ball. Then, as the fly rotates the ball by an angle about the yaw axis, we would change the azimuthal angle of the speaker by the same angle, so that the sound source appears stationary in the fly's environment.
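
In pseudocode form (a sketch; the treadmill-readout and speaker-switching functions are hypothetical placeholders), the closed-loop update would look like this, with the sign convention chosen so that the virtual sound source stays fixed in the world as the fly turns:

    import time

    def closed_loop(read_ball_yaw_deg, set_speaker_azimuth_deg,
                    dt=0.01, initial_azimuth=45.0):
        """read_ball_yaw_deg() returns the fly's fictive heading change
        since the last call (degrees, rightward positive);
        set_speaker_azimuth_deg() points the stimulus, e.g. by switching
        among speakers. A rightward turn by the fly rotates the source
        leftward in the fly's frame."""
        azimuth = initial_azimuth
        while True:
            d_heading = read_ball_yaw_deg()
            azimuth = (azimuth - d_heading + 180.0) % 360.0 - 180.0
            set_speaker_azimuth_deg(azimuth)
            time.sleep(dt)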

From the experiments in this thesis, we might expect that flies will make turns to bring a sound source directly in front of or behind them, thereby fixating on the sound source. We can test this hypothesis by performing experiments in closed loop.

We can also perform closed-loop experiments by observing freely moving flies. We may be able to assess the importance of phonotaxis for different naturalistic behaviors by manipulating the antenna in freely moving flies. For example, we could glue the distal joint of one antenna of a female fly and see whether that disrupts normal courtship behavior.

Adaptation

Physiology studies have shown that there is adaptation in JONs (Clemens et al., 2018). We could not find evidence of adaptation in our dataset (data and analysis not shown); however, the inter-stimulus intervals for these experiments were generally long (10 to 20 seconds). In contrast, compound action potentials recorded from JONs recover from adaptation within about 30 ms (Clemens et al., 2018), although it is possible that central auditory neurons take longer to recover from adaptation. It would be interesting to test whether behavioral adaptation occurs for shorter inter-stimulus intervals. One way to test this would be to play a stimulus from a 45° angle and then, shortly after the stimulus offset, play the same stimulus from a 0° angle. We expect the first stimulus to evoke a turn to the right. If there is no adaptation to the first stimulus, we expect the second stimulus to evoke no turning, or a small turn in a fly-specific direction, as we saw in Figure 7. However, if there is neural adaptation, we expect the first stimulus to adapt neurons detecting left antennal movement more than neurons detecting right antennal movement. Therefore, we expect the neurons that detect right antennal movement to be more active when the second stimulus is played, which should evoke a turn to the left.
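
The predicted sign of the second turn follows from a small model (entirely hypothetical gains and adaptation factors): adaptation attenuates each side's signal in proportion to how strongly that side was just driven, and the steering rule is then applied to the adapted signals.

    def predicted_turn(a_left, a_right, adapt_left=1.0, adapt_right=1.0):
        """Steer away from the larger adaptation-scaled antennal signal.
        Returns 'right', 'left', or 'none'; factors are illustrative."""
        s_left, s_right = adapt_left * a_left, adapt_right * a_right
        if abs(s_left - s_right) < 1e-9:
            return "none"
        return "right" if s_left > s_right else "left"

    # First stimulus at +45 deg: left antenna driven most -> turn right.
    print(predicted_turn(a_left=1.0, a_right=0.2))                    # right
    # Second stimulus at 0 deg drives both antennae equally. Without
    # adaptation the signals stay balanced; with stronger adaptation of
    # the left channel, the right channel dominates -> turn left.
    print(predicted_turn(0.7, 0.7))                                   # none
    print(predicted_turn(0.7, 0.7, adapt_left=0.6, adapt_right=0.9))  # left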

As mentioned earlier, evidence of behavioral adaptation may make it possible to test whether behavior under-reports perceptual ability. We found no evidence that flies could distinguish stimuli at 45° and -135°. However, if a 45° stimulus can dishabituate the response to a -135° stimulus, this would show that flies can use phase information to distinguish these stimuli.


References

Chiang, A.-S., Lin, C.-Y., Chuang, C.-C., Chang, H.-M., Hsieh, C.-H., Yeh, C.-W., Shih, C.-T., Wu, J.-J., Wang, G.-T., Chen, Y.-C., et al. (2011). Three-dimensional reconstruction of brain-wide wiring networks in Drosophila at single-cell resolution. Curr. Biol. 21, 1-11.

Clemens, J., Ozeri-Engelhard, N. and Murthy, M. (2018). Fast intensity adaptation enhances the encoding of sound in Drosophila. Nat. Commun. 9, 134.

Göpfert, M. C. and Robert, D. (2002). The mechanical basis of Drosophila audition. J. Exp. Biol. 205, 1199-208.

Ishikawa, Y., Okamoto, N., Nakamura, M., Kim, H. and Kamikouchi, A. (2017). Anatomic and physiologic heterogeneity of subgroup-A auditory sensory neurons in fruit flies. Front. Neural Circuits 11, 46.

Kamikouchi, A., Shimada, T. and Ito, K. (2006). Comprehensive classification of the auditory sensory projections in the brain of the fruit fly Drosophila melanogaster. J. Comp. Neurol. 499, 317-356.

Kamikouchi, A., Inagaki, H. K., Effertz, T., Hendrich, O., Fiala, A., Göpfert, M. C. and Ito, K. (2009). The neural basis of Drosophila gravity-sensing and hearing. Nature 458, 165-171.

Matsuo, E., Yamada, D., Ishikawa, Y., Asai, T., Ishimoto, H. and Kamikouchi, A. (2014). Identification of novel vibration- and deflection-sensitive neuronal subgroups in Johnston's organ of the fruit fly. Front. Physiol. 5, 179.

Morley, E. L. (2011). Antennal mechanics and acoustic stimulation in Drosophila melanogaster audition. PhD thesis, University of Bristol, Bristol, United Kingdom.

Patella, P. and Wilson, R. I. (2018). Functional maps of mechanosensory features in the Drosophila brain. Curr. Biol. 28, 1189-1203.e5.

Yorozu, S., Wong, A., Fischer, B. J., Dankert, H., Kernan, M. J., Kamikouchi, A., Ito, K. and Anderson, D. J. (2009). Distinct sensory representations of wind and near-field sound in the Drosophila brain. Nature 458, 201-205.


APPENDIX A: SUPPLEMENTARY FIGURES FOR CHAPTER 2


Figure S1: Speaker calibrations. (A) Each point is the peak particle velocity, measured at the fly's position and averaged across 10 repetitions of the stimulus; these values are shown for all four speakers and each sound stimulus frequency used in this study. Note that, for all speakers, particle velocity was essentially constant across sound stimulus frequencies. We adjusted the amplitude of the sound stimulus command waveform to achieve this outcome.

(B) Black lines are particle velocity versus time (one on each panel for each of the four speakers). Red lines are the sine-wave voltage commands sent to the speaker amplifier, phase-shifted to match the phase of the recorded particle velocities. The similarity in shape of the black and red waveforms indicates that harmonic distortions are absent.
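
The calibration described in (A) amounts to an iterative gain adjustment per speaker and per stimulus frequency. A sketch (Python; measure_peak_velocity is a hypothetical wrapper around playing the tone and reading the particle velocity microphone at the fly's position):

    def calibrate_gain(measure_peak_velocity, target_velocity,
                       gain=1.0, n_iter=10):
        """Scale the command-waveform amplitude until the measured peak
        particle velocity matches the target; repeated proportional
        corrections absorb mild nonlinearity in the speaker."""
        for _ in range(n_iter):
            measured = measure_peak_velocity(gain)
            gain *= target_velocity / measured
        return gain

    # Run once per speaker and per stimulus frequency, so that particle
    # velocity is essentially constant across frequencies (panel A).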


Figure S1 (Continued)


Figure S2: Distribution of forward velocities. (A) Histogram showing the normalized distribution of forward velocities in a set of typical experiments. Forward velocities were measured over time windows 10 ms in duration. This histogram was generated using data from the five flies in Fig. 3A; histograms were first generated for each fly, and then were averaged together after normalizing the area under each histogram to one. This bimodal distribution is typical of flies that run well on a spherical treadmill (Gaudry et al., 2013).

(B) Same as (A) but excluding trials where the forward velocity was below threshold. Only trials where the average forward velocity during the pre-stimulus period was > 10 mm/s were included in data analysis.
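
A sketch of the corresponding analysis (Python/NumPy; variable names are hypothetical): build one area-normalized histogram per fly, average the histograms, and apply the pre-stimulus inclusion threshold.

    import numpy as np

    def averaged_histogram(per_fly_velocities, bin_edges):
        """per_fly_velocities: one 1-D array of forward-velocity samples
        (10 ms windows) per fly. Explicit shared bin edges keep the
        per-fly histograms aligned; density=True normalizes the area
        under each histogram to one before averaging (as in panel A)."""
        hists = [np.histogram(v, bins=bin_edges, density=True)[0]
                 for v in per_fly_velocities]
        return np.mean(hists, axis=0)

    def included_trials(prestim_mean_velocity, threshold=10.0):
        """Keep trials whose mean pre-stimulus forward velocity exceeds
        10 mm/s, the inclusion criterion used for data analysis."""
        v = np.asarray(prestim_mean_velocity)
        return np.nonzero(v > threshold)[0]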


Figure S3: Additional single-trial examples of phonotaxis and acoustic startle behavior. Additional examples of one typical fly's responses to sounds from the right (45°, green) or left (-45°, purple). Periodic fluctuations in lateral and forward velocity correspond to individual strides. Thick pastel lines are the trial-averaged data for this fly. These trials and those shown in Fig. 3B were randomly selected from the same fly. Trials are numbered here in the order shown, but were in fact presented in a pseudo-random order that interleaved trials from the two speakers. Note that the fly typically stops briefly just after sound onset, and then resumes walking, often turning toward the sound as walking resumes.


Figure S3 (Continued)


Figure S4: Trial-to-trial variations in forward and lateral velocity are not strongly correlated. These five columns show data for five typical flies (the five flies in Fig. 3A). In each plot, the fly's sound-evoked lateral velocity in a single trial is plotted versus the fly's sound-evoked decrease in forward velocity in the same trial. Note that these values are not strongly correlated on a single-trial basis, indicating that stopping and turning are fairly independent behaviors. The lateral velocity shown here is the lateral velocity measured at stimulus offset. The decrease in forward velocity shown here was computed as the forward velocity just before stimulus onset, minus the forward velocity 120 ms after stimulus onset. We measured the lateral velocity and decrease in forward velocity at the same time points for the stripcharts shown throughout the paper (e.g. Fig. 4C,D).
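
The two single-trial measures plotted here can be computed as follows (a sketch with hypothetical trace variables; times are in ms relative to stimulus onset, and the exact pre-onset sample point is an assumption):

    import numpy as np

    def s4_metrics(t_ms, forward, lateral, stim_offset_ms):
        """t_ms: sample times relative to stimulus onset (increasing);
        forward/lateral: velocity traces for one trial. Returns the
        lateral velocity at stimulus offset and the decrease in forward
        velocity from just before onset to 120 ms after onset."""
        lat_at_offset = np.interp(stim_offset_ms, t_ms, lateral)
        fwd_drop = (np.interp(-1.0, t_ms, forward)
                    - np.interp(120.0, t_ms, forward))
        return lat_at_offset, fwd_drop

    # Across trials, the (weak) correlation shown in this figure is then
    # np.corrcoef(lateral_values, drop_values)[0, 1].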


Figure S4 (Continued)


References

Gaudry, Q., Hong, E. J., Kain, J., de Bivort, B. L. and Wilson, R. I. (2013). Asymmetric neurotransmitter release enables rapid odour lateralization in Drosophila. Nature 493, 424-428.
