Bayesian Multisensory Integration and Cross-Modal Spatial Links
Sophie Deneve, Alexandre Pouget


Sophie Deneve a,*, Alexandre Pouget b

a Gatsby Computational Neuroscience Unit, Alexandra House, 17 Queen Square, London WC1N 3AR, UK
b Department of Brain and Cognitive Sciences, Meliora Hall, University of Rochester, Rochester, NY 14627, USA
* Corresponding author. E-mail addresses: [email protected] (S. Deneve), [email protected] (A. Pouget).

Journal of Physiology - Paris 98 (2004) 249–258. doi:10.1016/j.jphysparis.2004.03.011. Available online at www.elsevier.com/locate/jphysparis

Abstract

Our perception of the world is the result of combining information across several senses, such as vision, audition and proprioception. These sensory modalities use widely different frames of reference to represent the properties and locations of objects. Moreover, multisensory cues come with different degrees of reliability, and the reliability of a given cue can change in different contexts. The Bayesian framework, which we describe in this review, provides an optimal solution to the problem of combining cues that are not equally reliable. However, this approach does not address the issue of frames of reference. We show that this problem can be solved by creating cross-modal spatial links in basis function networks. Finally, we show how the basis function approach can be combined with the Bayesian framework to yield networks that can perform optimal multisensory combination. On the basis of this theory, we argue that multisensory integration is a dialogue between sensory modalities rather than the convergence of all sensory information onto a supra-modal area. © 2004 Published by Elsevier Ltd.

Keywords: Multisensory integration; Basis functions; Recurrent networks; Bayesian; Frames of reference; Gain modulation; Partially shifting receptive fields

1. Introduction

Multisensory integration refers to the capacity to combine information coming from different sensory modalities to obtain a more accurate representation of the environment and body. For example, vision and touch can be combined to estimate the shape of objects, and viewing somebody's lips moving can improve speech comprehension. This integration process is difficult for two main reasons. First, the reliability of sensory modalities varies widely according to the context. For example, in daylight, visual cues are more reliable than auditory cues for localizing objects, while the contrary is true at night. Thus, the brain should rely more on auditory cues at night and more on visual cues during the day to estimate object positions.

Another reason why multisensory integration is a complex issue is that each sensory modality uses a different format to encode the same properties of the environment or body. Thus multisensory integration cannot be a simple averaging between converging sensory inputs. More elaborate computations are required to interpret neural responses corresponding to the same object in different sensory areas. To use an analogy from the linguistic domain, each sensory modality uses its own language, and information cannot be shared between modalities without translation mechanisms for the different languages. For example, each sensory modality encodes the position of objects in a different frame of reference. Visual stimuli are represented by neurons with receptive fields on the retina, auditory stimuli by neurons with receptive fields around the head, and tactile stimuli by neurons with receptive fields anchored on the skin. Thus, a change in eye position or body posture will result in a change in the correspondence between the visual, auditory and tactile neural responses encoding the same object. To combine these different sensory responses, the brain must take into account the posture and the movements of the body in space.

We first review the Bayesian framework for multisensory integration, which provides a set of rules for optimally combining sensory inputs with varying reliabilities. We then describe several psychophysical studies supporting the notion that multisensory integration in the nervous system is indeed akin to a Bayesian inference process. We then review evidence from psychophysics and neuropsychology that sensory inputs from different modalities, but originating at the same location in space, can influence one another regardless of body posture, suggesting that there is a link, or translation mechanism, between the spatial representations of different sensory systems. Finally, we turn to neurophysiological and modeling data regarding the neural mechanisms of spatial transformations and Bayesian inference.
2. Bayesian framework for multisensory integration

The Bayesian framework allows the optimal combination of multiple sources of information about a quantity $x$ [22]. We consider a specific example in which $x$ refers to the position of an object which can be seen and heard at the same time. Given noisy neural responses in the visual cortex, $\mathbf{r}_{vis}$, the position of the object is most probably near the receptive fields of the most active cells, but this position cannot be determined with infinite precision due to the presence of neural noise (we use bold letters to refer to vectors; thus $\mathbf{r}_{vis}$ is meant to be a vector corresponding to the firing rates of a large population of visual neurons). Given the uncertainty associated with $x$, a good strategy is to compute the posterior probability that the object is at position $x$ given the visual neural responses, $P(x|\mathbf{r}_{vis})$.

Using Bayes' rule, $P(x|\mathbf{r}_{vis})$ can be obtained by combining the distribution of neural noise $P(\mathbf{r}_{vis}|x)$ with prior knowledge of the distribution of object positions $P(x)$ and the prior probability of neural responses $P(\mathbf{r}_{vis})$:

$$P(x|\mathbf{r}_{vis}) = \frac{P(\mathbf{r}_{vis}|x)\,P(x)}{P(\mathbf{r}_{vis})} \qquad (1)$$

This distribution is called a posterior probability because it refers to the probability of object position after taking into account the sensory input, $\mathbf{r}$ (as opposed to the prior $P(x)$, which is independent of $\mathbf{r}$). $P(\mathbf{r}_{vis}|x)$ is called the noise distribution because it corresponds to the variability in neural responses for a fixed stimulus, i.e., to variability which is not accounted for by the stimulus.

Note several important points about Eq. (1). First, we can ignore the denominator, $P(\mathbf{r}_{vis})$, because it is independent of $x$, and $x$ is the only variable we care about. Second, $P(\mathbf{r}_{vis}|x)$ can be measured experimentally by repeatedly presenting an object at the same position $x$ and measuring the variability in $\mathbf{r}_{vis}$ (which is why we call this term the "noise" distribution). Finally, if we happen to know that the object is more likely to appear in some visual locations than others, we can represent this knowledge in the prior distribution $P(x)$. In this section, we will assume that all positions are equally likely, that is, $P(x) = c$, where $c$ is a constant. This implies that $P(x)$ does not depend on $x$, in which case we can also ignore it. Therefore, Eq. (1) reduces to:

$$P(x|\mathbf{r}_{vis}) \propto P(\mathbf{r}_{vis}|x) \qquad (2)$$

Once the posterior distribution is computed, an estimate of the position of the object can be obtained by recovering the value of $x$ that maximizes that distribution:

$$\hat{x}_{vis} = \arg\max_x P(x|\mathbf{r}_{vis})$$

This is known as the maximum a posteriori estimate, or MAP estimate for short.

When we hear the object, a similar posterior distribution, $P(x|\mathbf{r}_{aud})$, and its corresponding estimate, $\hat{x}_{aud}$, can be computed based on the noisy responses of the auditory neurons, $\mathbf{r}_{aud}$.
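To make these steps concrete, here is a minimal numerical sketch of the unimodal computation. The encoding model (the number of neurons, the Gaussian tuning-curve shape, the gain and width values, and the independent Poisson spike-count noise) is purely illustrative and not taken from the paper: the sketch simulates one noisy response vector $\mathbf{r}_{vis}$ and recovers the posterior of Eq. (2) and its MAP estimate on a grid of candidate positions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical encoding model (illustrative, not from the paper):
# 40 visual neurons with Gaussian tuning curves tiling positions
# between -40 and +40 degrees.
pref = np.linspace(-40, 40, 40)    # preferred positions (deg)
width, gain = 10.0, 20.0           # tuning width (deg) and peak spike count

def tuning(x):
    """Mean response f_i(x) of each neuron to an object at position x."""
    return gain * np.exp(-(x - pref) ** 2 / (2 * width ** 2))

# One noisy population response r_vis to an object at x = 5 deg,
# with independent Poisson spike-count noise on each neuron.
x_true = 5.0
r_vis = rng.poisson(tuning(x_true))

# Flat prior, so P(x | r_vis) is proportional to P(r_vis | x) (Eq. (2)).
# For independent Poisson noise,
# log P(r_vis | x) = sum_i [ r_i * log f_i(x) - f_i(x) ],
# up to terms that do not depend on x.
grid = np.linspace(-40, 40, 801)
log_lik = np.array([np.sum(r_vis * np.log(tuning(x)) - tuning(x)) for x in grid])

# Normalize to get the posterior on the grid, then take the MAP estimate.
post_vis = np.exp(log_lik - log_lik.max())
post_vis /= post_vis.sum()
x_hat_vis = grid[np.argmax(post_vis)]
print(f"true position: {x_true:.1f} deg, MAP estimate: {x_hat_vis:.2f} deg")
```

Applying the same procedure to a noisier auditory population would yield $P(x|\mathbf{r}_{aud})$ and $\hat{x}_{aud}$.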
What should we do when the object is heard and seen at the same time? Using the same approach, we need to compute the estimate, $\hat{x}_{bim}$ ("bim" stands for bimodal), maximizing the posterior distribution, $P(x|\mathbf{r}_{vis}, \mathbf{r}_{aud})$:

$$\hat{x}_{bim} = \arg\max_x P(x|\mathbf{r}_{vis}, \mathbf{r}_{aud})$$

To compute the posterior distribution, we use Bayes' law, which under the assumption of a flat prior distribution reduces to:

$$P(x|\mathbf{r}_{vis}, \mathbf{r}_{aud}) \propto P(\mathbf{r}_{vis}, \mathbf{r}_{aud}|x) \qquad (3)$$

$$P(x|\mathbf{r}_{vis}, \mathbf{r}_{aud}) \propto P(\mathbf{r}_{vis}|x)\,P(\mathbf{r}_{aud}|x) \qquad (4)$$

$$P(x|\mathbf{r}_{vis}, \mathbf{r}_{aud}) \propto P(x|\mathbf{r}_{vis})\,P(x|\mathbf{r}_{aud}) \qquad (5)$$

To go from Eq. (3) to Eq. (4), we assumed that the noise corrupting the visual neurons is independent of the noise corrupting the auditory neurons (which seems reasonable given how far apart those neurons are in the cortex). The step from Eq. (4) to Eq. (5) is a consequence of Eq. (2). From Eq. (5), we see that the bimodal posterior distribution can be obtained by simply taking the product of the unimodal distributions. An example of this operation is illustrated in Fig. 1.

[Fig. 1. The two unimodal posteriors, $P(x|\mathbf{r}_{vis})$ and $P(x|\mathbf{r}_{aud})$, and their product, the bimodal posterior $P(x|\mathbf{r}_{vis}, \mathbf{r}_{aud})$, plotted as probability against the position of the object, $x$; the peaks of the three curves mark $\hat{x}_{vis}$, $\hat{x}_{bim}$ and $\hat{x}_{aud}$.]

When $P(x|\mathbf{r}_{vis})$ and $P(x|\mathbf{r}_{aud})$ are Gaussian probability distributions, as is the case in Fig. 1, the bimodal estimate $\hat{x}_{bim}$ can be obtained by taking a linear combination of the unimodal estimates, $\hat{x}_{vis}$ and $\hat{x}_{aud}$, weighted by their respective reliabilities [6,21,41]:

$$\hat{x}_{bim} = \frac{1/\sigma_{vis}^2}{1/\sigma_{vis}^2 + 1/\sigma_{aud}^2}\,\hat{x}_{vis} + \frac{1/\sigma_{aud}^2}{1/\sigma_{vis}^2 + 1/\sigma_{aud}^2}\,\hat{x}_{aud} \qquad (6)$$

where $1/\sigma_{vis}^2$ and $1/\sigma_{aud}^2$, the reliabilities of the visual and auditory estimates respectively, are the inverses of the variances of the visual and auditory posterior probabilities. In particular, if the visual input is more reliable than the auditory input ($\sigma_{vis}^2$ is smaller than $\sigma_{aud}^2$), then the bimodal estimate of position should be closer to the visual estimate, and vice versa if audition is more reliable than vision.

The Bayesian hypothesis predicts that the distribution of bimodal estimates should be approximately a product of the unimodal estimate distributions (Eq. (5)). This approach has been applied successfully to the estimated position of the hand from visual and proprioceptive inputs [33–35]. In these experiments, subjects were required to localize a proprioceptive target (the middle finger of their right hand, without visual feedback), a visual target, or a combined visual and proprioceptive target (the middle finger of their right hand, with visual feedback).
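The agreement between the product rule of Eq. (5) and the linear combination of Eq. (6) is easy to verify numerically. The sketch below uses illustrative numbers only (not data from these experiments): it represents the two unimodal posteriors directly as Gaussians on a grid of candidate positions, multiplies them as in Eq. (5), and compares the resulting MAP estimate with the reliability-weighted average of Eq. (6).

```python
import numpy as np

# Illustrative numbers only (not from the paper): Gaussian unimodal
# posteriors P(x|r_vis) and P(x|r_aud) on a grid of positions x (deg).
grid = np.linspace(-40, 40, 8001)

def gaussian(x, mu, sigma):
    """Unnormalized Gaussian bump centered on mu with std sigma."""
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

# Vision is sharper (more reliable) than audition in this example.
x_vis, sig_vis = 5.0, 2.0     # visual estimate and posterior std (deg)
x_aud, sig_aud = 12.0, 6.0    # auditory estimate and posterior std (deg)

post_vis = gaussian(grid, x_vis, sig_vis)
post_aud = gaussian(grid, x_aud, sig_aud)

# Eq. (5): the bimodal posterior is proportional to the product of the
# unimodal posteriors; normalize it on the grid and take its MAP.
post_bim = post_vis * post_aud
post_bim /= post_bim.sum()
x_bim_map = grid[np.argmax(post_bim)]

# Eq. (6): reliability-weighted average of the unimodal estimates,
# where reliability is the inverse variance.
w_vis = (1 / sig_vis**2) / (1 / sig_vis**2 + 1 / sig_aud**2)
w_aud = 1 - w_vis
x_bim_lin = w_vis * x_vis + w_aud * x_aud

print(f"MAP of product posterior: {x_bim_map:.2f} deg")  # -> 5.70
print(f"Eq. (6) weighted average: {x_bim_lin:.2f} deg")  # -> 5.70
```

The two answers agree because the product of two Gaussians is itself Gaussian, with mean given exactly by Eq. (6) and variance $1/(1/\sigma_{vis}^2 + 1/\sigma_{aud}^2)$, smaller than either unimodal variance; the bimodal estimate is therefore not only a compromise between the senses but also more precise than either alone.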