Exploring Crossmodal Perceptual Enhancement and Integration in a Sequence-Reproducing Task with Cognitive Priming
Feng Feng, Puhong Li, Tony Stockman
Recommended publications
- Vision Perceptually Restores Auditory Spectral Dynamics in Speech
John Plass, David Brang, Satoru Suzuki, Marcia Grabowecky (Department of Psychology, University of Michigan, Ann Arbor; Department of Psychology and Interdepartmental Neuroscience Program, Northwestern University, Evanston)
Abstract: Visual speech facilitates auditory speech perception, [1–3] but the visual cues responsible for these effects and the crossmodal information they provide remain unclear. Because visible articulators shape the spectral content of auditory speech, [4–6] we hypothesized that listeners may be able to extract spectrotemporal information from visual speech to facilitate auditory speech perception. To uncover statistical regularities that could subserve such facilitations, we compared the resonant frequency of the oral cavity to the shape of the oral aperture during speech. We found that the time-frequency dynamics of oral resonances could be recovered with unexpectedly high precision from the shape of the mouth during speech. Because both auditory frequency modulations [7–9] and visual shape properties [10] are neurally encoded as mid-level perceptual features, we hypothesized that this feature-level correspondence would allow for spectrotemporal information to be recovered from visual speech without reference to higher-order (e.g., phonemic) speech representations. Isolating these features from other speech cues, we found that speech-based shape deformations improved sensitivity for corresponding frequency modulations, suggesting that the perceptual system exploits crossmodal correlations in mid-level feature representations to enhance speech perception. To test whether this correspondence could be used to improve comprehension, we selectively degraded the spectral or temporal dimensions of auditory sentence spectrograms to assess how well visual speech facilitated comprehension under each degradation condition.
- Early, Low-Level Auditory-Somatosensory Multisensory Interactions Impact Reaction Time Speed
Holger F. Sperdin, Céline Cappe, John J. Foxe and Micah M. Murray (Centre Hospitalier Universitaire Vaudois and University of Lausanne; Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY; City College of the City University of New York; Centre for Biomedical Imaging, Lausanne and Geneva). Original research article, Frontiers in Integrative Neuroscience, published 11 March 2009, doi: 10.3389/neuro.07.002.2009.
Abstract: Several lines of research have documented early-latency non-linear response interactions between audition and touch in humans and non-human primates. That these effects have been obtained under anesthesia, passive stimulation, as well as speeded reaction time tasks would suggest that some multisensory effects are not directly influencing behavioral outcome. We investigated whether the initial non-linear neural response interactions have a direct bearing on the speed of reaction times. Electrical neuroimaging analyses were applied to event-related potentials in response to auditory, somatosensory, or simultaneous auditory-somatosensory multisensory stimulation that were in turn averaged according to trials leading to fast and slow reaction times (using a median split of individual subject data for each experimental
- The Effects of an Orton-Gillingham-based Reading Intervention on Students with Emotional/Behavior Disorders
James Breckinridge Davis, M.Ed. A dissertation submitted to the Graduate Faculty in partial fulfillment of the requirements for the Doctor of Philosophy Degree in Special Education, The University of Toledo, December 2011. Committee Chair: Richard Welsch, Ph.D.; Committee Members: Laurie Dinnebeil, Ph.D., Edward Cancio, Ph.D., and Lynne Hamer, Ph.D.
Abstract: This study was performed with 4 male students enrolled in a specialized public school for students with emotional/behavior disorders (E/BD). All of the students participated in a 16-week, one-to-one, multisensory reading intervention. The study was a single-subject, multiple-baseline design. The independent variable was an Orton-Gillingham-based reading intervention delivered in 45-minute sessions. The dependent variable was the students' performance on daily probes of words read correctly and on pre- and post-test measures from the Dynamic Indicators of Basic Early Literacy Skills (DIBELS). The intervention consisted of 6 different parts: (a) visual, (b) auditory, (c) blending, (d) introduction of a new skill, (e) oral reading, and (f) a 10-point probe.
- Using a Multi-Sensory Teaching Approach to Impact Learning and Community in a Second Grade Classroom
Melissa A. Stoffers. A thesis submitted to the Department of Teacher Education, College of Education, in partial fulfillment of the requirements for the degree of Master of Science in Teaching at Rowan University, June 16, 2011. Thesis Chair: Susan Browne, Ed.D.
Recommended citation: Stoffers, Melissa, "Using a multi-sensory teaching approach to impact learning and community in a second grade classroom" (2011). Theses and Dissertations. 110. https://rdw.rowan.edu/etd/110
Acknowledgements: I would like to thank my family for their endless support and encouragement, particularly my husband, who listened to and read every word of this thesis several times over without complaint. Without them, I do not know where I would be. I would also like to thank Dr. Susan Browne for her guidance and instruction in assisting me through this writing process.
- Multisensory / Multimodal Learning
Multisensory learning uses several of the five senses at once during teaching to help learners:
• Enrich their understanding of topics,
• Retain and recall information more easily, and
• Maintain attention and concentration in the classroom.
It is different from learning styles, which assume learners can be classified into auditory, visual, or kinaesthetic learners (as in the VARK methodology). Instead, it is learning which uses any combination of the senses of sight (visual), hearing (auditory), smell, touch (tactile), movement (kinaesthetic), and taste.
Enriching understanding: Imagine trying to explain the beach to someone who has never seen it before, just using words. Would they be able to fully understand what the beach is like? If you showed them a picture, then they would understand a little more. If you played an audio track of the sound of the waves, that would increase their understanding further. Finally, if you brought some sand and some salt water for them to look at, touch, taste, and smell, the picture would start to be more complete. This is multisensory learning: engaging multiple senses to help gain fuller understanding of a concept or topic.
Retaining and recalling information: Research on the brain and learning tells us that information received by the brain from each of these senses is stored differently, so multisensory learning means more parts of the brain are involved in storing the information. This helps the brain to access the information more easily when trying to remember it.
Maintaining attention: If a learner is participating in a listening activity in a lesson, their other senses are not turned off.
- Crossmodal Integration and McGurk-Effect in Synthetic Audiovisual Speech
Katja Grauwinkel (Department of Computer Sciences and Media, TFH Berlin University of Applied Sciences, Germany) and Sascha Fagel (Institute for Speech and Communication, Berlin University of Technology, Germany). SPECOM'2006, St. Petersburg, 25-29 June 2006.
Abstract: This paper presents the results of a study investigating crossmodal processing of audiovisually synthesised speech stimuli. The perception of facial gestures has a great influence on the interpretation of a speech signal. Not only paralinguistic information of the speaker's emotional state or motivation can be obtained. Especially if the acoustic signal is unclear, e.g. because of background noise or reverberation, watching the facial gestures can enhance speech intelligibility. On the other hand, visual information that is incongruent to auditory information can also reduce the efficiency of acoustic speech features, even if the acoustic signal is of good quality. The way how two modalities interact with each other
… synthesiser with a 3-dimensional animated head. The embedded modules are a phonetic articulation module, an audio synthesis module, a visual articulation module, and a face module. The phonetic articulation module creates the phonetic information, which consists of an appropriate phone chain on the one hand and, as prosodic information, phone and pause durations and a fundamental frequency course on the other hand. From this data, the audio synthesis module generates the audio signal and the visual articulation module generates motion information. The audio signal and the motion information are merged by the face module to create the complete animation. The MBROLA speech synthesiser [8] is embedded in the audio synthesis module to generate audible speech.
- The Future of Learning Multisensory Experiences: Visual, Audio, Smell and Taste Senses
Aleksandra Klašnja-Milićević, Zoran Marošan, Mirjana Ivanović, Ninoslava Savić, and Boban Vesin (University of Novi Sad, Faculty of Sciences, Department of Mathematics and Informatics; Novi Sad School of Business, Department of Informatics; Department of Computer and Information Science, Trondheim, Norway)
Abstract: This paper describes an investigative study of the senses of smell, taste, hearing and vision in assisting the process of learning. It will serve as the starting point for the design and construction of a multisensory human-computer interface for different educational applications. The most important part is understanding the ways in which learners' senses process learning and memorize information, and how these activities are related. Though sensory systems and interfaces have developed significantly over the last few decades, there are still unfulfilled challenges in understanding multisensory experiences in Human-Computer Interaction. Researchers generally rely on vision and listening, and some focus on touch, but the senses of taste and smell remain uninvestigated. Understanding the ways in which human senses influence the process and effects of learning and the memorizing of information may be important to e-learning systems and their higher functionalities. In order to analyze these questions, we carried out several usability studies with students to see if visual or audio experiences, or different smells or tastes, can assist and support memorizing information. The obtained results have shown improvement of learning abilities when using different smells, tastes and virtual reality facilities.
- Improving Working Memory in Science Learning through Effective Multisensory Integration Approach
S. Prasannakumar (NCERT, Meghalaya, India). International Journal of Mind, Brain & Cognition, Vol. 9, No. 1-2, Jan-Dec 2018.
Abstract: Sensory integration takes place in the central nervous system, where complex interactions such as co-ordination, attention, arousal levels, autonomic functioning, emotions, memory and higher-level cognitive functions are carried out. Sensory integration takes in information through the senses and puts it together with prior knowledge, information and memories already stored in the brain to make a meaningful response. Multi-sensory learning, as the name implies, is the process of learning a new subject matter through the use of two or more senses. This may include combining visual, auditory, tactile or kinaesthetic, olfactory and gustatory sensation. By activating brain regions associated with touch, flavour, audition and vision, they indicate a direct relationship between perceptual knowledge and sensory brain mechanisms. The investigator developed a content-based working memory test and a pre-assessment of the working memory of the students. The researcher then developed a multisensory integration model, taught the science subject through this model, and carried out a post-assessment of working memory in science. Finally, the investigator evaluated the multisensory integration model's effect on enhancing working memory in science among the students.
Keywords: improving, working memory, learning, science, multisensory integration approach.
Introduction: "Memory is the process of maintaining information over time." "Memory is the means by which we draw on our past experiences in order to use this information in the present" (Sternberg, 1999). Memory is our ability to encode, store, retain and subsequently recall information and past experiences in the human brain.
- The Kiki-Bouba Paradigm: Where Senses Meet and Greet
Invited review article. Aditya Shukla, Cognitive Psychologist, The OWL, Pune. E-mail: [email protected]
Abstract: Humans have the ability to think in abstract ways. Experiments over the last 90 years have shown that stimuli from the external world can be evaluated on an abstract spectrum with 'Kiki' on one end and 'Bouba' on the other. People are in concordance with each other with respect to calling a jagged, edgy, sharp-bordered two-dimensional shape 'Kiki' and a curvy, smooth, round two-dimensional shape 'Bouba'. The proclivity of this correspondence is ubiquitous. Moreover, the Kiki-Bouba phenomenon seems to represent a non-arbitrary abstract connection between 2 or more stimuli. Studies have shown that crossmodal associations between and within audioception, ophthalmoception, tactioception and gustatoception can be demonstrated by using Kiki-Bouba as a cognitive 'convergence point'. This review includes a critical assessment of the methods, findings, limitations, plausible explanations and future directions involving the Kiki-Bouba effect. Applications include creating treatments and training methods to compensate for poor abstract thinking abilities caused by disorders like schizophrenia and autism, for example. Continuing research in this area would help build a universal model of consciousness that fully incorporates cross-sensory perception at the biological and cognitive levels.
Keywords: Kiki-Bouba, crossmodal correspondence, multisensory perception, abstraction, ideasthesia, symbolism, schizophrenia, autism.
(Paper received 1st September 2016, review completed 10th September 2016, accepted 12th September 2016)
Introduction: We often describe objects in the environment in complex ways. These descriptions contain analogies, metaphors, emotional effect and structural and functional details about the objects.
- Determining the Effectiveness of a Multisensory Approach to Teach the Alphabet and Phonemic Awareness Mastery in Kindergarten Children
Charlene Alexandra Wrighton. A dissertation submitted to the Faculty of Argosy University, College of Education, in partial fulfillment of the requirements for the degree of Doctor of Education, October 2010. Copyright ©2010 Charlene Alexandra Wrighton, all rights reserved. Dissertation Chair: Dr. Scott Griffith; Committee Members: Dr. Aniello Malvetti and Dr. Barbara Cole; Program Chair: Ardella Dailey.
- PERSPECTIVES on Language and Literacy
Fall Edition 2013, Volume 39, No. 4. A Quarterly Publication of The International Dyslexia Association. Theme: Technology and Dyslexia, Part 1.
Contents:
7 Theme Editors' Introduction (David H.
11 Reading and Assistive Technology: Why the Reader's Profile Matters (Karen Erickson)
15 Executive Function Skills and Assistive Technology (Cheryl Temple)
19 The Changing Roles of Assistive Technology Teams in Public School Settings (Denise C. DeCoste)
29 Disciplinary Literacy and Technology for Students with Learning Disabilities (Cynthia M. Okolo and Rachel A. Kopke)
36 Assistive Technology and Writing (Dave L. Edyburn)
42 Mobile Instructional and Assistive Technology for Literacy (David C. Winters and Elaine A. Cheesman)
IDA Purpose Statement: The purpose of IDA is to pursue and provide the most comprehensive range of information and services
- Multimodal Sensorimotor Integration of Visual and Kinaesthetic Afferents Modulates Motor Circuits in Humans
Volker R. Zschorlich (Department of Movement Science, University of Rostock, Germany), Frank Behrendt (Reha Rheinfelden, Research Department, Rheinfelden, Switzerland) and Marc H. E. de Lussanet (Department of Movement Science and OCC Center for Cognitive and Behavioral Neuroscience, University of Münster, Germany). Brain Sciences, article.
Abstract: Optimal motor control requires the effective integration of multimodal information. Visual information about movement performed by others even enhances potentials in the upper motor neurons through the mirror-neuron system. On the other hand, it is known that motor control is intimately associated with afferent proprioceptive information. Kinaesthetic information is also generated by passive, externally driven movements. In the context of sensory integration, it is an important question how such passive kinaesthetic information and visually perceived movements are integrated. We studied the effects of visual and kinaesthetic information in combination, as well as in isolation, on sensorimotor integration, compared to a control condition. For this, we measured the change in the excitability of the motor cortex (M1) using low-intensity transcranial magnetic stimulation (TMS). We hypothesised that both visual motoneurons and kinaesthetic motoneurons enhance the excitability of motor responses. We found that passive wrist movements increase motor excitability, suggesting that kinaesthetic motoneurons do exist. The kinaesthetic influence on the motor threshold was even stronger than the visual information.