Top-Down Predictions of Familiarity and Congruency in Audio-Visual Speech Perception at Neural Level
This is a self-archived version of an original article. This version may differ from the original in pagination and typographic details.

Author(s): Kolozsvári, Orsolya B.; Xu, Weiyong; Leppänen, Paavo H. T.; Hämäläinen, Jarmo A.
Title: Top-Down Predictions of Familiarity and Congruency in Audio-Visual Speech Perception at Neural Level
Year: 2019
Version: Published version
Copyright: © The Authors, 2019.
Rights: CC BY 4.0
Rights url: https://creativecommons.org/licenses/by/4.0/

Please cite the original version: Kolozsvári, O. B., Xu, W., Leppänen, P. H. T., & Hämäläinen, J. A. (2019). Top-Down Predictions of Familiarity and Congruency in Audio-Visual Speech Perception at Neural Level. Frontiers in Human Neuroscience, 13, 243. https://doi.org/10.3389/fnhum.2019.00243

ORIGINAL RESEARCH
published: 12 July 2019
doi: 10.3389/fnhum.2019.00243

Top-Down Predictions of Familiarity and Congruency in Audio-Visual Speech Perception at Neural Level

Orsolya B. Kolozsvári1,2*, Weiyong Xu1,2, Paavo H. T. Leppänen1,2 and Jarmo A. Hämäläinen1,2

1 Department of Psychology, University of Jyväskylä, Jyväskylä, Finland, 2 Jyväskylä Centre for Interdisciplinary Brain Research (CIBR), University of Jyväskylä, Jyväskylä, Finland

During speech perception, listeners rely on multimodal input and make use of both auditory and visual information. When presented with speech, for example syllables, the differences in brain responses to distinct stimuli are not, however, caused merely by the acoustic or visual features of the stimuli. The congruency of the auditory and visual information and the familiarity of a syllable, that is, whether it appears in the listener's native language or not, also modulate brain responses.