What can computational models learn from human selective attention? A review from an audiovisual crossmodal perspective

Di Fu 1,2,3, Cornelius Weber 3, Guochun Yang 1,2, Matthias Kerzel 3, Weizhi Nan 4, Pablo Barros 3, Haiyan Wu 1,2, Xun Liu 1,2,∗, and Stefan Wermter 3

1 CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China
2 Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
3 Department of Informatics, University of Hamburg, Hamburg, Germany
4 Department of Psychology and Center for Brain and Cognitive Sciences, School of Education, Guangzhou University, Guangzhou, China

Correspondence*: Xun Liu
[email protected]

ABSTRACT

Selective attention plays an essential role in acquiring and using information from the environment. Over the past 50 years, research on selective attention has been a central topic in cognitive science. Compared with unimodal studies, crossmodal studies are more complex, but they are necessary for solving real-world challenges in both human experiments and computational modeling. Although a growing number of findings on crossmodal selective attention have shed light on humans’ behavioral patterns and neural underpinnings, a much deeper understanding is still needed before these findings can yield comparable benefits for computational intelligent agents. This article reviews studies of selective attention in unimodal visual and auditory setups and in crossmodal audiovisual setups from the multidisciplinary perspectives of psychology and cognitive neuroscience, and evaluates different ways of simulating analogous mechanisms in computational models and robotics. In this interdisciplinary review, we discuss the gaps between these fields and provide insights into how psychological findings and theories can be applied to artificial intelligence from different perspectives.