Bayesian fusion and multimodal DCM for EEG and fMRI

Huilin Wei1,2, Amirhossein Jafarian2, Peter Zeidman2, Vladimir Litvak2, Adeel Razi2,3,4,5, Dewen Hu1*, Karl J. Friston2*

1 College of Intelligence Science and Technology, National University of Defense Technology, Changsha, Hunan, People's Republic of China
2 Wellcome Centre for Human Neuroimaging, UCL Institute of Neurology, University College London, London, United Kingdom
3 Turner Institute for Brain and Mental Health, School of Psychological Sciences, Monash University, Clayton, Australia
4 Monash Biomedical Imaging, Monash University, Clayton, Australia
5 Department of Electronic Engineering, NED University of Engineering and Technology, Karachi, Pakistan

* Correspondence to: Prof. Karl J. Friston, Wellcome Centre for Human Neuroimaging, UCL Institute of Neurology, University College London, 12 Queen Square, London, WC1N 3BG, United Kingdom. E-mail address: [email protected] (K.J. Friston). Prof. Dewen Hu, College of Intelligence Science and Technology, National University of Defense Technology, 109 Deya Road, Changsha, Hunan, 410073, People's Republic of China. E-mail address: [email protected] (D.W. Hu).

Abstract

This paper asks whether integrating multimodal EEG and fMRI data offers a better characterisation of functional brain architectures than either modality alone. This evaluation rests upon a dynamic causal model that generates both EEG and fMRI data from the same neuronal dynamics. We introduce the use of Bayesian fusion to provide informative (empirical) neuronal priors – derived from dynamic causal modelling (DCM) of EEG data – for subsequent DCM of fMRI data. To illustrate this procedure, we generated synthetic EEG and fMRI timeseries for a mismatch negativity (or auditory oddball) paradigm, using biologically plausible model parameters (i.e., posterior expectations from a DCM of empirical, open access, EEG data).
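To make the fusion step concrete, the sketch below illustrates the underlying idea: under the Gaussian (Laplace) assumptions used in DCM, the posterior over the neuronal parameters estimated from EEG can be carried over as an empirical prior for the subsequent fMRI DCM, and combining Gaussian beliefs then reduces to precision weighting. This is a minimal illustration only; the function names, toy parameter values, and the Gaussian summary of the fMRI evidence are assumptions introduced here, not the authors' implementation (which rests on variational Laplace inversion in SPM).

```python
import numpy as np

def eeg_posterior_to_fmri_prior(mu_eeg, Sigma_eeg):
    """Bayesian fusion step (schematic): the EEG DCM posterior over the
    shared neuronal parameters becomes the empirical prior for the fMRI DCM."""
    return mu_eeg.copy(), Sigma_eeg.copy()

def gaussian_update(mu_prior, Sigma_prior, mu_like, Sigma_like):
    """Conjugate Gaussian update: combine a Gaussian prior with a Gaussian
    summary of the new evidence by precision-weighted averaging."""
    P_prior = np.linalg.inv(Sigma_prior)   # prior precision
    P_like  = np.linalg.inv(Sigma_like)    # evidence (likelihood) precision
    Sigma_post = np.linalg.inv(P_prior + P_like)
    mu_post = Sigma_post @ (P_prior @ mu_prior + P_like @ mu_like)
    return mu_post, Sigma_post

# Toy example: two shared neuronal (connectivity) parameters.
mu_eeg    = np.array([0.6, -0.3])     # hypothetical EEG posterior expectation
Sigma_eeg = np.diag([0.05, 0.08])     # hypothetical EEG posterior covariance

# Empirical priors for the fMRI DCM, furnished by the EEG DCM.
mu_prior, Sigma_prior = eeg_posterior_to_fmri_prior(mu_eeg, Sigma_eeg)

# Hypothetical Gaussian summary of what the fMRI data say about the same
# parameters (in practice this arises from inverting the fMRI DCM).
mu_fmri, Sigma_fmri = np.array([0.5, -0.1]), np.diag([0.2, 0.2])

mu_fused, Sigma_fused = gaussian_update(mu_prior, Sigma_prior, mu_fmri, Sigma_fmri)
print("fused expectation:", mu_fused)
print("fused variances:  ", np.diag(Sigma_fused))
```

Because the EEG posterior is more precise than the (toy) fMRI evidence here, the fused estimate stays close to the EEG-derived prior while its variance shrinks, which is the qualitative behaviour one would expect from informative empirical priors in the fusion scheme described above.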