PLOS Computational Biology 2018
Recommended publications
-
Bayesian Model Reduction
Bayesian model reduction
Karl Friston, Thomas Parr and Peter Zeidman
The Wellcome Centre for Human Neuroimaging, Institute of Neurology, University College London, UK.
Correspondence: Peter Zeidman, The Wellcome Centre for Human Neuroimaging, Institute of Neurology, 12 Queen Square, London, UK WC1N 3AR. [email protected]

Summary
This paper reviews recent developments in statistical structure learning; namely, Bayesian model reduction. Bayesian model reduction is a method for rapidly computing the evidence and parameters of probabilistic models that differ only in their priors. In the setting of variational Bayes, this has an analytical solution, which finesses the problem of scoring large model spaces in model comparison or structure learning. In this technical note, we review Bayesian model reduction and provide the relevant equations for several discrete and continuous probability distributions. We provide worked examples in the context of multivariate linear regression, Gaussian mixture models and dynamical systems (dynamic causal modelling). These examples are accompanied by the Matlab scripts necessary to reproduce the results. Finally, we briefly review recent applications in the fields of neuroimaging and neuroscience. Specifically, we consider structure learning and hierarchical or empirical Bayes, which can be regarded as metaphors for neurobiological processes like abductive reasoning.

Keywords: Bayesian methods, empirical Bayes, model comparison, neurobiology, structure learning

1. Introduction
Over the past years, Bayesian model comparison and structure learning have become key issues in the neurosciences (Collins and Frank, 2013; Friston et al., 2017a; Salakhutdinov et al., 2013; Tenenbaum et al., 2011; Tervo et al., 2016; Zorzi et al., 2013); particularly in optimizing models of neuroimaging time series (Friston et al., 2015; Woolrich et al., 2009) and as a fundamental problem that our brains have to solve (Friston et al., 2017a; Schmidhuber, 1991; Schmidhuber, 2010).
-
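Under the Gaussian (Laplace) assumption used in variational Bayes, the model reduction described in this abstract has a closed form: given the full model's prior and posterior, the posterior and log evidence of any model that differs only in its prior follow analytically, with no refitting. A minimal NumPy sketch of those Gaussian identities (function names and the toy regression are illustrative; this is not SPM's implementation):

```python
import numpy as np

def bmr_gaussian(mu0, S0, mu, S, mu0_r, S0_r):
    """Bayesian model reduction for Gaussian densities.

    Inputs are the full model's prior N(mu0, S0) and posterior N(mu, S),
    plus a reduced prior N(mu0_r, S0_r). Returns the reduced posterior
    and the change in log evidence, without refitting the model.
    """
    def logdet(A):
        return np.linalg.slogdet(A)[1]

    P0, P, P0_r = (np.linalg.inv(A) for A in (S0, S, S0_r))
    P_r = P + P0_r - P0                              # reduced posterior precision
    S_r = np.linalg.inv(P_r)
    mu_r = S_r @ (P @ mu + P0_r @ mu0_r - P0 @ mu0)  # reduced posterior mean
    dF = 0.5 * (logdet(P) + logdet(P0_r) - logdet(P0) - logdet(P_r)) \
       + 0.5 * (mu_r @ P_r @ mu_r + mu0 @ P0 @ mu0
                - mu @ P @ mu - mu0_r @ P0_r @ mu0_r)
    return mu_r, S_r, dF

# Toy Bayesian linear regression, where the log evidence is also available
# in closed form, so the BMR shortcut can be checked against a direct refit.
rng = np.random.default_rng(1)
n, s2 = 20, 0.5
X = rng.standard_normal((n, 2))
y = X @ np.array([1.0, 0.0]) + np.sqrt(s2) * rng.standard_normal(n)

def fit(mu0, S0):
    P0 = np.linalg.inv(S0)
    S = np.linalg.inv(P0 + X.T @ X / s2)             # posterior covariance
    mu = S @ (P0 @ mu0 + X.T @ y / s2)               # posterior mean
    C = X @ S0 @ X.T + s2 * np.eye(n)                # marginal covariance of y
    r = y - X @ mu0
    lnZ = -0.5 * (n * np.log(2 * np.pi) + np.linalg.slogdet(C)[1]
                  + r @ np.linalg.solve(C, r))
    return mu, S, lnZ

mu0, S0 = np.zeros(2), np.eye(2)
mu, S, lnZ = fit(mu0, S0)
S0_r = np.diag([1.0, 1e-8])     # reduced model: second parameter switched off
mu_r, S_r, dF = bmr_gaussian(mu0, S0, mu, S, mu0, S0_r)
_, _, lnZ_r = fit(mu0, S0_r)    # direct refit, for comparison only
```

Here `dF` agrees with `lnZ_r - lnZ` to numerical precision, but only the full model was ever fitted; that is what makes scoring large model spaces tractable.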
Optimizing Data for Modeling Neuronal Responses
TECHNOLOGY REPORT published: 10 January 2019. doi: 10.3389/fnins.2018.00986

Optimizing Data for Modeling Neuronal Responses
Peter Zeidman 1*†, Samira M. Kazan 1†, Nick Todd 2, Nikolaus Weiskopf 1,3, Karl J. Friston 1 and Martina F. Callaghan 1
1 Wellcome Centre for Human Neuroimaging, UCL Institute of Neurology, University College London, London, United Kingdom; 2 Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, United States; 3 Department of Neurophysics, Max Planck Institute for Human Cognition and Brain Sciences, Leipzig, Germany
Edited by: Daniel Baumgarten, UMIT – Private Universität für Gesundheitswissenschaften, Medizinische Informatik und Technik, Austria. Reviewed by: Alberto Sorrentino, Università di Genova, Italy.

In this technical note, we address an unresolved challenge in neuroimaging statistics: how to determine which of several datasets is the best for inferring neuronal responses. Comparisons of this kind are important for experimenters when choosing an imaging protocol, and for developers of new acquisition methods. However, the hypothesis that one dataset is better than another cannot be tested using conventional statistics (based on likelihood ratios), as these require the data to be the same under each hypothesis. Here we present Bayesian data comparison (BDC), a principled framework for evaluating the quality of functional imaging data, in terms of the precision with which neuronal connectivity parameters can be estimated and competing models can be disambiguated. For each of several candidate datasets, neuronal responses are modeled using Bayesian (probabilistic) forward models, such as General Linear Models (GLMs) or Dynamic Causal Models (DCMs). Next, the parameters from subject-specific models are summarized at the group level using a Bayesian GLM.
-
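One natural way to score "the precision with which connectivity parameters can be estimated" is the information gain from prior to posterior, i.e. the KL divergence of the posterior from the prior: a dataset that yields a tighter posterior around the same estimates is more informative. A hedged Gaussian sketch (the function name and example numbers are illustrative, not the paper's implementation):

```python
import numpy as np

def information_gain(mu_post, S_post, mu_prior, S_prior):
    """KL divergence KL(N(mu_post, S_post) || N(mu_prior, S_prior)) in nats.

    Larger values mean the data were more informative about the parameters,
    which gives one criterion for ranking candidate datasets.
    """
    d = len(mu_prior)
    Pp = np.linalg.inv(S_prior)
    dm = mu_prior - mu_post
    ld_prior = np.linalg.slogdet(S_prior)[1]
    ld_post = np.linalg.slogdet(S_post)[1]
    return 0.5 * (np.trace(Pp @ S_post) + dm @ Pp @ dm - d
                  + ld_prior - ld_post)

# Same prior and posterior means; dataset A yields a tighter posterior than B.
prior_m, prior_S = np.zeros(2), np.eye(2)
post_m = np.array([0.4, -0.2])
gain_a = information_gain(post_m, 0.1 * np.eye(2), prior_m, prior_S)
gain_b = information_gain(post_m, 0.5 * np.eye(2), prior_m, prior_S)
# gain_a > gain_b: the more precise dataset carries more information
```

A posterior identical to the prior scores zero, i.e. the data taught us nothing about the parameters.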
A Tutorial on Group Effective Connectivity Analysis, Part 2: Second Level Analysis with PEB
A tutorial on group effective connectivity analysis, part 2: second level analysis with PEB
Peter Zeidman*1, Amirhossein Jafarian1, Mohamed L. Seghier2, Vladimir Litvak1, Hayriye Cagnan3, Cathy J. Price1, Karl J. Friston1
1 Wellcome Centre for Human Neuroimaging, 12 Queen Square, London WC1N 3AR; 2 Cognitive Neuroimaging Unit, ECAE, Abu Dhabi, UAE; 3 Nuffield Department of Clinical Neurosciences, Level 6 West Wing, John Radcliffe Hospital, Oxford OX3 9DU
* Corresponding author. Conflicts of interest: None.

Abstract
This tutorial provides a worked example of using Dynamic Causal Modelling (DCM) and Parametric Empirical Bayes (PEB) to characterise inter-subject variability in neural circuitry (effective connectivity). This involves specifying a hierarchical model with two or more levels. At the first level, state space models (DCMs) are used to infer the effective connectivity that best explains a subject’s neuroimaging timeseries (e.g. fMRI, MEG, EEG). Subject-specific connectivity parameters are then taken to the group level, where they are modelled using a General Linear Model (GLM) that partitions between-subject variability into designed effects and additive random effects. The ensuing (Bayesian) hierarchical model conveys both the estimated connection strengths and their uncertainty (i.e., posterior covariance) from the subject to the group level; enabling hypotheses to be tested about the commonalities and differences across subjects. This approach can also finesse parameter estimation at the subject level, by using the group-level parameters as empirical priors. We walk through this approach in detail, using data from a published fMRI experiment that characterised individual differences in hemispheric lateralization in a semantic processing task. The preliminary subject specific DCM analysis is covered in detail in a companion paper.
-
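The core of the second level is a GLM over subject-specific connectivity estimates, weighted by the certainty each subject's posterior carries. A much-simplified sketch for a single connectivity parameter (this is precision-weighted least squares only; it omits PEB's random-effects variance estimation and the empirical-prior update back to the first level, and all names are illustrative):

```python
import numpy as np

def group_glm(theta, prec, X):
    """Precision-weighted second-level GLM for one connectivity parameter.

    theta : (n,) posterior means of the parameter, one per subject
    prec  : (n,) posterior precisions (1 / posterior variance) per subject
    X     : (n, k) design matrix, e.g. a column of ones for the group mean
            plus mean-centred covariates for between-subject effects
    Returns the estimated group effects and their covariance.
    """
    W = np.diag(prec)                    # weight precise subjects more heavily
    cov = np.linalg.inv(X.T @ W @ X)     # covariance of the group effects
    beta = cov @ X.T @ W @ theta         # precision-weighted estimates
    return beta, cov

# Four subjects, equal precisions, design = group mean only.
theta = np.array([0.2, 0.4, 0.3, 0.5])
X = np.ones((4, 1))
beta, cov = group_glm(theta, np.full(4, 8.0), X)
```

With equal precisions and a constant design this reduces to the ordinary mean (here 0.35); unequal precisions pull the group estimate toward the subjects whose first-level posteriors are most certain, which is the intuition behind taking posterior covariances, not just point estimates, to the group level.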
Second Level Analysis with PEB
NeuroImage 200 (2019) 12–25

A guide to group effective connectivity analysis, part 2: Second level analysis with PEB
Peter Zeidman a,*, Amirhossein Jafarian a, Mohamed L. Seghier b, Vladimir Litvak a, Hayriye Cagnan c, Cathy J. Price a, Karl J. Friston a
a Wellcome Centre for Human Neuroimaging, 12 Queen Square, London, WC1N 3AR, UK; b Cognitive Neuroimaging Unit, ECAE, Abu Dhabi, United Arab Emirates; c Nuffield Department of Clinical Neurosciences, Level 6, West Wing, John Radcliffe Hospital, Oxford, OX3 9DU, UK

ABSTRACT
This paper provides a worked example of using Dynamic Causal Modelling (DCM) and Parametric Empirical Bayes (PEB) to characterise inter-subject variability in neural circuitry (effective connectivity). It steps through an analysis in detail and provides a tutorial-style explanation of the underlying theory and assumptions (i.e., priors). The analysis procedure involves specifying a hierarchical model with two or more levels. At the first level, state space models (DCMs) are used to infer the effective connectivity that best explains a subject's neuroimaging timeseries (e.g. fMRI, MEG, EEG). Subject-specific connectivity parameters are then taken to the group level, where they are modelled using a General Linear Model (GLM) that partitions between-subject variability into designed effects and additive random effects. The ensuing (Bayesian) hierarchical model conveys both the estimated connection strengths and their uncertainty (i.e., posterior covariance) from the subject to the group level; enabling hypotheses to be tested about the commonalities and differences across subjects. This approach can also finesse parameter estimation at the subject level, by using the group-level parameters as empirical priors.