Episodic Memory and the Hippocampus


Episodic Memory and the Hippocampus
The Neural Processes Underpinning Episodic Memory

Demis Hassabis
Submitted for PhD in Cognitive Neuroscience
February 2009
University College London
Supervisor: Eleanor A. Maguire

Declaration

I, Demis Hassabis, confirm that the work presented in this thesis is my own. Where information has been derived from other sources, I confirm that this has been indicated in the thesis.

Signed: Date:

Abstract

Episodic memory is the memory for our personal past experiences. Although numerous functional magnetic resonance imaging (fMRI) studies investigating its neural basis have revealed a consistent and distributed network of associated brain regions, surprisingly little is known about the contributions individual brain areas make to the recollective experience. In this thesis I address this fundamental issue by employing a range of different experimental techniques, including neuropsychological testing, virtual reality environments, whole brain and high spatial resolution fMRI, and multivariate pattern analysis.

Episodic memory recall is widely agreed to be a reconstructive process, one that is known to be critically reliant on the hippocampus. I therefore hypothesised that the same neural machinery responsible for reconstruction might also support 'constructive' cognitive functions such as imagination. To test this proposal, patients with focal bilateral hippocampal damage were asked to imagine new experiences and were found to be impaired relative to matched control participants. Moreover, driving this deficit was a lack of spatial coherence in their imagined experiences, pointing to a role for the hippocampus in binding together the disparate elements of a scene. A subsequent fMRI study involving healthy participants compared the recall of real memories with the construction of imaginary memories. This revealed a fronto-temporo-parietal network common to both tasks that included the hippocampus, ventromedial prefrontal, retrosplenial and parietal cortices. Based on these results I advanced the notion that this network might support the process of 'scene construction', defined as the generation and maintenance of a complex and coherent spatial context. Furthermore, I argued that this scene construction network might underpin other important cognitive functions besides episodic memory and imagination, such as navigation and thinking about the future.

It has been proposed that spatial context may act as the scaffold around which episodic memories are built. Given that the hippocampus appears to play a critical role in imagination by supporting the creation of a rich, coherent spatial scene, I sought to explore the nature of this hippocampal spatial code in a novel way. By combining high spatial resolution fMRI with multivariate pattern analysis techniques it proved possible to accurately determine where a subject was located in a virtual reality environment based solely on the pattern of activity across hippocampal voxels. For this to have been possible, the hippocampal population code must be large and non-uniform. I then extended these techniques to the domain of episodic memory by showing that individual memories could be accurately decoded from the pattern of activity across hippocampal voxels, thus identifying individual memory traces. I consider these findings together with other recent advances in the episodic memory field, and present a new perspective on the role of the hippocampus in episodic recollection. I discuss how this new (and preliminary) framework compares with current prevailing theories of hippocampal function, and suggest how it might account for some previously contradictory data.
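The decoding logic behind these multivariate pattern analyses can be illustrated with a small, self-contained sketch. This uses simulated data and a simple leave-one-out nearest-centroid classifier purely for illustration: the voxel count, number of locations, noise levels, and choice of classifier are all assumptions for the example, not the actual analysis pipeline used in the thesis.

```python
# Minimal sketch of multivariate pattern analysis (MVPA) decoding on
# SIMULATED data: each "location" in a virtual environment is assumed to
# evoke a weak but consistent activity pattern across hippocampal voxels,
# and we test whether location can be decoded from single-trial patterns.
import numpy as np

rng = np.random.default_rng(0)

n_voxels = 100      # activity pattern across (hypothetical) hippocampal voxels
n_locations = 4     # e.g. four target positions in the virtual environment
n_trials = 20       # trials per location

# Each location gets a fixed voxel-wise "signature"; each trial is that
# signature plus independent noise (noise dominating signal, as in fMRI).
signatures = rng.normal(0.0, 1.0, (n_locations, n_voxels))
X, y = [], []
for loc in range(n_locations):
    for _ in range(n_trials):
        X.append(signatures[loc] + rng.normal(0.0, 2.0, n_voxels))
        y.append(loc)
X, y = np.array(X), np.array(y)

# Leave-one-out cross-validation: classify each held-out trial by the
# nearest class centroid computed from all remaining trials.
correct = 0
for i in range(len(y)):
    mask = np.arange(len(y)) != i
    centroids = np.array([X[mask][y[mask] == c].mean(axis=0)
                          for c in range(n_locations)])
    pred = np.argmin(np.linalg.norm(centroids - X[i], axis=1))
    correct += int(pred == y[i])

accuracy = correct / len(y)
print(f"decoding accuracy: {accuracy:.2f} (chance = {1 / n_locations:.2f})")
```

Above-chance accuracy here depends entirely on the per-location signatures being distinct and stable across trials; the same logic underlies the thesis's inference that the hippocampal population code must be structured and non-uniform for location and memory decoding to succeed.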
To Alexander and Arthur
The best in me

'It's a poor sort of memory that only works backwards'
(Lewis Carroll, the White Queen to Alice in "Through the Looking-Glass")

Acknowledgements

The work in this thesis could not have been accomplished without the generous help of a number of people. I wish to thank Eleanor Maguire, my principal supervisor, for her invaluable support and advice throughout my PhD, and for being so encouraging of new ideas. I also want to thank Geraint Rees, my second supervisor, for his guidance and infectious enthusiasm. Then there are my friends and colleagues to thank from UCL: my partner in crime Dharshan Kumaran; Dean Mobbs for opening the door to exciting collaborations; Hugo Spiers and Benedetto de Martino for interesting science discussions; Carlton Chu and Martin Chadwick for work on multivariate methods; Jennifer Summerfield and Katherine Woollett for help on scene construction; Neil Burgess and Caswell Barry for useful theoretical discussions; Nikolaus Weiskopf and John Ashburner for their technical advice; and the support staff at the FIL, particularly Peter Aston, David Bradbury, Janice Glensman and Ric Davis, for help with scanning and computers. Thanks also to the Brain Research Trust and the Wellcome Trust for funding the work in this thesis. Finally, I would like to thank my mother and father for their spiritual and intellectual encouragement during my formative years, without which I would not be here today doing this PhD. And of course, most importantly, thanks to Teresa Niccoli, who somehow puts up with my craziness and without whose love and support none of this could have happened.

Contents

1. Episodic Memory and the Hippocampus
   1.1 Introduction
   1.2 Episodic memory
   1.3 Reconstructive theories of episodic memory recall
   1.4 The brain network underpinning episodic memory
       1.4.1 The hippocampus
       1.4.2 The episodic memory network
   1.5 Hippocampal population codes
       1.5.1 Information in the brain
       1.5.2 The cell assembly hypothesis
       1.5.3 Invariant representations in the hippocampus
       1.5.4 Properties of the hippocampal population code
           1.5.4.1 Sparse codes
           1.5.4.2 Topographical organisation
   1.6 Thesis overview

2. Theories of Hippocampal Function
   2.1 Introduction
   2.2 The Marr model
   2.3 Declarative theory
   2.4 Multiple trace theory
   2.5 Cognitive map theory
   2.6 Relational theory
   2.7 Summary

3. Methods
   3.1 Introduction
   3.2 The biophysics of magnetic resonance imaging (MRI)
       3.2.1 Magnetic fields
       3.2.2 Generating an MR signal
       3.2.3 Tomographic image formation
       3.2.4 Scanning parameters
   3.3 The BOLD signal
       3.3.1 The BOLD effect
       3.3.2 The neurophysiology of the BOLD signal
   3.4 Resolution of fMRI
   3.5 Analysis of fMRI data