Interview with a Neuroscientist – Dr. Blake Richards
Recommended publications
-
Memory–Prediction Framework for Pattern Recognition
Memory–Prediction Framework for Pattern Recognition: Performance and Suitability of the Bayesian Model of Visual Cortex. Saulius J. Garalevicius, Department of Computer and Information Sciences, Temple University, Room 303, Wachman Hall, 1805 N. Broad St., Philadelphia, PA, 19122, USA. [email protected]

Abstract: This paper explores an inferential system for recognizing visual patterns. The system is inspired by a recent memory-prediction theory and models the high-level architecture of the human neocortex. The paper describes the hierarchical architecture and recognition performance of this Bayesian model. A number of possibilities are analyzed for bringing the model closer to the theory, making it uniform, scalable, less biased and able to learn a larger variety of images and their transformations. The effect of these modifications on recognition accuracy is explored. We identify and discuss a number of both conceptual and practical challenges to the Bayesian approach as well as missing details in the theory that are needed to design a scalable and universal model.

The neocortex learns sequences of patterns by storing them in an invariant form in a hierarchical neural network. It recalls the patterns auto-associatively when given only partial or distorted inputs. The structure of stored invariant representations captures the important relationships in the world, independent of the details. The primary function of the neocortex is to make predictions by comparing the knowledge of the invariant structure with the most recent observed details. The regions in the hierarchy are connected by multiple feedforward and feedback connections. Prediction requires a comparison between what is happening (feedforward) and what you expect to happen (feedback).
-
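The prediction-by-comparison idea in the closing sentences can be sketched as a toy region update (illustrative only; the function, threshold, and patterns below are invented, not the paper's model):

```python
import numpy as np

# Toy sketch of one region in a memory-prediction hierarchy: compare the
# feedforward input against the feedback prediction from the level above,
# and report whether the mismatch is large enough to count as a surprise.

def region_step(feedforward, feedback_prediction, threshold=0.2):
    """Return (is_surprising, error) for one region in the hierarchy."""
    error = np.mean(np.abs(feedforward - feedback_prediction))
    return error > threshold, error

observed = np.array([1.0, 0.0, 1.0, 1.0])    # what is happening (feedforward)
predicted = np.array([1.0, 0.0, 1.0, 0.0])   # what you expect (feedback)
surprising, err = region_step(observed, predicted)
print(surprising, round(float(err), 2))  # -> True 0.25
```

In the full theory, a surprise would propagate up the hierarchy until some level can account for it; this sketch only shows the comparison at a single level.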
Unsupervised Anomaly Detection in Time Series with Recurrent Neural Networks
DEGREE PROJECT IN TECHNOLOGY, FIRST CYCLE, 15 CREDITS, STOCKHOLM, SWEDEN 2019
Unsupervised anomaly detection in time series with recurrent neural networks
Josef Haddad, Carl Piehl. Bachelor in Computer Science. Date: June 7, 2019. Supervisor: Pawel Herman. Examiner: Örjan Ekeberg. KTH Royal Institute of Technology, School of Electrical Engineering and Computer Science. Swedish title: Oövervakad avvikelsedetektion i tidsserier med neurala nätverk.

Abstract: Artificial neural networks (ANN) have been successfully applied to a wide range of problems. However, most ANN-based models do not attempt to model the brain in detail, although some do. An example of a biologically constrained ANN is Hierarchical Temporal Memory (HTM). This study applies HTM and Long Short-Term Memory (LSTM) to anomaly detection problems in time series in order to compare their performance on this task. The anomalies are restricted to point anomalies and the time series are univariate. Pre-existing implementations that utilise these networks for unsupervised anomaly detection in time series are used in this study. We primarily use our own synthetic data sets in order to discover the networks' robustness to noise and how they compare to each other with respect to different characteristics of the time series. Our results show that both networks can handle noisy time series, and the difference in performance regarding noise robustness is not significant for the time series used in the study. LSTM outperforms HTM in detecting point anomalies on our synthetic time series with a sine-curve trend, but which of the two networks performs best overall remains inconclusive.
-
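The detection task itself, flagging point anomalies in a univariate series by prediction error, can be illustrated with a deliberately simple predictor standing in for the LSTM/HTM models (a sliding-window mean; all parameters below are invented, not the thesis's setup):

```python
import numpy as np

# Minimal stand-in for the thesis's task: predict each point from the mean
# of a sliding window and flag unusually large prediction errors as point
# anomalies in a univariate time series.

def detect_point_anomalies(series, window=5, k=3.0):
    errors = []
    for t in range(window, len(series)):
        pred = np.mean(series[t - window:t])     # naive one-step prediction
        errors.append(abs(series[t] - pred))
    errors = np.array(errors)
    thresh = errors.mean() + k * errors.std()    # error far above typical
    return [t + window for t, e in enumerate(errors) if e > thresh]

t = np.linspace(0, 4 * np.pi, 200)
series = np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=200)
series[120] += 3.0                      # inject a point anomaly
print(detect_point_anomalies(series))   # the spike at index 120 is flagged
```

A recurrent network replaces the sliding-window mean with a learned predictor, but the surrounding logic, predict, measure error, threshold, is the same.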
The Optimality of Decision Making During Motor Learning
THE OPTIMALITY OF DECISION MAKING DURING MOTOR LEARNING
by Joshua Brent Moskowitz. A thesis submitted to the Department of Psychology in conformity with the requirements for the degree of Master of Science. Queen's University, Kingston, Ontario, Canada (June, 2016). Copyright © Joshua B. Moskowitz, 2016.

Abstract: In our daily lives, we often must predict how well we will perform in the future based on an evaluation of our current performance and an assessment of how much we will improve with practice. Such predictions can be used to decide whether to invest our time and energy in learning and, if we opt to invest, what rewards we may gain. This thesis investigated whether people are capable of tracking their own learning (i.e. current and future motor ability) and exploiting that information to make decisions related to task reward. In experiment one, participants performed a target aiming task under a visuomotor rotation, such that they initially missed the target but gradually improved. After briefly practicing the task, they were asked to select rewards for hits and misses to be applied to subsequent performance in the task, where selecting a higher reward for hits came at the cost of receiving a lower reward for misses. We found that participants made decisions that were in the direction of optimal, and therefore demonstrated knowledge of their future task performance. In experiment two, participants learned a novel target aiming task in which they were rewarded for target hits. Every five trials, they could choose a target size, which varied inversely with reward value. Although participants' decisions deviated from optimal, a model suggested that they took into account both past performance and predicted future performance when making their decisions.
-
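The reward-selection decision in experiment one is, at bottom, an expected-value comparison. A minimal sketch (the probabilities and reward pairs below are invented for illustration, not taken from the thesis):

```python
# Illustrative sketch of the experiment-one decision: given a predicted
# probability of hitting the target, pick the (hit, miss) reward pair
# with the highest expected value.

def best_reward_pair(p_hit, pairs):
    """pairs: list of (reward_if_hit, reward_if_miss) options."""
    return max(pairs, key=lambda pr: p_hit * pr[0] + (1 - p_hit) * pr[1])

# A higher hit reward trades off against a lower miss reward.
options = [(100, 0), (75, 25), (50, 50)]
print(best_reward_pair(0.8, options))   # -> (100, 0)
print(best_reward_pair(0.3, options))   # -> (50, 50)
```

The interesting empirical question in the thesis is whether people's choices track this optimum when p_hit is their own (improving) hit rate, which they must estimate from recent practice.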
Neuromorphic Architecture for the Hierarchical Temporal Memory
IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE
Neuromorphic Architecture for the Hierarchical Temporal Memory
Abdullah M. Zyarah, Student Member, IEEE, Dhireesha Kudithipudi, Senior Member, IEEE, Neuromorphic AI Laboratory, Rochester Institute of Technology

Abstract—A biomimetic machine intelligence algorithm that holds promise in creating invariant representations of spatiotemporal input streams is the hierarchical temporal memory (HTM). This unsupervised online algorithm has been demonstrated on several machine-learning tasks, including anomaly detection. Significant effort has been made in formalizing and applying the HTM algorithm to different classes of problems. There are few early explorations of the HTM hardware architecture, especially for the earlier version of the spatial pooler of the HTM algorithm. In this article, we present a full-scale HTM architecture for both the spatial pooler and temporal memory. A synthetic synapse design is proposed to address the potential and dynamic interconnections occurring during learning. The architecture is interweaved with parallel cells and columns that enable high processing speed for the HTM.

… recognition and classification [3]–[5], prediction [6], natural language processing, and anomaly detection [7], [8]. At a higher abstraction, HTM is basically a memory-based system that can be trained on a sequence of events that vary over time. In the algorithmic model, this is achieved using two core units, the spatial pooler (SP) and the temporal memory (TM), together called the cortical learning algorithm (CLA). The SP is responsible for transforming the input data into a sparse distributed representation (SDR) with fixed sparsity, whereas the TM learns sequences and makes predictions [9]. A few research groups have implemented the first-generation Bayesian HTM. Kenneth et al., in 2007, implemented the …
-
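The SP's role described above, mapping input bits to a fixed-sparsity SDR, can be sketched in a few lines (a simplified software toy with invented sizes, not the article's hardware architecture):

```python
import numpy as np

# Minimal sketch of an HTM-style spatial pooler: each column scores its
# overlap with the input through a random binary connection matrix, and
# only the top-k columns become active, yielding an SDR whose sparsity is
# fixed regardless of the input.

rng = np.random.default_rng(0)
n_inputs, n_columns, k = 64, 128, 5   # k active columns -> ~4% sparsity

# Potential synapses: each column connects to ~30% of the input bits.
connections = (rng.random((n_columns, n_inputs)) < 0.3).astype(int)

def spatial_pooler(input_bits):
    overlaps = connections @ input_bits          # overlap score per column
    active = np.argsort(overlaps)[-k:]           # top-k columns win
    sdr = np.zeros(n_columns, dtype=int)
    sdr[active] = 1
    return sdr

x = (rng.random(n_inputs) < 0.2).astype(int)
sdr = spatial_pooler(x)
print(sdr.sum())   # -> 5  (fixed sparsity regardless of input)
```

The real SP also learns (adjusting synapse permanences) and uses boosting and local inhibition; this sketch shows only the fixed-sparsity winner-take-all step that the TM consumes.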
Flexible Corticospinal Control of Muscles
Flexible Corticospinal Control of Muscles
Najja J. Marshall. Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy under the Executive Committee of the Graduate School of Arts and Sciences, Columbia University, 2021. © 2021 Najja J. Marshall. All Rights Reserved.

Abstract: The exceptional abilities of top-tier athletes – from Simone Biles' dizzying gymnastics to LeBron James' gravity-defying bounds – can easily lead one to forget to marvel at the exceptional breadth of everyday movements. Whether holding a cup of coffee, reaching out to grab a falling object, or cycling at a quick clip, every motor action requires activating multiple muscles with the appropriate intensity and timing to move each limb or counteract the weight of an object. These actions are planned and executed by the motor cortex, which transmits its intentions to motoneurons in the spinal cord, which ultimately drive muscle contractions. A central problem in neuroscience is precisely how neural activity in cortex and the spinal cord gives rise to this diverse range of behaviors. At the level of the spinal cord, this problem is considered well understood. A foundational tenet in motor control asserts that motoneurons are controlled by a single input, to which they respond in a reliable and predictable manner to drive muscle activity, akin to the way that depressing a gas pedal by the same degree accelerates a car to a predictable speed. Theories of how motor cortex flexibly generates different behaviors are less firmly developed, but the available evidence indicates that cortical neurons are coordinated in a similarly simplistic, well-preserved manner.
-
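The "gas pedal" view of motoneurons described in the abstract is commonly formalized as recruitment by a single common drive with fixed thresholds (the size principle). A toy illustration, with invented thresholds and gains rather than anything from the dissertation:

```python
# Toy version of the textbook view the abstract describes: motor units share
# one common drive and are recruited in a fixed threshold order, so total
# muscle output is a reliable, monotonic function of that single input.

THRESHOLDS = [0.1, 0.3, 0.6]     # recruitment threshold per motor unit
GAINS = [1.0, 2.0, 4.0]          # later-recruited (larger) units add more force

def muscle_force(drive):
    """Sum the output of every motor unit recruited at this drive level."""
    return sum(g * (drive - th)
               for th, g in zip(THRESHOLDS, GAINS) if drive > th)

for d in (0.2, 0.5, 0.9):
    print(round(muscle_force(d), 2))   # force rises monotonically with drive
```

The dissertation's point is that this single-input picture may be too simple; the sketch only makes explicit the model being questioned.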
The Neuroscience of Human Intelligence Differences
Edinburgh Research Explorer
The neuroscience of human intelligence differences
Citation: Deary, I. J., Penke, L. & Johnson, W. (2010), 'The neuroscience of human intelligence differences', Nature Reviews Neuroscience, vol. 11, pp. 201–211. DOI: 10.1038/nrn2793. Document version: peer-reviewed accepted manuscript.

Ian J. Deary*, Lars Penke* and Wendy Johnson*
*Centre for Cognitive Ageing and Cognitive Epidemiology, Department of Psychology, University of Edinburgh, Edinburgh EH4 2EE, Scotland, UK. All authors contributed equally to the work.
-
Cold Spring Harbor Symposia on Quantitative Biology, Volume LXXIX: Cognition
Cold Spring Harbor Symposia on Quantitative Biology, Volume LXXIX: Cognition
symposium.cshlp.org
Symposium organizers and Proceedings editors: Cori Bargmann (The Rockefeller University), Daphne Bavelier (University of Geneva, Switzerland, and University of Rochester), Terrence Sejnowski (The Salk Institute for Biological Studies), and David Stewart and Bruce Stillman (Cold Spring Harbor Laboratory). Cold Spring Harbor Laboratory Press, 2014. © 2014 by Cold Spring Harbor Laboratory Press. All rights reserved. ISBN 978-1-621821-26-7 (cloth), 978-1-621821-27-4 (paper); ISSN 0091-7451; Library of Congress Catalog Card Number 34-8174. The Symposia were founded in 1933 by Reginald G. Harris, Director of the Biological Laboratory 1924 to 1936. Previous Symposia volumes include I (1933) Surface Phenomena, II (1934) Aspects of Growth, III (1935) Photochemical Reactions, XXXIX (1974) Tumor Viruses, XL (1975) The Synapse, and XLI (1976) Origins …
-
Toolmaking and the Origin of Normative Cognition
Preprint, 14 April 2020
Toolmaking and the Origin of Normative Cognition
Jonathan Birch, Department of Philosophy, Logic and Scientific Method, London School of Economics and Political Science, Houghton Street, London, WC2A 2AE, UK. [email protected] | http://personal.lse.ac.uk/birchj1 | 11,947 words

Abstract: We are all guided by thousands of norms, but how did our capacity for normative cognition evolve? I propose there is a deep but neglected link between normative cognition and practical skill. In modern humans, complex motor skills and craft skills, such as skills related to toolmaking and tool use, are guided by internally represented norms of correct performance. Moreover, it is plausible that core components of human normative cognition evolved in response to the distinctive demands of transmitting complex motor skills and craft skills, especially skills related to toolmaking and tool use, through social learning. If this is correct, the expansion of the normative domain beyond technique to encompass more abstract norms of reciprocity, ritual, kinship and fairness involved the elaboration of a basic platform for the guidance of skilled action by technical norms. This article motivates and defends this "skill hypothesis" for the origin of normative cognition and sets out various ways in which it could be empirically tested.

Key words: normative cognition, skill, cognitive control, norms, evolution

1. The Skill Hypothesis
We are all guided by thousands of norms, often without being able to articulate the norms in question. A "norm", as I will use the term here, is just any standard of correct or appropriate behaviour.
-
Wellcome Investigators March 2011
Wellcome Trust Investigator Awards: feedback from Expert Review Group members, 28 March 2011. Roughly 7 months elapse between application and final outcome.

The Expert Review Groups:
1. Cell and Developmental Biology
2. Cellular and Molecular Neuroscience (Zaf Bashir)
3. Cognitive Neuroscience and Mental Health
4. Genetics, Genomics and Population Research (George Davey Smith)
5. Immune Systems in Health and Disease (David Wraith)
6. Molecular Basis of Cell Function
7. Pathogen Biology and Disease Transmission
8. Physiology in Health and Disease (Paul Martin)
9. Population and Public Health (Rona Campbell)

Summary feedback from ERG panels:
• The bar is very high across all nine panels.
• Track-record led: your CV must demonstrate a substantial impact of your research (e.g. high-impact journals, a record of ground-breaking research, a clear upward trajectory). To paraphrase Walport: "to support scientists with the best track records, obviously appropriate to the stage in their career".
• Notable esteem factors count (but note "several FRSs were not shortlisted").
• The novelty of your research vision is CRUCIAL. Don't just carry on doing more of the same.
• The Trust is not averse to risk (but what about ERG panel members?).
• The success rate for short-listing for interview is ~15–25% at Senior Investigator level (3–5 proposals shortlisted from each ERG).
• There are fewer New Investigator than Senior Investigator applications – an opportunity?
• There are fewer applications overall for the second round, but "the bar will not be lowered".

The Challenge: UoB has roughly 45 existing …
-
Why Dream? a Conjecture on Dreaming
Why dream? A Conjecture on Dreaming
Marshall Stoneham, London Centre for Nanotechnology, University College London, Gower Street, London WC1E 6BT, UK
PACS classifications: 87.19.-j Properties of higher organisms; 87.19.L- Neuroscience; 87.18.-h Biological complexity

Abstract: I propose that the need for sleep and the occurrence of dreams are intimately linked to the physical processes underlying the continuing replacement of cells and renewal of biomolecules during the lives of higher organisms. Since one major function of our brains is to enable us to react to attack, there must be truly compelling reasons for us to take them off-line for extended periods, as during sleep. I suggest that most replacements occur during sleep. Dreams are suggested as part of the means by which the new neural circuitry (I use the word in an informal sense) is checked.

1. Introduction
Sleep is a remarkably universal phenomenon among higher organisms. Indeed, many of them would die without sleep. Whereas hibernation, a distinct phenomenon, is readily understood in terms of energy balance, the reasons for sleep and for associated dreaming are by no means fully established. Indeed, some of the standard ideas are arguably implausible [1]. This present note describes a conjecture as to the role of dreaming. The ideas, in turn, point to a further conjecture about the need for sleep. I have tried to avoid any controversial assumptions. All I assume about the brain is that it comprises a complex and (at least partly) integrated network of components, and has several functions, including the obvious ones of control of motion, analysis of signals from sensors, computing (in the broadest sense), memory, and possibly other functions.
-
On Intelligence As Memory
Artificial Intelligence 169 (2005) 181–183
www.elsevier.com/locate/artint
Book review: Jeff Hawkins and Sandra Blakeslee, On Intelligence, Times Books, 2004.
On intelligence as memory
Jerome A. Feldman, International Computer Science Institute, Berkeley, CA 94704-1198, USA. Available online 3 November 2005.

On Intelligence by Jeff Hawkins with Sandra Blakeslee has been inspirational for non-scientists as well as some of our most distinguished biologists, as can be seen from the web site (http://www.onintelligence.org). The book is engagingly written as a first-person memoir of one computer engineer's search for enlightenment on how human intelligence is computed by our brains. The central insight is important: much of our intelligence comes from the ability to recognize complex situations and to predict their possible outcomes. There is something fundamental about the brain and neural computation that makes us intelligent, and AI should be studying it.

Hawkins actually understates the power of human associative memory. Because of the massive parallelism and connectivity, the brain essentially reconfigures itself to be constantly sensitive to the current context and goals [1]. For example, when you are planning to buy some kind of car, you start noticing them. The book is surely right that better AI systems would follow if we could develop programs that were more like human memory. For whatever reason, memory as such is no longer studied much in AI; the Russell and Norvig [3] text has one index item for memory, and that refers to semantics. From a scientific AI/Cognitive Science perspective, the book fails to tackle most of the questions of interest.
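The auto-associative memory the review praises is easy to make concrete with a classic Hopfield-style network, a standard textbook construction used here purely for illustration (the patterns and sizes are invented):

```python
import numpy as np

# Classic Hopfield-style auto-associative memory: store patterns in a
# symmetric weight matrix, then recall a stored pattern from a corrupted
# cue -- recognizing a complex situation from partial input.

def train(patterns):
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)               # no self-connections
    return W

def recall(W, cue, steps=5):
    s = cue.astype(float).copy()
    for _ in range(steps):
        s = np.sign(W @ s)               # synchronous update
        s[s == 0] = 1.0
    return s

p0 = np.array([1] * 16 + [-1] * 16)      # two orthogonal +/-1 patterns
p1 = np.tile([1, -1], 16)
W = train([p0, p1])

noisy = p0.copy()
noisy[:6] *= -1                          # corrupt 6 of 32 bits
print(np.array_equal(recall(W, noisy), p0))   # -> True
```

This is content-addressable recall of exactly the kind the review says AI neglects: the memory is retrieved from a degraded fragment of itself rather than from an index.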