A Neural Network Trained for Prediction Mimics Diverse Features of Biological Neurons and Perception
Recommended publications
- Memory–Prediction Framework for Pattern Recognition: Performance and Suitability of the Bayesian Model of Visual Cortex
Saulius J. Garalevicius, Department of Computer and Information Sciences, Temple University, Room 303, Wachman Hall, 1805 N. Broad St., Philadelphia, PA 19122, USA. [email protected]
Abstract: This paper explores an inferential system for recognizing visual patterns. The system is inspired by a recent memory-prediction theory and models the high-level architecture of the human neocortex. The paper describes the hierarchical architecture and recognition performance of this Bayesian model. A number of possibilities are analyzed for bringing the model closer to the theory, making it uniform, scalable, less biased and able to learn a larger variety of images and their transformations. The effect of these modifications on recognition accuracy is explored. We identify and discuss a number of both conceptual and practical challenges to the Bayesian approach, as well as missing details in the theory that are needed to design a scalable and universal model.
The neocortex learns sequences of patterns by storing them in an invariant form in a hierarchical neural network. It recalls the patterns auto-associatively when given only partial or distorted inputs. The structure of stored invariant representations captures the important relationships in the world, independent of the details. The primary function of the neocortex is to make predictions by comparing the knowledge of the invariant structure with the most recent observed details. The regions in the hierarchy are connected by multiple feedforward and feedback connections. Prediction requires a comparison between what is happening (feedforward) and what you expect to happen (feedback).
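The auto-associative recall the abstract attributes to the neocortex is easy to make concrete. Below is a minimal sketch, not the paper's Bayesian model: binary patterns stored in a matrix are recalled from a distorted cue by nearest-match lookup, and all sizes and names are assumptions for illustration.

```python
import numpy as np

# Toy auto-associative memory (an illustration, not the paper's Bayesian
# model): stored binary patterns are recalled from partial/distorted cues
# by returning the stored pattern with the highest bitwise agreement.
rng = np.random.default_rng(0)
memory = rng.integers(0, 2, size=(10, 64))     # ten stored 64-bit patterns

def recall(cue):
    """Return the stored pattern that best matches a (noisy) cue."""
    agreement = (memory == cue).mean(axis=1)   # fraction of matching bits
    return memory[np.argmax(agreement)]

original = memory[3]
distorted = original.copy()
flipped = rng.choice(64, size=12, replace=False)
distorted[flipped] ^= 1                        # corrupt 12 of 64 bits
assert np.array_equal(recall(distorted), original)
```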
- Unsupervised Anomaly Detection in Time Series with Recurrent Neural Networks
Josef Haddad and Carl Piehl. Degree project in technology, first cycle, 15 credits, Bachelor in Computer Science. KTH Royal Institute of Technology, School of Electrical Engineering and Computer Science, Stockholm, Sweden. Date: June 7, 2019. Supervisor: Pawel Herman. Examiner: Örjan Ekeberg. Swedish title: Oövervakad avvikelsedetektion i tidsserier med neurala nätverk (Unsupervised anomaly detection in time series with neural networks).
Abstract: Artificial neural networks (ANN) have been successfully applied to a wide range of problems. However, most ANN-based models do not attempt to model the brain in detail, though some do. An example of a biologically constrained ANN is Hierarchical Temporal Memory (HTM). This study applies HTM and Long Short-Term Memory (LSTM) to anomaly detection problems in time series in order to compare their performance on this task. The anomalies are restricted to point anomalies and the time series are univariate. Pre-existing implementations that utilise these networks for unsupervised anomaly detection in time series are used in this study. We primarily use our own synthetic data sets in order to discover the networks' robustness to noise and how they compare to each other with regard to different characteristics of the time series. Our results show that both networks can handle noisy time series, and the difference in performance regarding noise robustness is not significant for the time series used in the study. LSTM outperforms HTM in detecting point anomalies on our synthetic time series with a sine-curve trend, but which of the two networks performs best overall remains inconclusive.
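The scheme both networks implement for this task reduces to two steps: predict the next value, then flag points whose prediction error is unusually large. A minimal sketch follows, applied to a synthetic sine-plus-noise series of the kind the thesis describes; a moving-average predictor stands in for the LSTM and HTM models, an assumption made for brevity rather than the authors' setup.

```python
import numpy as np

# Prediction-based point-anomaly detection on a synthetic sine series.
# A moving-average predictor stands in for the LSTM/HTM models compared
# in the thesis (an assumption for brevity, not the authors' code).
rng = np.random.default_rng(0)
t = np.arange(1000)
series = np.sin(2 * np.pi * t / 50) + rng.normal(0, 0.1, t.size)
series[400] += 2.0                               # inject a point anomaly

window = 5
preds = np.array([series[i - window:i].mean()    # predict the next value
                  for i in range(window, t.size)])
errors = np.abs(series[window:] - preds)         # prediction error
threshold = errors.mean() + 4 * errors.std()     # assumed threshold rule
anomalies = np.where(errors > threshold)[0] + window
print(anomalies)                                 # index 400 stands out
```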
- Neuromorphic Architecture for the Hierarchical Temporal Memory
Abdullah M. Zyarah, Student Member, IEEE, and Dhireesha Kudithipudi, Senior Member, IEEE. Neuromorphic AI Laboratory, Rochester Institute of Technology. IEEE Transactions on Emerging Topics in Computational Intelligence.
Abstract: A biomimetic machine intelligence algorithm that holds promise in creating invariant representations of spatiotemporal input streams is the hierarchical temporal memory (HTM). This unsupervised online algorithm has been demonstrated on several machine-learning tasks, including anomaly detection. Significant effort has been made in formalizing and applying the HTM algorithm to different classes of problems. There are few early explorations of the HTM hardware architecture, especially for the earlier version of the spatial pooler of the HTM algorithm. In this article, we present a full-scale HTM architecture for both the spatial pooler and the temporal memory. A synthetic synapse design is proposed to address the potential and dynamic interconnections occurring during learning. The architecture is interwoven with parallel cells and columns that enable high processing speed for the HTM.
... recognition and classification [3]–[5], prediction [6], natural language processing, and anomaly detection [7], [8]. At a higher abstraction, HTM is basically a memory-based system that can be trained on a sequence of events that vary over time. In the algorithmic model, this is achieved using two core units, the spatial pooler (SP) and the temporal memory (TM), together called the cortical learning algorithm (CLA). The SP is responsible for transforming the input data into a sparse distributed representation (SDR) with fixed sparsity, whereas the TM learns sequences and makes predictions [9]. A few research groups have implemented the first-generation Bayesian HTM. Kenneth et al., in 2007, implemented the ...
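The SP's role summarized above, mapping a binary input onto an SDR with fixed sparsity, can be sketched in a few lines of software. This is a toy of the algorithmic idea only, not the paper's neuromorphic hardware; the sizes, sparsity, and permanence threshold are assumed values.

```python
import numpy as np

# Toy spatial pooler (algorithmic sketch only, not the paper's circuit):
# each column connects to a random subset of input bits; the k columns
# with the highest overlap win, yielding an SDR with fixed sparsity.
rng = np.random.default_rng(1)
n_inputs, n_columns, k = 256, 64, 4              # assumed sizes

potential = rng.random((n_columns, n_inputs)) < 0.3    # potential synapses
permanence = rng.random((n_columns, n_inputs)) * potential
connected = (permanence > 0.15).astype(int)            # connected synapses

x = (rng.random(n_inputs) < 0.1).astype(int)     # a binary input vector
overlap = connected @ x                          # per-column overlap score
sdr = np.zeros(n_columns, dtype=int)
sdr[np.argsort(overlap)[-k:]] = 1                # k winners take all
print(int(sdr.sum()), "active columns of", n_columns)   # always k active
```

The winner-take-all step is what guarantees the fixed sparsity the abstract mentions: exactly k of the columns are active regardless of the input.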
- Keith A. Schneider (curriculum vitae)
orcid.org/0000-0001-7120-3380 | [email protected] | (302) 300-7043
77 E. Delaware Ave., Room 233, Newark, DE 19716
Education:
PhD, Brain & Cognitive Sciences, University of Rochester, Rochester, NY, 2002
MA, Brain & Cognitive Sciences, University of Rochester, Rochester, NY, 2000
MA, Astronomy, Boston University, Boston, MA, 1996
BS, Physics, California Institute of Technology, Pasadena, CA, 1994
Positions:
2020– Professor, Department of Psychological & Brain Sciences, University of Delaware, Newark, DE
2016– Director, Center for Biomedical & Brain Imaging, University of Delaware, Newark, DE
2016–2020 Associate Professor, Department of Psychological & Brain Sciences, University of Delaware, Newark, DE
2010–2016 Director, York MRI Facility, York University, Toronto, ON, Canada
2010–2016 Associate Professor, Department of Biology and Centre for Vision Research, York University, Toronto, ON, Canada
2009–2010 Assistant Professor, Department of Psychological Sciences, University of Missouri, Columbia, MO
2005–2008 Assistant Professor (Research), Rochester Center for Brain Imaging and Center for Visual Science, University of Rochester, Rochester, NY
2002–2005 Postdoctoral Fellow, Princeton University, Princeton, NJ
2002 Postdoctoral Fellow, University of Rochester, Rochester, NY
Funding:
NIH/NEI 1R01EY028266, 2018–2023, "Directly testing the magnocellular hypothesis of dyslexia", $1,912,669, PI
NIH COBRE 2P20GM103653-06, 2017–2022, "Renewal of Delaware Center for Neuroscience Research", $3,372,118, Co-PI (Core Director)
NVIDIA GPU Grant, 2016, Titan X Pascal hardware, $1,200, PI
NSERC Discovery Grant (Canada), 2012–2016, "Structural and functional imaging of the human thalamus", $135,000, PI
NSERC CREATE (Canada), 2011–2016, "Vision Science and Applications", $1,650,000, one of nine co-PIs
- Prediction and Postdiction
Visual Prediction: Psychophysics and Neurophysiology of Compensation for Time Delays. Behavioral and Brain Sciences (2008) 31, 179–239. doi:10.1017/S0140525X08003804
Romi Nijhawan, Department of Psychology, University of Sussex, Falmer, East Sussex, BN1 9QH, United Kingdom. [email protected] | http://www.sussex.ac.uk/psychology/profile116415.html
Abstract: A necessary consequence of the nature of neural transmission systems is that, as change in the physical state of a time-varying event takes place, delays produce error between the instantaneous registered state and the external state. Another source of delay is the transmission of internal motor commands to muscles and the inertia of the musculoskeletal system. How does the central nervous system compensate for these pervasive delays? Although it has been argued that delay compensation occurs late in the motor planning stages, even the earliest visual processes, such as phototransduction, contribute significantly to delays. I argue that compensation is not an exclusive property of the motor system, but rather a pervasive feature of central nervous system (CNS) organization. Although the motor planning system may contain a highly flexible compensation mechanism, accounting not just for delays but also for variability in delays (e.g., those resulting from variations in luminance contrast, internal body temperature, muscle fatigue, etc.), visual mechanisms also contribute to compensation. Previous suggestions of this notion of "visual prediction" led to a lively debate producing re-examination of previous arguments, new analyses, and review of the experiments presented here. Understanding visual prediction will inform our theories of sensory processes and visual perception, and will impact our notion of visual awareness.
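The magnitude of the problem, and of the compensation argued for here, follows from simple arithmetic: an uncompensated position estimate lags a moving target by velocity times delay. The numbers below are assumed for illustration and are not figures from the article.

```python
# Illustrative arithmetic for delay compensation (assumed numbers, not
# values from the article): without compensation, a position estimate
# lags a moving target by velocity * delay; extrapolation cancels the lag.
delay = 0.100        # s, assumed total sensory delay
velocity = 10.0      # deg/s, assumed target speed

registered = 5.0                  # deg, position carried by the delayed signal
lag = velocity * delay            # 1.0 deg of mislocalization if uncorrected
extrapolated = registered + lag   # compensated estimate of current position
print(f"uncompensated error: {lag:.1f} deg")
print(f"extrapolated estimate: {extrapolated:.1f} deg")
```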
- The Neuroscience of Human Intelligence Differences
Deary, I. J., Penke, L., & Johnson, W. (2010). The neuroscience of human intelligence differences. Nature Reviews Neuroscience, 11, 201–211. doi:10.1038/nrn2793 (peer-reviewed accepted manuscript via Edinburgh Research Explorer).
Ian J. Deary, Lars Penke and Wendy Johnson, Centre for Cognitive Ageing and Cognitive Epidemiology, Department of Psychology, University of Edinburgh, Edinburgh EH4 2EE, Scotland, UK. All authors contributed equally to the work.
- The Flash-Lag Effect and Related Mislocalizations: Findings, Properties, and Theories
Timothy L. Hubbard, Texas Christian University. Psychological Bulletin (2014), Vol. 140, No. 1, 308–338. doi:10.1037/a0032899
Abstract: If an observer sees a flashed (briefly presented) object that is aligned with a moving target, the perceived position of the flashed object usually lags the perceived position of the moving target. This flash-lag effect has been suggested to reflect how an observer compensates for delays in perception that are due to neural processing times and is thus able to interact with dynamic stimuli in real time. Characteristics of the stimulus and of the observer that influence the flash-lag effect are reviewed, and the sensitivity or robustness of the flash-lag effect to numerous variables is discussed. Properties of the flash-lag effect, and how it might be related to several other perceptual and cognitive processes and phenomena, are considered. Unresolved empirical issues are noted. Theories of the flash-lag effect are reviewed, and evidence inconsistent with each theory is noted. The flash-lag effect appears to involve low-level perceptual processes and high-level cognitive processes, reflects the operation of multiple mechanisms, occurs in numerous stimulus dimensions, and occurs within and across multiple modalities. It is suggested that the flash-lag effect derives from more basic mislocalizations of the moving target or flashed object, and that understanding and analysis of the flash-lag effect should focus on these more basic mislocalizations rather than on the relationship between the moving target and the flashed object.
- Shimojo's C.V.
Curriculum Vitae of Shinsuke Shimojo, Ph.D. (October 2, 2020)
Address: 837 San Rafael Terrace, Pasadena, CA 91105. Phone: (626) 403-7417 (home), (626) 395-3324 (office). Fax: (626) 792-8583 (office). E-mail: [email protected]
Birthplace: Tokyo. Birthdate: April 1, 1955. Nationality: Japanese.
Education:
B.A., University of Tokyo, 1978
M.A., Experimental Psychology, University of Tokyo, 1980
Ph.D., Experimental Psychology, Massachusetts Institute of Technology, 1985
Professional Experience:
1981–1982 Visiting Scholar, Department of Psychology, Massachusetts Institute of Technology, Cambridge, MA
1982–1983 Research Affiliate, Department of Psychology, Massachusetts Institute of Technology, Cambridge, MA
1983–1985 Teaching Assistant, Department of Psychology, Massachusetts Institute of Technology, Cambridge, MA
1986–1987 Postdoctoral Fellow, Department of Ophthalmology, Nagoya University, Nagoya, Japan
1986–1989 Postdoctoral Fellow, Smith-Kettlewell Eye Research Institute, San Francisco, CA
1989–1997 Associate Professor, Department of Psychology / Department of Life Sciences (Psychology), Graduate School of Arts & Sciences, University of Tokyo, Tokyo, Japan
1989–1993 Fellow, Department of Psychology, Harvard University, Cambridge, MA
1993–1994 Visiting Scientist, Dept. of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA
1997–1998 Associate Professor, Division of Biology / Computation & Neural Systems, California ...
- Poster Session 1
Scaling Depression with Psychophysical Scaling: A Comparison Between the Borg CR Scale® (CR100, centiMax®) and PHQ-9 on a Non-Clinical Sample
Elisabet Borg and Jessica Sundell, Department of Psychology, Stockholm University, SE-10691 Stockholm, Sweden. <[email protected]>
Abstract: A non-clinical sample (n = 71) answered an online survey containing the Patient Health Questionnaire-9 (PHQ-9), which rates the frequency of symptoms of depression (DSM-IV). The questions were also adapted for the Borg CR Scale® (CR100, centiMax®) (0–100), a general intensity scale with verbal anchors from a minimal to a maximal intensity, placed in agreement with a numerical scale to give ratio data. The cut-off score of PHQ-9 ≥ 10 was found to correspond to ≥ 29 cM. Cronbach's alpha for both scales was high (0.87), and the correlation between the scales was r = 0.78 (rs = 0.69). Despite restriction of range, the cM scale works well for scaling depression, with added possibilities for valuable data analysis.
According to the World Health Organisation (2016), depression is a growing health problem around the globe. Since it is one of the most prevalent emotional disorders, can cause considerable impairment in an individual's capacity to deal with his or her regular duties, and at its worst is a major risk factor for suicide attempt and suicide, correct and early diagnosis and treatment is very important (e.g., APA, 2013; Ferrari et al., 2013; Richards, 2011). Major depressive disorder (MDD) is characterized by distinct episodes of at least two weeks' duration involving changes in affect, cognition and neurovegetative functions, and is described as involving sadness, loss of pleasure or interest, fatigue, low self-esteem and self-blame, disturbed appetite and sleep, in addition to poor concentration (DSM-V, APA, 2013; WHO, 2016).
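The statistics reported in the abstract (Cronbach's alpha and the Pearson and Spearman correlations) are standard and easy to reproduce. A minimal sketch follows, using fabricated stand-in responses since the study's raw data are not available here, just to make the quantities concrete.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Reproducing the abstract's statistics on fabricated stand-in data (the
# study's raw responses are not available here), to make them concrete.
def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
factor = rng.normal(size=(71, 1))                  # shared severity factor
phq = factor + rng.normal(0, 1.0, size=(71, 9))    # nine PHQ-9 items
cmx = factor + rng.normal(0, 1.0, size=(71, 9))    # nine centiMax items

print("alpha:", round(cronbach_alpha(phq), 2))
print("Pearson r:", round(pearsonr(phq.sum(1), cmx.sum(1))[0], 2))
print("Spearman rs:", round(spearmanr(phq.sum(1), cmx.sum(1))[0], 2))
```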
- On Intelligence as Memory
Jerome A. Feldman, International Computer Science Institute, Berkeley, CA 94704-1198, USA. Book review of Jeff Hawkins and Sandra Blakeslee, On Intelligence, Times Books, 2004. Artificial Intelligence 169 (2005) 181–183. www.elsevier.com/locate/artint. Available online 3 November 2005.
On Intelligence by Jeff Hawkins with Sandra Blakeslee has been inspirational for non-scientists as well as some of our most distinguished biologists, as can be seen from the web site (http://www.onintelligence.org). The book is engagingly written as a first-person memoir of one computer engineer's search for enlightenment on how human intelligence is computed by our brains. The central insight is important: much of our intelligence comes from the ability to recognize complex situations and to predict their possible outcomes. There is something fundamental about the brain and neural computation that makes us intelligent, and AI should be studying it. Hawkins actually understates the power of human associative memory. Because of the massive parallelism and connectivity, the brain essentially reconfigures itself to be constantly sensitive to the current context and goals [1]. For example, when you are planning to buy some kind of car, you start noticing them. The book is surely right that better AI systems would follow if we could develop programs that were more like human memory. For whatever reason, memory as such is no longer studied much in AI; the Russell and Norvig [3] text has one index item for memory, and that refers to semantics. From a scientific AI/Cognitive Science perspective, the book fails to tackle most of the questions of interest.
- Motion Extrapolation in Visual Processing: Lessons from 25 Years of Flash-Lag Debate
Hinze Hogendoorn, Melbourne School of Psychological Sciences, University of Melbourne, Melbourne, 3010 Victoria, Australia. The Journal of Neuroscience, July 22, 2020, 40(30):5698–5705 (Viewpoints).
Because of the delays inherent in neural transmission, the brain needs time to process incoming visual information. If these delays were not somehow compensated, we would consistently mislocalize moving objects behind their physical positions. Twenty-five years ago, Nijhawan used a perceptual illusion he called the flash-lag effect (FLE) to argue that the brain's visual system solves this computational challenge by extrapolating the position of moving objects (Nijhawan, 1994). Although motion extrapolation had been proposed a decade earlier (e.g., Finke et al., 1986), the proposal that it caused the FLE and functioned to compensate for computational delays was hotly debated in the years that followed, with several alternative interpretations put forth to explain the effect. Here I argue, 25 years later, that evidence from behavioral, computational, and particularly recent functional neuroimaging studies converges to support the existence of motion extrapolation mechanisms in the visual system, as well as their causal involvement in the FLE. First, findings that were initially argued to challenge the motion extrapolation model of the FLE have since been explained, and those explanations have been tested and corroborated by more recent findings. Second, motion extrapolation explains the spatial shifts observed in several FLE conditions that cannot be explained by alternative (temporal) models of the FLE. Finally, neural mechanisms that actually perform motion extrapolation have been identified at multiple levels of the visual system, in multiple species, and with multiple different methods.