Emergent Intelligence from Neuromorphic Complexity and Synthetic Synapses in Nanowire Networks

Pages: 16

File type: PDF, size: 1020 kB

Emergent intelligence from neuromorphic complexity and synthetic synapses in nanowire networks
AIA2019, ESO, Garching, 25 July 2019
Zdenka Kuncic, School of Physics and Sydney Nano Institute, The University of Sydney

Outline
1) Background: AI in historical context
2) Introduction: neuromorphic computing
3) Synthetic neural networks with complex topology and emergent dynamics
4) Training & learning in hardware

AI in historical context
§ 1940s: Alan Turing first proposes "brain-inspired" machine intelligence
§ 1950s: Frank Rosenblatt (Cornell) proposes the "perceptron" neuron model
§ 1960s: Marvin Minsky (MIT) argues for multi-layer (feedforward) networks
§ 1970s: AI winter
§ 1980s: resurgence
§ 1990s: Carver Mead (Caltech) pioneers "neuromorphic engineering" (Proc. IEEE, 1990)

Deep learning in 1997: IBM's Deep Blue defeats Garry Kasparov. (Image credit: Bernard H. Humm, in Applied Artificial Intelligence.)

AI in the 21st century

The AI "holy grail": general intelligence
§ How to realise more brain-like information processing?
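Rosenblatt's perceptron, mentioned in the timeline above, fits in a few lines of code. The sketch below is an illustrative reconstruction, not material from the talk; the toy data (logical AND), learning rate, and epoch count are invented for the example.

```python
import numpy as np

# Rosenblatt perceptron: y = step(w.x + b), with weights nudged on each mistake.
def train_perceptron(X, y, epochs=20, lr=0.1):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            err = target - pred          # -1, 0, or +1
            w += lr * err * xi           # classic perceptron update rule
            b += lr * err
    return w, b

# Linearly separable toy data: logical AND.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
preds = [1 if xi @ w + b > 0 else 0 for xi in X]
print(preds)  # learns AND: [0, 0, 0, 1]
```

Minsky's later point was precisely that a single such unit can only draw one line through the input space, which is what motivated multi-layer networks.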
Machine computation versus human thinking:
§ Machine computation: deterministic, accurate, repetitive, static.
§ Human thinking: non-deterministic, creative, adaptive, dynamic.

Neuron Review: Neuroscience-Inspired Artificial Intelligence
Demis Hassabis, Dharshan Kumaran, Christopher Summerfield, and Matthew Botvinick
DeepMind, 5 New Street Square, London, UK; Gatsby Computational Neuroscience Unit, 25 Howland Street, London, UK; Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London, UK; Department of Experimental Psychology, University of Oxford, Oxford, UK. Correspondence: [email protected]. http://dx.doi.org/10.1016/j.neuron.2017.06.011. Neuron 95, July 19, 2017. © 2017 Published by Elsevier Inc.

The fields of neuroscience and artificial intelligence (AI) have a long and intertwined history. In more recent times, however, communication and collaboration between the two fields has become less commonplace. In this article, we argue that better understanding biological brains could play a vital role in building intelligent machines. We survey historical interactions between the AI and neuroscience fields and emphasize current advances in AI that have been inspired by the study of neural computation in humans and other animals. We conclude by highlighting shared themes that may be key for advancing future research in both fields.

In recent years, rapid progress has been made in the related fields of neuroscience and artificial intelligence (AI). At the dawn of the computer age, work on AI was inextricably intertwined with neuroscience and psychology, and many of the early pioneers straddled both fields, with collaborations between these disciplines proving highly productive (Churchland and Sejnowski, 1988; Hebb, 1949; Hinton et al., 1986; Hopfield, 1982; McCulloch and Pitts, 1943; Turing, 1950). However, more recently, the interaction has become much less commonplace, as both subjects have grown enormously in complexity and disciplinary boundaries have solidified. In this review, we argue for the critical and ongoing importance of neuroscience in generating ideas that will accelerate and guide AI research (see Hassabis commentary in Brooks et al., 2012).

We begin with the premise that building human-level general AI (or "Turing-powerful" intelligent systems; Turing, 1936) is a daunting task, because the search space of possible solutions is vast and likely only very sparsely populated. We argue that this therefore underscores the utility of scrutinizing the inner workings of the human brain, the only existing proof that such an intelligence is even possible. Studying animal cognition and its neural implementation also has a vital role to play, as it can provide a window into various important aspects of higher-level general intelligence.

The benefits to developing AI of closely examining biological intelligence are two-fold. First, neuroscience provides a rich source of inspiration for new types of algorithms and architectures, independent of and complementary to the mathematical and logic-based methods and ideas that have largely dominated traditional approaches to AI. For example, were a new facet of biological computation found to be critical to supporting a cognitive function, then we would consider it an excellent candidate for incorporation into artificial systems. Second, neuroscience can provide validation of AI techniques that already exist. If a known algorithm is subsequently found to be implemented in the brain, then that is strong support for its plausibility as an integral component of an overall general intelligence system. Such clues can be critical to a long-term research program when determining where to allocate resources most productively. For example, if an algorithm is not quite attaining the level of performance required or expected, but we observe it is core to the functioning of the brain, then we can surmise that redoubled engineering efforts geared to making it work in artificial systems are likely to pay off.

Of course, from a practical standpoint of building an AI system, we need not slavishly enforce adherence to biological plausibility. From an engineering perspective, what works is ultimately all that matters. For our purposes then, biological plausibility is a guide, not a strict requirement. What we are interested in is a systems neuroscience-level understanding of the brain, namely the algorithms, architectures, functions, and representations it utilizes. This roughly corresponds to the top two levels of the three levels of analysis that Marr famously stated are required to understand any complex biological system (Marr and Poggio, 1976): the goals of the system (the computational level) and the process and computations that realize this goal (the algorithmic level). The precise mechanisms by which this is physically realized in a biological substrate are less relevant here (the implementation level). Note this is where our approach to neuroscience-inspired AI differs from other initiatives, such as the Blue Brain Project (Markram, 2006) or the field of neuromorphic computing systems (Esser et al., 2016), which attempt to closely mimic or directly reverse engineer the specifics of neural circuits (albeit with different goals in mind). By focusing on the computational and algorithmic levels, we gain transferrable insights into general mechanisms of brain function, while leaving room to accommodate the distinctive opportunities and challenges that arise when building intelligent machines in silico.

The following sections unpack these points by considering the past, present, and future of the AI-neuroscience interface. Before beginning, we offer a clarification. Throughout this article, we employ the terms "neuroscience" and "AI." We use these terms in the widest possible sense. When we say neuroscience, we mean to include all fields that are involved with the study of the brain, the behaviors that it generates, and the mechanisms by which it does so, including cognitive neuroscience, systems neuroscience and psychology. When we say AI, we mean work …

Neuromorphic computing
§ Beyond von Neumann: neuromorphic chip architectures (IBM TrueNorth chips, Intel Loihi chip)
§ Human Brain Project: neuromorphic chips and electronic circuitry for AI applications (https://www.humanbrainproject.eu/en/silicon-brains/)

An alternate approach towards brain-like intelligence, beyond silicon?

Neurodynamics + nanotechnology
§ The brain is a complex physical system whose structure + function are intricately linked → emergent phenomena
§ Bottom-up self-assembly of nanomaterials creates biomimetic structures → neural network-like electronic circuitry

Neuromorphic nanowire networks (Demis et al., Nanotech. 2015)
§ Nanowires self-assemble into a complex, densely interconnected network, with a topology similar to a biological neural network
§ When electrically stimulated, junctions respond like "synthetic synapses"
§ Key features: topology of the network structure and adaptive synthetic synapses (simulated graph representation; Ullman, Science 2019)

Biological neural network models (Lynn & Bassett, Nat. Rev. Phys. 2019)
§ Network topology and connectivity (~10^11 neurons): a "small-world" network (between ordered and random) is proposed to be optimal for synchronizing neural activity between different regions of the human brain. (Image credit: A. Loeffler, PhD student)

Neuromorphic nanowire networks: modelling
§ Synthetic synapses: V = I R(λ), dλ/dt = V → the state variable λ(t) depends on history → memory of past states
§ Kirchhoff's circuit laws: ∑ I = 0 at every node, ∑ V = 0 around every loop
§ Use the adjacency matrix (structural connectivity) to solve the network dynamics
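The modelling loop sketched in the last slide, solve Kirchhoff's laws over the adjacency matrix, then evolve each junction's state variable λ, can be illustrated with a toy network. Everything below (the 6-node graph, the linear G(λ) map, the time step, the pure dλ/dt = V rule without thresholds or decay) is an invented minimal sketch, not the model used in the talk.

```python
import numpy as np

# Toy nanowire network: nodes are wires, edges are memristive junctions.
N = 6
A = np.zeros((N, N))
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 2), (1, 4), (2, 5)]
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

lam = np.zeros((N, N))                  # junction state variable lambda (history)

def conductances(lam, g_min=0.01, g_max=1.0, lam_max=5.0):
    # Junction conductance grows with |lambda|: a crude stand-in for
    # filament growth at a metal-insulator-metal junction.
    g = g_min + (g_max - g_min) * np.clip(np.abs(lam) / lam_max, 0, 1)
    return g * A                        # only physical junctions conduct

def solve_voltages(G, v_src, src=0, gnd=5):
    # Kirchhoff's current law at each node: L v = 0, except at the two
    # electrodes, whose rows are overwritten with Dirichlet conditions.
    L = np.diag(G.sum(axis=1)) - G      # graph Laplacian = nodal conductance matrix
    b = np.zeros(N)
    for node, val in [(src, v_src), (gnd, 0.0)]:
        L[node, :] = 0.0
        L[node, node] = 1.0
        b[node] = val
    return np.linalg.solve(L, b)

dt = 0.1
for _ in range(100):                    # drive the network with a DC bias
    v = solve_voltages(conductances(lam), v_src=1.0)
    dv = v[:, None] - v[None, :]        # voltage drop across every junction
    lam += dt * dv * A                  # d(lambda)/dt = V at each junction

v = solve_voltages(conductances(lam), v_src=1.0)
print(v)                                # electrode voltages pinned at 1.0 and 0.0
```

As junctions along high-current paths strengthen, conductance concentrates on favoured routes, which is the history-dependent "memory of past states" the slide refers to.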
Recommended publications
  • Lecture Note on Deep Learning and Quantum Many-Body Computation
    Lecture Note on Deep Learning and Quantum Many-Body Computation. Jin-Guo Liu, Shuo-Hui Li, and Lei Wang, Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China. November 23, 2018.
    Abstract: This note introduces deep learning from a computational quantum physicist's perspective. The focus is on deep learning's impact on quantum many-body computation, and vice versa. The latest version of the note is at http://wangleiphy.github.io/. Please send comments, suggestions and corrections to [email protected].
    Contents: 1. Introduction; 2. Discriminative learning (data representation; model: artificial neural networks; cost function; optimization, including back-propagation and gradient descent; understanding, visualization and applications beyond classification); 3. Generative modeling (unsupervised probabilistic modeling; a generative model zoo: Boltzmann machines, autoregressive models, normalizing flows, variational autoencoders, tensor networks, generative adversarial networks; summary); 4. Applications to quantum many-body physics and more (material and chemistry discoveries; density functional theory; "phase" recognition; variational ansatz; renormalization group; Monte Carlo update proposals; tensor networks; quantum machine learning; miscellaneous); 5. Hands-on session (computation graph and back propagation; deep learning libraries; generative modeling using normalizing flows; restricted Boltzmann machine for image restoration; neural network as a quantum wave function ansatz); 6. Challenges ahead; 7. Resources.
    The introduction opens with the inclusion: Deep Learning (DL) ⊂ Machine Learning (ML) ⊂ Artificial Intelligence (AI).
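The "cost function + optimization" recipe that the note's discriminative-learning chapter covers reduces, in its simplest form, to gradient descent on a least-squares fit. The example below is an invented illustration (the data, learning rate, and iteration count are made up), not taken from the lecture note.

```python
import numpy as np

# Gradient descent on a least-squares fit y = w*x + b, the simplest
# instance of the cost-function-plus-optimization recipe.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0                          # target line, known in advance

w, b, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    pred = w * x + b
    grad_w = 2 * np.mean((pred - y) * x)   # d(MSE)/dw
    grad_b = 2 * np.mean(pred - y)         # d(MSE)/db
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))            # converges to 2.0 1.0
```

Back-propagation, covered in the same chapter, is the same idea applied layer by layer through the chain rule.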
  • Neural Network Transformation and Co-Design Under Neuromorphic Hardware Constraints
    NEUTRAMS: Neural Network Transformation and Co-design under Neuromorphic Hardware Constraints. Yu Ji*, YouHui Zhang*‡, ShuangChen Li†, Ping Chi†, CiHang Jiang*, Peng Qu*, Yuan Xie†, WenGuang Chen*‡. Email: [email protected], [email protected]. *Department of Computer Science and Technology, Tsinghua University, P.R. China; †Department of Electrical and Computer Engineering, University of California at Santa Barbara, USA; ‡Center for Brain-Inspired Computing Research, Tsinghua University, P.R. China.
    Abstract: With the recent reincarnations of neuromorphic computing comes the promise of a new computing paradigm, with a focus on the design and fabrication of neuromorphic chips. A key challenge in design, however, is that programming such chips is difficult. This paper proposes a systematic methodology with a set of tools to address this challenge. The proposed toolset is called NEUTRAMS (Neural network Transformation, Mapping and Simulation), and includes three key components: a neural network (NN) transformation algorithm, a configurable clock-driven simulator of neuromorphic chips, and an optimized runtime tool that maps NNs onto the target hardware for better resource utilization. To address the challenges of hardware constraints on implementing NN models (such as the maximum …
    A chip multiprocessor (CMP) with a Network-on-Chip (NoC) has emerged as a promising platform for neural network simulation, motivating many recent studies of neuromorphic chips [4]–[10]. The design of neuromorphic chips is promising to enable a new computing paradigm. One major issue is that neuromorphic hardware usually places constraints on NN models (such as the maximum fan-in/fan-out of a single neuron, the limited range of synaptic weights) that it can support, and the hardware types of neurons or activation functions are usually simpler than the software counterparts.
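The kind of transformation such a toolchain performs, squeezing trained weights into hardware limits on precision, range, and fan-in, can be sketched in a few lines. The constraint numbers below (16 levels, unit weight range, fan-in of 4) are invented for illustration; they are not the constraints of NEUTRAMS or of any particular chip.

```python
import numpy as np

rng = np.random.default_rng(1)

# A trained weight matrix to be mapped onto constrained hardware.
W = rng.normal(0, 1, size=(8, 8))

def to_hardware(W, levels=16, w_max=1.0, max_fanin=4):
    """Clip, quantize to `levels` discrete values, and prune each neuron to
    `max_fanin` inputs - three typical neuromorphic-hardware constraints."""
    Wc = np.clip(W, -w_max, w_max)
    step = 2 * w_max / (levels - 1)
    Wq = np.clip(np.round(Wc / step) * step, -w_max, w_max)  # uniform quantization
    for row in Wq:                        # keep only the strongest inputs
        drop = np.argsort(np.abs(row))[:-max_fanin]
        row[drop] = 0.0
    return Wq

Wh = to_hardware(W)
print((Wh != 0).sum(axis=1).max())        # fan-in per neuron is at most 4
```

The real co-design problem is then to retrain or re-map the network so that accuracy survives these lossy steps, which is what the paper's transformation algorithm addresses.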
  • AI Computer Wraps up 4-1 Victory Against Human Champion Nature Reports from Alphago's Victory in Seoul
    The Go Files: AI computer wraps up 4-1 victory against human champion Nature reports from AlphaGo's victory in Seoul. Tanguy Chouard 15 March 2016 SEOUL, SOUTH KOREA Google DeepMind Lee Sedol, who has lost 4-1 to AlphaGo. Tanguy Chouard, an editor with Nature, saw Google-DeepMind’s AI system AlphaGo defeat a human professional for the first time last year at the ancient board game Go. This week, he is watching top professional Lee Sedol take on AlphaGo, in Seoul, for a $1 million prize. It’s all over at the Four Seasons Hotel in Seoul, where this morning AlphaGo wrapped up a 4-1 victory over Lee Sedol — incidentally, earning itself and its creators an honorary '9-dan professional' degree from the Korean Baduk Association. After winning the first three games, Google-DeepMind's computer looked impregnable. But the last two games may have revealed some weaknesses in its makeup. Game four totally changed the Go world’s view on AlphaGo’s dominance because it made it clear that the computer can 'bug' — or at least play very poor moves when on the losing side. It was obvious that Lee felt under much less pressure than in game three. And he adopted a different style, one based on taking large amounts of territory early on rather than immediately going for ‘street fighting’ such as making threats to capture stones. This style – called ‘amashi’ – seems to have paid off, because on move 78, Lee produced a play that somehow slipped under AlphaGo’s radar. David Silver, a scientist at DeepMind who's been leading the development of AlphaGo, said the program estimated its probability as 1 in 10,000.
  • Accelerators and Obstacles to Emerging ML-Enabled Research Fields
    Life at the edge: accelerators and obstacles to emerging ML-enabled research fields. Soukayna Mouatadid and Steve Easterbrook, Department of Computer Science, University of Toronto. [email protected], [email protected]
    Abstract: Machine learning is transforming several scientific disciplines, and has resulted in the emergence of new interdisciplinary data-driven research fields. Surveys of such emerging fields often highlight how machine learning can accelerate scientific discovery. However, a less discussed question is how connections between the parent fields develop and how the emerging interdisciplinary research evolves. This study attempts to answer this question by exploring the interactions between machine learning research and traditional scientific disciplines. We examine different examples of such emerging fields, to identify the obstacles and accelerators that influence interdisciplinary collaborations between the parent fields.
    1 Introduction. In recent decades, several scientific research fields have experienced a deluge of data, which is impacting the way science is conducted. Fields such as neuroscience, psychology or social sciences are being reshaped by the advances in machine learning (ML) and the data processing technologies it provides. In some cases, new research fields have emerged at the intersection of machine learning and more traditional research disciplines. This is the case, for example, for bioinformatics, born from collaborations between biologists and computer scientists in the mid-80s [1], which is now considered a well-established field with an active community, specialized conferences, university departments and research groups. How do these transformations take place? Do they follow similar paths? If yes, how can the advances in such interdisciplinary fields be accelerated? Researchers in the philosophy of science have long been interested in these questions.
  • The Deep Learning Revolution and Its Implications for Computer Architecture and Chip Design
    The Deep Learning Revolution and Its Implications for Computer Architecture and Chip Design. Jeffrey Dean, Google Research. [email protected]
    Abstract: The past decade has seen a remarkable series of advances in machine learning, and in particular deep learning approaches based on artificial neural networks, to improve our abilities to build more accurate systems across a broad range of areas, including computer vision, speech recognition, language translation, and natural language understanding tasks. This paper is a companion paper to a keynote talk at the 2020 International Solid-State Circuits Conference (ISSCC) discussing some of the advances in machine learning, and their implications on the kinds of computational devices we need to build, especially in the post-Moore's Law era. It also discusses some of the ways that machine learning may also be able to help with some aspects of the circuit design process. Finally, it provides a sketch of at least one interesting direction towards much larger-scale multi-task models that are sparsely activated and employ much more dynamic, example- and task-based routing than the machine learning models of today.
    Introduction. The past decade has seen a remarkable series of advances in machine learning (ML), and in particular deep learning approaches based on artificial neural networks, to improve our abilities to build more accurate systems across a broad range of areas [LeCun et al. 2015]. Major areas of significant advances include computer vision [Krizhevsky et al. 2012, Szegedy et al. 2015, He et al. 2016, Real et al. 2017, Tan and Le 2019], speech recognition [Hinton et al. …
  • The Power and Promise of Computers That Learn by Example
    Machine learning: the power and promise of computers that learn by example. Issued: April 2017. DES4702. ISBN: 978-1-78252-259-1. The text of this work is licensed under the terms of the Creative Commons Attribution License, which permits unrestricted use, provided the original author and source are credited (creativecommons.org/licenses/by/4.0); images are not covered by this license. This report can be viewed online at royalsociety.org/machine-learning. Cover image © shulz.
    Contents: Executive summary; Recommendations; Chapter one: Machine learning (systems that learn from data; the Royal Society's machine learning project; what is machine learning?; machine learning in daily life; machine learning, statistics, data science, robotics, and AI; origins and evolution of machine learning; canonical problems in machine learning); Chapter two: Emerging applications of machine learning (potential near-term applications in the public and private sectors; machine learning in research; increasing the UK's absorptive capacity for machine learning); Chapter three: Extracting value from data (machine learning helps extract value from "big data"; creating a data environment to support machine learning; extending the lifecycle …
  • ARCHITECTS of INTELLIGENCE for Xiaoxiao, Elaine, Colin, and Tristan ARCHITECTS of INTELLIGENCE
    Martin Ford, Architects of Intelligence: The Truth About AI from the People Building It. For Xiaoxiao, Elaine, Colin, and Tristan. Copyright © 2018 Packt Publishing. All rights reserved. First published: November 2018. Published by Packt Publishing Ltd., Livery Place, 35 Livery Street, Birmingham B3 2PB, UK. ISBN 978-1-78913-151-2. www.packt.com
    Contents: Introduction; A Brief Introduction to the Vocabulary of Artificial Intelligence; How AI Systems Learn; Yoshua Bengio; Stuart J. …
  • Technology Innovation and the Future of Air Force Intelligence Analysis
    Lance Menthe, Dahlia Anne Goldfeld, Abbie Tingstad, Sherrill Lingel, Edward Geist, Donald Brunk, Amanda Wicker, Sarah Soliman, Balys Gintautas, Anne Stickells, Amado Cordova. Technology Innovation and the Future of Air Force Intelligence Analysis, Volume 2: Technical Analysis and Supporting Material. RR-A341-2. For more information on this publication, visit www.rand.org/t/RRA341-2. ISBN: 978-1-9774-0633-0. Published by the RAND Corporation, Santa Monica, Calif. © 2021 RAND Corporation. Cover: U.S. Marine Corps photo by Cpl. William Chockey; faraktinov, Adobe Stock.
    The RAND Corporation is a research organization that develops solutions to public policy challenges to help make communities throughout the world safer and more secure, healthier and more prosperous. RAND is nonprofit, nonpartisan, and committed to the public interest. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.
  • Deep Learning with Tensorflow 2 and Keras Second Edition
    Deep Learning with TensorFlow 2 and Keras, Second Edition: Regression, ConvNets, GANs, RNNs, NLP, and more with TensorFlow 2 and the Keras API. Antonio Gulli, Amita Kapoor, Sujit Pal. Copyright © 2019 Packt Publishing. All rights reserved. First published: April 2017; second edition: December 2019. Published by Packt Publishing Ltd., Livery Place, 35 Livery Street, Birmingham B3 2PB, UK. ISBN 978-1-83882-341-2. www.packt.com
  • A Sparsity-Driven Backpropagation-Less Learning Framework Using Populations of Spiking Growth Transform Neurons
    Original Research, published 28 July 2021, doi: 10.3389/fnins.2021.715451. A Sparsity-Driven Backpropagation-Less Learning Framework Using Populations of Spiking Growth Transform Neurons. Ahana Gangopadhyay and Shantanu Chakrabartty*, Department of Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, MO, United States. Edited by Siddharth Joshi (University of Notre Dame, United States); reviewed by Wenrui Zhang (University of California, Santa Barbara, United States) and Youhui Zhang (Tsinghua University, China). *Correspondence: [email protected]
    Abstract: Growth-transform (GT) neurons and their population models allow for independent control over the spiking statistics and the transient population dynamics while optimizing a physically plausible distributed energy functional involving continuous-valued neural variables. In this paper we describe a backpropagation-less learning approach to train a network of spiking GT neurons by enforcing sparsity constraints on the overall network spiking activity. The key features of the model and the proposed learning framework are: (a) spike responses are generated as a result of constraint violation and hence can be viewed as Lagrangian parameters; (b) the optimal parameters for a given task can be learned using neurally relevant local learning rules and in an online manner; (c) the network optimizes itself to encode the solution with as few spikes as possible (sparsity); (d) the network optimizes itself to operate at a solution with the maximum dynamic range and away from saturation; and (e) the framework is flexible enough to incorporate additional structural and connectivity constraints on the network. As a result, the proposed formulation is attractive for designing neuromorphic tinyML systems that are constrained in energy, resources, and network structure.
  • Multilayer Spiking Neural Network for Audio Samples Classification Using SpiNNaker
    Multilayer Spiking Neural Network for Audio Samples Classification Using SpiNNaker. Juan Pedro Dominguez-Morales, Angel Jimenez-Fernandez, Antonio Rios-Navarro, Elena Cerezuela-Escudero, Daniel Gutierrez-Galan, Manuel J. Dominguez-Morales, and Gabriel Jimenez-Moreno. Robotic and Technology of Computers Lab, Department of Architecture and Technology of Computers, University of Seville, Seville, Spain. {jpdominguez,ajimenez,arios,ecerezuela,dgutierrez,mdominguez,gaji}@atc.us.es. http://www.atc.us.es
    Abstract: Audio classification has always been an interesting subject of research inside the neuromorphic engineering field. Tools like Nengo or Brian, and hardware platforms like the SpiNNaker board, are rapidly increasing in popularity in the neuromorphic community due to the ease of modelling spiking neural networks with them. In this manuscript a multilayer spiking neural network for audio samples classification using SpiNNaker is presented. The network consists of different leaky integrate-and-fire neuron layers. The connections between them are trained using novel firing-rate-based algorithms and tested using sets of pure tones with frequencies that range from 130.813 to 1396.91 Hz. The hit rate percentage values are obtained after adding a random noise signal to the original pure tone signal. The results show very good classification results (above 85% hit rate) for each class when the signal-to-noise ratio is above 3 decibels, validating the robustness of the network configuration and the training step. Keywords: SpiNNaker; spiking neural network; audio samples classification; spikes; neuromorphic auditory sensor; Address-Event Representation.
    1 Introduction. Neuromorphic engineering is a discipline that studies, designs and implements hardware and software with the aim of mimicking the way in which nervous systems work, focusing its main inspiration on how the brain solves complex problems easily.
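The leaky integrate-and-fire neuron that forms each layer of the network above has a one-line update: the membrane potential leaks toward rest, integrates input current, and fires-and-resets at threshold. The constants below (τ, threshold, drive) are invented for illustration, not the values used on SpiNNaker.

```python
# Leaky integrate-and-fire neuron: v decays with time constant tau,
# integrates input current, and emits a spike (then resets) at threshold.
def lif(current, dt=1.0, tau=20.0, v_th=1.0, v_reset=0.0):
    v, spikes = 0.0, []
    for i in current:
        v += dt * (-v / tau + i)     # leaky integration
        if v >= v_th:
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return spikes

# A constant supra-threshold drive produces a regular spike train.
spikes = lif([0.06] * 100)
print(sum(spikes))
```

A rate-based training rule like the paper's then adjusts connection weights so that the firing rates of output neurons discriminate between the input tone classes.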
  • Neuromorphic Computing Using Emerging Synaptic Devices: a Retrospective Summary and an Outlook
    Electronics Review. Neuromorphic Computing Using Emerging Synaptic Devices: A Retrospective Summary and an Outlook. Jaeyoung Park. School of Computer Science and Electronic Engineering, Handong Global University, Pohang 37554, Korea; [email protected]; Tel.: +82-54-260-1394. School of Electronic Engineering, Soongsil University, Seoul 06978, Korea. Received: 29 June 2020; accepted: 5 August 2020; published: 1 September 2020.
    Abstract: In this paper, emerging memory devices are investigated as promising synaptic devices for neuromorphic computing. Because neuromorphic computing hardware requires high memory density, fast speed, and low power, as well as a unique characteristic that simulates the function of learning by imitating the process of the human brain, memristor devices are considered a promising candidate because of their desirable characteristics. Among them, Phase-change RAM (PRAM), Resistive RAM (ReRAM), Magnetic RAM (MRAM), and Atomic Switch Networks (ASN) are selected for review. Even though memristor devices show such characteristics, the inherent error arising from their physical properties needs to be resolved. This paper suggests adopting an approximate computing approach to deal with the error without degrading the advantages of emerging memory devices. Keywords: neuromorphic computing; memristor; PRAM; ReRAM; MRAM; ASN; synaptic device.
    1 Introduction. Artificial Intelligence (AI), also called machine intelligence, has been researched over the decades, and AI has now become an essential part of industry [1–3]. Artificial intelligence means machines that imitate the cognitive functions of humans, such as learning and problem solving [1,3]. Similarly, neuromorphic computing research imitates the way the human brain computes and stores [4–6].
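The review's central point, that memristive synapses carry inherent device error which approximate computing can tolerate, can be illustrated numerically. The noise model below (Gaussian write error at 5% of full scale) is an invented stand-in, not a measured device characteristic from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# A synaptic weight stored as a memristor conductance is noisy: each write
# lands near, not exactly on, the target level (illustrative noise model).
def program_weights(W, noise=0.05):
    return W + rng.normal(0.0, noise, size=W.shape)

# The core neuromorphic operation: a vector-matrix multiply in a crossbar.
W = rng.normal(0, 1, size=(16, 16))
x = rng.normal(0, 1, size=16)

exact = W @ x
approx = program_weights(W) @ x
rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
print(rel_err)   # small relative error despite noisy writes
```

The approximate-computing argument is that a learning system trained on, or adapted to, such perturbed weights degrades gracefully rather than failing outright, so the device error need not be engineered away entirely.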