
Emergent intelligence from neuromorphic complexity and synthetic synapses in nanowire networks
AIA2019, ESO, Garching, 25 July 2019
Zdenka Kuncic
School of Physics and Sydney Nano Institute, The University of Sydney
Page 1

Outline
1) Background: AI in historical context
2) Introduction: Neuromorphic computing
3) Synthetic neural networks with complex topology and emergent dynamics
4) Training & learning in hardware
Page 2

AI in historical context
§ 1940s: Alan Turing first proposes "brain-inspired" machine intelligence
§ 1950s: Frank Rosenblatt (Cornell) proposes the "perceptron" neuron model
§ 1960s: Marvin Minsky (MIT) argues for multi-layer (feedforward) networks
§ 1970s: AI winter
§ 1980s: resurgence
§ 1990s: Carver Mead (Caltech) pioneers "neuromorphic engineering"
Page 3

AI in historical context
Proc. IEEE 1990
Page 4

Deep learning in 1997
IBM Deep Blue vs. Garry Kasparov
Image credit: Bernard H. Humm, Applied Artificial Intelligence
Page 5

AI in the 21st century
Page 6

AI "holy grail": general intelligence
§ How to realise more brain-like information processing?

Machine computation | Human thinking
Deterministic       | Non-deterministic
Accurate            | Creative
Repetitive          | Adaptive
Static              | Dynamic

Page 7

Neuron Review
Neuroscience-Inspired Artificial Intelligence
Demis Hassabis, Dharshan Kumaran, Christopher Summerfield, and Matthew Botvinick
DeepMind, London; Gatsby Computational Neuroscience Unit, London; Institute of Cognitive Neuroscience, University College London; Department of Experimental Psychology, University of Oxford
Correspondence: [email protected]
http://dx.doi.org/10.1016/j.neuron.2017.06.011

The fields of neuroscience and artificial intelligence (AI) have a long and intertwined history.
In more recent times, however, communication and collaboration between the two fields has become less commonplace. In this article, we argue that better understanding biological brains could play a vital role in building intelligent machines. We survey historical interactions between the AI and neuroscience fields and emphasize current advances in AI that have been inspired by the study of neural computation in humans and other animals. We conclude by highlighting shared themes that may be key for advancing future research in both fields.

In recent years, rapid progress has been made in the related fields of neuroscience and artificial intelligence (AI). At the dawn of the computer age, work on AI was inextricably intertwined with neuroscience and psychology, and many of the early pioneers straddled both fields, with collaborations between these disciplines proving highly productive (Churchland and Sejnowski, 1988; Hebb, 1949; Hinton et al., 1986; Hopfield, 1982; McCulloch and Pitts, 1943; Turing, 1950). However, more recently, the interaction has become much less commonplace, as both subjects have grown enormously in complexity and disciplinary boundaries have solidified. In this review, we argue for the critical and ongoing importance of neuroscience in generating ideas that will accelerate and guide AI research (see Hassabis commentary in Brooks et al., 2012).

We begin with the premise that building human-level general AI (or "Turing-powerful" intelligent systems; Turing, 1936) is a daunting task, because the search space of possible solutions is vast and likely only very sparsely populated. We argue that this therefore underscores the utility of scrutinizing the inner workings of the human brain—the only existing proof that such an intelligence is even possible. Studying animal cognition and its neural implementation also has a vital role to play, as it can provide a window into various important aspects of higher-level general intelligence.

The benefits to developing AI of closely examining biological intelligence are two-fold. First, neuroscience provides a rich source of inspiration for new types of algorithms and architectures, independent of and complementary to the mathematical and logic-based methods and ideas that have largely dominated traditional approaches to AI. For example, were a new facet of biological computation found to be critical to supporting a cognitive function, then we would consider it an excellent candidate for incorporation into artificial systems. Second, neuroscience can provide validation of AI techniques that already exist. If a known algorithm is subsequently found to be implemented in the brain, then that is strong support for its plausibility as an integral component of an overall general intelligence system. Such clues can be critical to a long-term research program when determining where to allocate resources most productively. For example, if an algorithm is not quite attaining the level of performance required or expected, but we observe it is core to the functioning of the brain, then we can surmise that redoubled engineering efforts geared to making it work in artificial systems are likely to pay off.

Of course from a practical standpoint of building an AI system, we need not slavishly enforce adherence to biological plausibility. From an engineering perspective, what works is ultimately all that matters. For our purposes then, biological plausibility is a guide, not a strict requirement. What we are interested in is a systems neuroscience-level understanding of the brain, namely the algorithms, architectures, functions, and representations it utilizes. This roughly corresponds to the top two levels of the three levels of analysis that Marr famously stated are required to understand any complex biological system (Marr and Poggio, 1976): the goals of the system (the computational level) and the process and computations that realize this goal (the algorithmic level). The precise mechanisms by which this is physically realized in a biological substrate are less relevant here (the implementation level). Note this is where our approach to neuroscience-inspired AI differs from other initiatives, such as the Blue Brain Project (Markram, 2006) or the field of neuromorphic computing systems (Esser et al., 2016), which attempt to closely mimic or directly reverse engineer the specifics of neural circuits (albeit with different goals in mind). By focusing on the computational and algorithmic levels, we gain transferrable insights into general mechanisms of brain function, while leaving room to accommodate the distinctive opportunities and challenges that arise when building intelligent machines in silico.

The following sections unpack these points by considering the past, present, and future of the AI-neuroscience interface. Before beginning, we offer a clarification. Throughout this article, we employ the terms "neuroscience" and "AI." We use these terms in the widest possible sense. When we say neuroscience, we mean to include all fields that are involved with the study of the brain, the behaviors that it generates, and the mechanisms by which it does so, including cognitive neuroscience, systems neuroscience and psychology. When we say AI, we mean work
Neuron 95, July 19, 2017 © 2017 Published by Elsevier Inc.
Page 8
Neuromorphic computing
§ Beyond von Neumann: neuromorphic chip architecture
IBM TrueNorth chips
Intel Loihi chip
Page 9

Page 10

Neuromorphic computing
§ Human Brain Project: neuromorphic chips and electronic circuitry for AI applications
https://www.humanbrainproject.eu/en/silicon-brains/
Page 11

An alternative approach towards brain-like intelligence, beyond silicon?
Page 12

Neurodynamics + nanotechnology
§ The brain is a complex physical system whose structure + function are intricately linked → emergent phenomena
§ Bottom-up self-assembly of nano-materials creates bio-mimetic structures → neural network-like electronic circuitry
Page 13

Neuromorphic nanowire networks
Demis et al., Nanotech. 2015
Page 14

Neuromorphic nanowire networks
§ Nanowires self-assemble into a complex, densely interconnected network, with a topology similar to a biological neural network
nanowire network | biological neural network
Page 15

Neuromorphic nanowire networks
§ Nanowires self-assemble into a complex, densely interconnected network, like neurons
§ When electrically stimulated, junctions respond like "synthetic synapses"
nanowire network | biological neural network
Page 16

Neuromorphic nanowire networks
§ Key features: topology of network structure, adaptive synthetic synapses
nanowire network (simulated graph representation)
Ullman, Science 2019
Page 17

Biological neural network models
Lynn & Bassett, Nat. Rev. Phys. 2019
Page 18

Network topology and connectivity (10^11 neurons)
A "small world" network (between ordered and random) is proposed to be optimal for synchronizing neural activity between different regions in the human brain.
Image credit: A. Loeffler (PhD student)
Page 19

Neuromorphic nanowire networks - modelling
§ Synthetic synapses: V = I R(λ) , dλ/dt = V → state variable λ(t) depends on history → memory of past states
§ Kirchhoff's circuit laws: ∑ᵢ Iᵢ = 0 (currents at each node), ∑ᵢ Vᵢ = 0 (voltages around each loop)
§ Use the adjacency matrix (structural connectivity) to solve the network dynamics
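The "small world" claim on the network-topology slide, high clustering like an ordered lattice but short path lengths like a random graph, can be checked numerically. Below is a minimal sketch using the Watts-Strogatz construction in plain Python; the network sizes and rewiring probabilities are illustrative choices, not values from the talk:

```python
# Watts-Strogatz small-world demo: interpolate between an ordered ring
# lattice (p = 0) and a random network (p = 1) by rewiring each edge
# with probability p, then measure clustering and mean path length.
import random
from collections import deque

def watts_strogatz(n, k, p, seed=0):
    """Ring lattice of n nodes, k neighbours each; edges rewired with prob p."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):          # k/2 neighbours on each side
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    for i in range(n):
        for j in list(adj[i]):                   # snapshot: rewire original edges
            if j > i and rng.random() < p:
                candidates = [m for m in range(n) if m != i and m not in adj[i]]
                if candidates:
                    m = rng.choice(candidates)
                    adj[i].discard(j); adj[j].discard(i)
                    adj[i].add(m); adj[m].add(i)
    return adj

def clustering(adj):
    """Average fraction of a node's neighbour pairs that are themselves linked."""
    total = 0.0
    for nbrs in adj.values():
        nb = list(nbrs)
        if len(nb) < 2:
            continue
        links = sum(1 for a in range(len(nb)) for b in range(a + 1, len(nb))
                    if nb[b] in adj[nb[a]])
        total += 2.0 * links / (len(nb) * (len(nb) - 1))
    return total / len(adj)

def mean_path_length(adj):
    """Average BFS distance over all reachable node pairs."""
    total = pairs = 0
    for s in adj:
        dist, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

for p in (0.0, 0.1, 1.0):    # ordered lattice, small-world, near-random
    g = watts_strogatz(200, 8, p)
    print(f"p={p:.1f}  C={clustering(g):.3f}  L={mean_path_length(g):.2f}")
```

In the small-world regime (p ≈ 0.1) clustering stays close to the lattice value while the mean path length collapses toward the random-graph value, which is the "between ordered and random" regime the slide describes as favourable for synchronizing activity between distant regions.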
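The three modelling bullets can be combined into a small numerical sketch. This is an illustrative surrogate, not the group's published model: the network, the conductance law G(λ), and all parameters are assumptions for demonstration. Junction conductance is taken to grow with |λ|, node voltages follow from Kirchhoff's current law written with the weighted graph Laplacian of the adjacency matrix, and each junction integrates dλ/dt = V across it:

```python
# Sketch: a network of memristive "synthetic synapses" driven between two
# electrodes. Kirchhoff's current law at every interior node gives a linear
# system built from the weighted graph Laplacian of the adjacency matrix.
import numpy as np

rng = np.random.default_rng(0)
n = 20

# Structural connectivity: a ring (guarantees a connected network) plus
# random cross-links, stored as a symmetric adjacency matrix.
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
extra = np.triu((rng.random((n, n)) < 0.15).astype(float), 1)
A = np.clip(A + extra + extra.T, 0, 1)

edges = np.argwhere(np.triu(A, 1) > 0)      # one entry per junction (i, j)
lam = np.zeros(len(edges))                  # junction state variables λ

G_OFF, G_ON, LAM_MAX = 1e-3, 1.0, 1.0       # assumed conductance scale

def conductance(lam):
    # Junction conductance rises with |λ|: a simple stand-in for
    # conductive-filament growth at a nanowire-nanowire junction.
    return G_OFF + (G_ON - G_OFF) * np.clip(np.abs(lam) / LAM_MAX, 0.0, 1.0)

def node_voltages(lam, src=0, drn=n // 2, v_bias=1.0):
    """Solve Kirchhoff's laws with the source/drain electrodes held fixed."""
    g = conductance(lam)
    L = np.zeros((n, n))
    for (i, j), gij in zip(edges, g):       # weighted graph Laplacian
        L[i, i] += gij; L[j, j] += gij
        L[i, j] -= gij; L[j, i] -= gij
    v = np.zeros(n); v[src] = v_bias        # drain stays at 0 V
    free = [k for k in range(n) if k not in (src, drn)]
    rhs = -L[np.ix_(free, [src])].ravel() * v_bias
    v[free] = np.linalg.solve(L[np.ix_(free, free)], rhs)
    return v

dt = 0.05
for _ in range(200):                        # forward Euler step of dλ/dt = V
    v = node_voltages(lam)
    lam += dt * (v[edges[:, 0]] - v[edges[:, 1]])
print("max |λ| after drive:", np.abs(lam).max())
```

Because λ accumulates the voltage history of each junction, junctions along favourable source-drain paths strengthen over time; this is the "memory of past states" in the first bullet, realised as history-dependent conductance.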