Neuron Design

Reverse-Engineering the Brain: A Computer Architecture Grand Challenge
J. E. Smith
June 2018, Missoula, MT
[email protected]

The Grand Challenge

"The most powerful computing machine of all is the human brain. Is it possible to design and implement an architecture that mimics the way the human brain works?"
-- Workshop report: Mary J. Irwin and John P. Shen, "Revitalizing Computer Architecture Research," Computing Research Association (2005).

Introduction

❑ The human brain is capable of:
  • Accurate sensory perception
  • High-level reasoning and problem solving
  • Driving complex motor activity
❑ With some very impressive features:
  • Highly energy efficient
  • Flexible – supports a wide variety of cognitive functions
  • Learns dynamically, quickly, and simultaneously with operation
❑ Far exceeds anything conventional machine learning has achieved
  • Will the trajectory of conventional machine learning ever achieve the same capabilities?
  • OR should we seek new approaches based on the way the brain actually works?
❑ After the tutorial, it is hoped that attendees will:
  • Understand the nature of the problem,
  • View it as a computer architecture research problem,
  • Have a firm foundation for initiating study of the problem,
  • Participate in a serious effort to tackle the grand challenge.

Computer Architecture Perspective

❑ This research is at the intersection of computer architecture, theoretical neuroscience, and machine learning
  • It is of major importance to all three
  • And it requires knowledge of all three
❑ Most researchers are likely to be rooted in one of them
  • This will shape the perspective and direction of the research
❑ Our perspective (and biases) come from computer architecture
  • We should tackle the problem from our perspective, using our background and strengths

[Figure: Venn diagram placing "Reverse-Engineering the Brain" at the intersection of Computer Architecture, Theoretical Neuroscience, and Machine Learning]

What does "Reverse-Engineering the Brain" mean?

❑ "Reverse-abstracting the neocortex" is a better way to put it
  • The neocortex is the part of the brain we are interested in
  • Reverse-abstracting is discovering the layers of abstraction, bottom-up

  Silicon Architecture Stack    Neuro Architecture Stack
  --------------------------    -------------------------
  Application Software          Neocortex
  HLL Procedure Hierarchy       Lobes
  Machine Language Software     Region Hierarchy
  Processor Core + Memory       Macro-Columns
  Block Hierarchy               Feedforward/Micro-Columns  <- current research
  Combinational/RTL             Model Neurons
  Logic Gates                   Biological Neurons
  Physical CMOS

Science v. Architecture

❑ Consider two implementations of the brain's computing paradigm:
  • The biological implementation that we each possess
  • A human-engineered, silicon-based implementation that we would like to achieve
❑ From the neuroscience perspective:
  • Goal: discover ever greater and more accurate information regarding biological processes
❑ From a computer architecture perspective:
  • Goal: discover a new computational paradigm
❑ Differences in goals may lead to different approaches

[Figure: the neuro architecture stack again: Neocortex, Lobes, Region Hierarchy, Macro-Columns, Feedforward/Micro-Columns, Model Neurons, Biological Neurons]

Outline

❑ Introductory Remarks
❑ Research Focus: Milestone TNN
  • Classification and Clustering
  • Neural Network Taxonomy
❑ Biological Background
  • Overview
  • Neural Information Coding
  • Sensory Encoding
❑ Fundamental Hypotheses
  • Theory: Time as a Resource
❑ Low-Level Abstractions
  • Excitatory Neuron Models
  • Modeling Inhibition
❑ Column Architecture
❑ Case Studies
  • Recent TNNs
❑ Space-Time Theory and Algebra
❑ Implementations
  • Indirect Implementations: Simulation
  • Direct Implementations: Neuromorphic Circuits
❑ Concluding Remarks

Research Focus: Milestone Network

Milestone Temporal Neural Network (TNN)

❑ We are attempting to solve a huge problem
❑ Reduce it to "bite-size" pieces
  • This is the first "bite"
❑ Feedforward clustering networks (unsupervised training)
  • Most current neuroscientific TNN projects have this target
❑ Sufficient for:
  • Demonstrating TNN capabilities, and
  • Exposing major differences w.r.t. conventional machine learning methods

First, some background.

Spike Communication

❑ Consider systems where information is communicated via transient events
  • e.g., voltage spikes
❑ Method 1: values are encoded as spike rates measured on individual lines
  • Changing spikes on one line only affects the value on that line
❑ Method 2: values are encoded as precise temporal relationships across parallel communication lines
  • Changing spikes on one line may affect values on any/all of the others
❑ The temporal communication method has significant, broad experimental support
  • The rate method does not.
  • (Much) more later

[Figure: spike trains on parallel lines; rate coding encodes the values 5, 2, 8, 4 as per-line spike rates, while temporal coding encodes the values 3, 6, 0, 4 as spike times relative to t = 0]

Temporal Coding

❑ Use relative timing relationships across multiple parallel lines to encode information
❑ Changing the spike time on a given line may affect values on any/all the other lines

[Figure: precise timing relationships; the values 3, 6, 0, 4 encoded as spike times relative to t = 0]

Temporal Coding

❑ Use relative timing relationships across multiple parallel spikes to encode information
❑ Changing the spike time on a given line may affect values on any/all the other lines

[Figure: a second spike volley; the values 0, 3, 2, 1 encoded as spike times relative to t = 0]
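To make the temporal code concrete, here is a minimal Python sketch (the helper name `encode` is ours, not the tutorial's) in which values on parallel lines are represented as spike times relative to the earliest spike, so that only relative timing carries information:

```python
# Minimal sketch of temporal coding. Values are encoded as spike
# times relative to t = 0, so a smaller value means an earlier spike.

def encode(values):
    """Map values on parallel lines to spike times relative to t = 0.

    The earliest spike defines the time origin, so only the relative
    timing relationships across the lines carry information.
    """
    t0 = min(values)
    return [v - t0 for v in values]

# Two spike volleys that differ only by a constant time shift encode
# exactly the same values:
print(encode([3, 6, 0, 4]))      # [3, 6, 0, 4]
print(encode([13, 16, 10, 14]))  # also [3, 6, 0, 4]
```

The second call illustrates the point of the slides above: shifting an entire volley in absolute time leaves the encoded values unchanged.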
Plot on Same (Plausible) Time Scale

❑ The temporal method is:
  • An order of magnitude faster
  • An order of magnitude more efficient (#spikes)

[Figure: the rate-coded and temporally coded spike trains plotted on the same 10 msec time scale]

Spike Computation

❑ Temporal Neural Networks
  • Networks of functional blocks whose input and output values are communicated via precise timing relationships (expressed as spikes)
  • Feedforward flow (without loss of computational generality)
  • Computation: a single wave of spikes passes from inputs to outputs
❑ A critical first step
  • At the forefront of current theoretical neuroscience

[Figure: input values x1 … xm are encoded as spikes, flow in a single wave through a feedforward network of functional blocks F1 … Fn, and the output spikes are decoded to values z1 … zp]

Functional Block Features

1) Total function w/ finite implementation
2) Asynchronous
  • Begin computing when the first spike arrives
3) Causal
  • The output spike time is independent of any later input spike times
4) Invariant
  • If all the input spike times are increased by some constant, then the output spike time increases by the same constant
⇒ Functional blocks interpret input values according to local time frames

[Figure: invariance; the same block F3 receives input spikes at global times 9, 8, … and the identical pattern expressed in a local time frame as 1, 0, …, producing the same relative output spike time]
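The four features above can be illustrated with a toy block. The following Python sketch is our own assumed example, a race-logic style MIN gate that fires a fixed delay after its earliest input spike; it is not a block defined in the tutorial:

```python
# Toy temporal functional block: a race-logic style "first spike"
# (MIN) gate. 'delay' is an assumed fixed response latency.

import math

def first_spike_block(inputs, delay=1.0):
    """Fire 'delay' time units after the earliest input spike.

    The four functional-block features hold:
      1) Total: defined for every input (math.inf means "no spike").
      2) Asynchronous: the output is determined as soon as the first
         spike arrives.
      3) Causal: input spikes later than the earliest one cannot
         change the output spike time.
      4) Invariant: shifting all inputs by a constant c shifts the
         output by exactly c, since min(t + c) = min(t) + c.
    """
    return min(inputs) + delay

# Invariance check: the same spike pattern in a shifted time frame.
x = [9.0, 8.0, 12.0]   # spike times in a "global" time frame
c = -8.0               # shift into a local frame: [1.0, 0.0, 4.0]
assert first_spike_block([t + c for t in x]) == first_spike_block(x) + c
```

The assertion is the invariance property in miniature: the block interprets its inputs in a local time frame, exactly as the slide's global-time/local-time figure suggests.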
Compare with Conventional Machine Learning (aka the Bright Shiny Object)

❑ Three major processes:
  • Training (Learning)
  • Evaluation (Inference)
  • Re-Training

Conventional Machine Learning: Training

❑ Training process
  • Apply lots and lots of training inputs, over and over again
❑ Training requires labeled data (metadata)
❑ HUGELY expensive and time-consuming

[Figure: a sequence of training patterns feeds a classifier network; output patterns are compared against expected output patterns derived from the labels, and errors are back-propagated to adjust the weights]

Conventional Machine Learning: Evaluation (Inference)

❑ Evaluation process:
  • Apply input patterns, and compute output classes via vector inner products (inputs · weights)

[Figure: a sequence of input patterns flows through the classifier network, which computes a sequence of output classifications via inner products with the trained weights]

Conventional Machine Learning: Re-Training

❑ If the input patterns change over time, perhaps abruptly, the training process must be re-done from the beginning
  • Apply lots and lots of training inputs, over and over again
❑ Again: training requires labeled data, and is HUGELY expensive and time-consuming

Milestone Temporal Neural Network

❑ Perform unsupervised clustering
  • Not supervised classification
❑ Informally: a cluster is a group of similar input patterns
  • It is a grouping of input patterns based on implementation-defined intrinsic similarities
❑ A clustering neural network maps input patterns into cluster identifiers
  • Note: some outlier patterns may belong to no cluster

Milestone Temporal Neural Network

❑ Each cluster has an associated cluster identifier
  • A concise output pattern that identifies a particular cluster
  • If there are M clusters, then there are M cluster identifiers
❑ The exact mapping of cluster identifiers to clusters is determined by the implementation
  • A direct by-product of unsupervised training (there are no meta-data inputs)

[Figure: a clustering TNN evaluates cluster identifiers via non-binary combinatorial networks; it trains by adjusting weights at the local neuron level, and re-trains by constantly re-adjusting those weights as the sequence of input patterns flows through]

Milestone Temporal Neural Network

❑ For outputs to be directly understood by humans, we must map cluster identifiers into known labels (decoding)
  • Via
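To pin down the interface just described (input patterns in, cluster identifiers out), here is a hedged Python sketch using a simple distance-based notion of similarity. The function name, the threshold, and the explicit prototype list are illustrative assumptions only; a TNN realizes this mapping with temporal neurons whose weights are adjusted locally and continuously, not with explicit prototypes.

```python
# Hedged sketch of the input-pattern -> cluster-identifier mapping.
# Names and the similarity threshold are our assumptions, chosen only
# to make the unsupervised-clustering interface concrete.

import math

def online_cluster(patterns, max_dist=2.0):
    """Assign each input pattern a cluster identifier, creating a new
    cluster when no existing prototype is similar enough.

    (A real implementation might instead leave such outlier patterns
    unassigned, as noted in the slides above.)
    """
    prototypes = []   # one representative pattern per cluster
    ids = []
    for p in patterns:
        dists = [math.dist(p, q) for q in prototypes]
        if dists and min(dists) <= max_dist:
            ids.append(dists.index(min(dists)))  # existing identifier
        else:
            prototypes.append(list(p))           # new cluster found
            ids.append(len(prototypes) - 1)
    return ids

# Unsupervised: no labels anywhere; the identifiers emerge as a
# by-product of the order in which clusters are discovered.
print(online_cluster([(0, 0), (0, 1), (9, 9), (1, 0), (8, 9)]))
# -> [0, 0, 1, 0, 1]
```

As in the slides, the identifier-to-cluster mapping here is implementation-defined: it depends on the similarity measure and on the order in which patterns arrive, with no metadata involved.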
