
The Original Positronic Brain?

Dan Hammerstrom
Department of Electrical and Computer Engineering
Portland State University

Wikipedia:

 “A positronic brain is a fictional technological device, originally conceived by writer Isaac Asimov”

 “Its role is to serve as a central computer for a robot, and, in some unspecified way, to provide it with a form of consciousness recognizable to humans”

 How close are we? You can judge the algorithms; in this talk I will focus on hardware and what the future might hold

Moore’s Law: The number of transistors doubles every 18-24 months

 No discussion of computing is complete without addressing Moore’s law

 The industry has been following it for almost 30 years

 It is not really a physical law, but one of faith

 The fruits of a hyper-competitive $300 billion global industry

 Then there is Moore’s lesser known 2nd law: the 1st law requires exponentially increasing investment

 And what I call Moore’s 3rd law: the 1st law results in exponentially increasing design errata
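As a worked restatement (my own paraphrase, not a formula from the slides), the first law says the transistor count N grows exponentially with a doubling period T of roughly 18-24 months:

N(t) = N_0 \cdot 2^{t/T}, \quad T \approx 18\text{–}24\ \text{months}

The second law makes the same exponential claim for the investment required to sustain that growth.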

 Intel is now manufacturing in their new, innovative 45 nm process

 Effective gate lengths of 37 nm (HkMG)

 And they recently announced a 32 nm scaling of the 45 nm process

 Transistors of this size are no longer acting like ideal switches

 And there are other problems …

45 nm

Projected Power Density

Pat Gelsinger, ISSCC 2001

 Performance overkill - the highest volume segments of the market are no longer performance/clock frequency driven

 Density overkill – How do we use all these transistors?

 The end of Moore’s law – scaling will continue, though at a decreasing rate, asymptotically approaching 22nm in 10-15 years

 Lithography will be the primary constraint going forward

 The current business model based on shrinks and compactions will change dramatically

Parallelism

 Because of power and interconnect limitations, ever increasing processor performance will need to come more from parallel execution

 However, there are still few opportunities to leverage parallelism in volume market desktop applications

 And we have not yet solved the parallel computing problem

 Taking advantage of multiple cores will be much more difficult than taking advantage of faster clock speeds was

The Complexity Crisis

 And the complexity of systems is growing exponentially

 According to a recent study by NIST, “software bugs” cost the U.S. economy an estimated $60B annually, about 0.6% of the GNP

 In spite of the heroic efforts of computer scientists and engineers around the globe, we are slowly losing this battle

 As Bill Wulf said once,

 “software is getting slower faster than hardware is getting faster”

The Design Productivity Gap

 Complexity is a problem for hardware too (recall Moore’s 3rd law)

 The “Gap” is the difference between the number of transistors that

 a typical design team, using state-of-the-art tools and methodologies, can design and validate on a typical schedule

 and what’s available

 The Gap, therefore, results from the fact that the number of transistors is increasing faster than our ability to design them

 And how do we create a 100% guaranteed correct design of several billion transistors?

“Post-CMOS” or “Nanoelectronics”

 The industry is now talking about “Post-CMOS” electronics, which usually means “nano” or “molecular” electronics

 Can we, by moving to molecular scale “electronics,” buy a little more shrinkage?

 Is it possible? Is it economical? What will we do with it?

 Will it enable new applications? Or will it be more of the same?

 And most importantly, what should the research agenda be?

 Will we hit the complexity or capital investment walls before Moore’s law runs out?

Nanoelectronics

 You can get a good description of the basic candidates for molecular scale computing in the Emerging Research Devices chapter in the 2007 ITRS (the semiconductor roadmap)

 http://public.itrs.net/

 We’re mostly interested in devices whose computations are based on charge

 Charge based technologies can more closely approximate the “charge accumulation” model common in most functional neural models

 Non-charge-based technologies must emulate charge accumulation digitally
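To make the distinction concrete, here is a minimal sketch (my own illustration, not from the talk) of the kind of discrete-time charge accumulation a non-charge-based technology would have to emulate: a leaky integrate-and-fire neuron that sums weighted input spikes into a stored “charge” and fires when a threshold is crossed.

import numpy as np

def lif_step(v, spikes_in, weights, leak=0.9, threshold=1.0):
    # Accumulate weighted input "charge" onto the membrane state v,
    # with a multiplicative leak each time step.
    v = leak * v + float(np.dot(weights, spikes_in))
    if v >= threshold:        # fire and reset
        return 0.0, 1
    return v, 0

# Toy usage: one neuron, four random synapses, ten time steps.
rng = np.random.default_rng(0)
weights = rng.uniform(-0.5, 0.5, 4)
v, spikes_out = 0.0, []
for t in range(10):
    spikes_in = rng.integers(0, 2, 4)       # presynaptic activity
    v, s = lif_step(v, spikes_in, weights)
    spikes_out.append(s)
print(spikes_out)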

 Of the various problems facing the computer industry, which ones does nanotechnology solve?

 The end of Moore’s law

 Maybe the memory bandwidth problem?

 Anything else?

 It severely aggravates the design complexity problem – having trouble using billions of transistors? Well, we’re going to give you trillions!

 Oh, and did I mention that they will be flaky and slow?

 It is unlikely that our tools and methodologies will stretch far enough to handle these densities

 And nanotechnology also creates a number of new problems

 Significant levels of signal/clock delay (asynchronous logic is suddenly looking very appealing)

 Loving device variability? Wait until we get to the nano-scale!

 Manufacturing defects at a level not seen since the earliest days of the industry

 High dynamic failure rates during operation

 Fault detection and correction circuitry as a fundamental part of every design

 How do we handle this in the tools?

 How do we test such systems?

 But, the $64K question is, what exactly will we use nanoelectronics for?

 Can we assume that computation, algorithms, and applications will continue more or less as they have? Should we?

 The effective use of nanotechnology will require solutions to more than just increased density; we need to consider total system solutions

 And you cannot create an architecture without some sense of the applications it will execute

 An architecture is not an end in itself, but a tool to solve a problem

 Any paradigm shift in applications and architecture, and I think we are headed into one, will have a profound impact on the whole design process and the tools required

Scaling

 It is very likely that sheer size is one of the major components of the “magic of cognition”

 Consider the differences: hundreds of rules or thousands of nodes vs. billions of neurons

 Such “mega-algorithms” can be run on supercomputers

 But how do we deploy very large networks in small portable form factors that consume very little power and operate in real time?

 Massive parallelism in the models enables specialized hardware

 Radical new technologies create opportunities

 What if we could find an application space that, in addition to promising a solution to the Intelligent Computing problem, also addressed some of the other challenges facing the computer industry?

 One that exhibited

 massive parallelism

 low power density – where performance was based on parallelism not speed

 tolerance of static and dynamic faults, and even some design fault tolerance

 asynchrony (no clock)

 self-organization and adaptation, rather than being programmed

We Need Nano, Nano Needs Us!

 The opportunity is real and it is coming!

 We need massively parallel algorithms to drive this effort and to justify the investment in the necessary architectures and implementation technology

 But I believe success is possible in this area – it has the potential to be the “microprocessor” of the 21st century

 Biologically inspired algorithms are better positioned to leverage this opportunity than any other application domain

The Most Promising “Post-CMOS” Candidate: Nanogrids On CMOS

 Simplistically: a nanogrid consists of

 A roughly horizontal group of nanowires

 A layer of some specialized chemical

 Another roughly vertical group of nanowires

 Connections of both groups of nanowires to CMOS metal lines

 Currently researchers are making wires out of silicon and other materials that are ~15 nm in diameter, eventually going to < 10 nm, with lengths up to 10 µm

 These are itsy bitsy wires and they have very high resistance, severely limiting their speed, but oh that density …
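As a rough, illustrative calculation (my own numbers, not from the talk), the resistance of a wire of diameter d and length L is

R = \frac{\rho L}{\pi (d/2)^2}

Assuming a resistivity of order \rho \approx 10^{-5}\ \Omega\cdot\mathrm{m} for heavily doped silicon, a 10 nm diameter, 10 µm long nanowire gives R \approx 10^{-10} / (7.9\times 10^{-17}) \approx 1.3\ \mathrm{M\Omega}, which is why these wires are slow despite their extraordinary density.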

CMOL – Developed by K. Likharev, SUNY Stony Brook

The Molecular Switch – “Memristor”

 Where a horizontal wire crosses a vertical wire (which is self-aligned, incidentally), molecules in the molecular layer form a switchable diode

 The switch is created from a few molecules

G. Snider “Computing with hysteretic resistor crossbars,” Hewlett-Packard Laboratories

CMOL

Analog Nano-CrossBar Implementation

 Synapse footprint: ~500 nm²

 Synapse density: ~2×10¹¹ cm⁻²

 Neural density: ~5×10⁷ cm⁻²
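To connect these densities to the computation being performed, here is a small sketch (my own illustration, assuming the ternary synaptic weights wjk = {-1, 0, +1} from Likharev’s figure below): the crossbar effectively performs a matrix-vector product, with each output wire summing the currents injected by its synapses and each soma thresholding the sum.

import numpy as np

# Density sanity check: a ~500 nm^2 synapse footprint gives
# (1e14 nm^2 per cm^2) / 500 nm^2 ≈ 2e11 synapses per cm^2.
print(1e14 / 500)

# Crossbar as a matrix-vector product with ternary weights.
rng = np.random.default_rng(1)
n_inputs, n_somas = 16, 4
W = rng.choice([-1, 0, 1], size=(n_somas, n_inputs))  # wjk in {-1, 0, +1}
x = rng.integers(0, 2, n_inputs)                       # binary input activity

currents = W @ x                        # each row sums current on one output wire
outputs = (currents > 0).astype(int)    # each soma thresholds its summed current
print(currents, outputs)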

[Figure: analog crossbar synapse circuit with ternary weights wjk = {-1, 0, +1} connecting soma j to soma k – courtesy K. Likharev]

A Key Architectural Concept: Virtualization

 We define virtualization to be the degree of time-multiplexing of computations and communication tasks over hardware resources – trading off space and time

 Virtualization then is about taking advantage of the dynamic behavior of the network for sharing expensive resources

 Sparsely connected and sparsely activated networks …

 Generally, virtualization implies a digital representation, but it should not be thought of as an exclusively digital technique

 AER (Address Event Representation), used by the aVLSI community
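As a hedged sketch of what this can look like in practice (my own illustration; the names and data layout are assumptions, not any particular aVLSI design), the fragment below time-multiplexes many logical neurons over one physical update loop and communicates activity as AER-style (time, address) events rather than dense activation vectors, exploiting the sparse connectivity and sparse activity mentioned above.

from collections import defaultdict

# AER-style event stream: (timestep, presynaptic neuron address) pairs.
events = [(0, 3), (0, 17), (1, 3), (2, 42)]

# Sparse connectivity table: store only the synapses that exist.
synapses = {
    3:  [(7, 0.5), (9, -0.2)],   # pre address -> [(post address, weight), ...]
    17: [(9, 0.8)],
    42: [(7, 0.1)],
}

potentials = defaultdict(float)   # post address -> accumulated input

# One physical "processor" services every event in turn (time-multiplexing),
# instead of dedicating hardware to each of the many logical neurons.
for t, pre in events:
    for post, w in synapses.get(pre, []):
        potentials[post] += w

print(dict(potentials))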

Generally Each Algorithm has its Unique “sweet spot”

An Exploration of the Virtualization Spectrum

The Four Major Configurations studied:

(a) all-digital CMOS design
(b) mixed-signal CMOS (2 configurations)
(c) all-digital hybrid CMOS/CMOL design
(d) mixed-signal hybrid CMOS/CMOL design

Some Numbers

 These numbers were provided by Anders Lansner and his group at the Royal Institute of Technology (KTH) in Stockholm

CMOL Array

[Figure: each square is a single auto-associative module; the nano-grids implement the weights and local/non-local connection indices; CMOS provides sparse inter-module connectivity, I/O, and signal amplification]

Preliminary Analysis: A “Cortical” Scale Processor

 22 nm, 8-metal CMOS / nano-grid molecular arrays, 1 inch on a side, 10¹³ devices, 100 nm² CMOL memory cell

 1700 processors fabricated, each emulating a 16K node network, for a total of ~30M nodes and ~400B synapses, at ~300 Tops (10¹²)/sec
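A quick back-of-the-envelope check of these figures (my own arithmetic, assuming “16K” means 16,384 nodes per processor):

processors = 1700
nodes_per_processor = 16 * 1024            # "16K node network"
total_nodes = processors * nodes_per_processor
print(total_nodes)                          # ~27.9 million, i.e. "~30M nodes"

total_synapses = 400e9                      # "~400B synapses"
print(total_synapses / total_nodes)         # ~14,000 synapses per node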

 FABA – Field-Adaptable Bayesian Array – adapts, rather than being programmed, to perform real-time, adaptive Bayesian inference over very complex spatial and temporal knowledge structures

 There is a wide range of applications for this type of device, including the reduction and compression of widely distributed sensor data and power management

 The next ten years will be an extraordinary time for electrical engineers and computer scientists

 The challenges of Moore’s law, and the search for new ways to use our transistor bounty will lead to more experimentation in new silicon architectures, fueled in part by ideas from biological computation

 Understanding and mapping biological computing models to silicon and then to real applications will be very difficult

 … but the rewards will be great

Returning to Moore’s Law

Bob Lucky (IEEE Spectrum, Sept 98)

 Moore's law says there will be exponential progress and that doublings will occur every year and a half

 One thing about exponentials: at first they are easy, but later they become overwhelming – and we are starting to enter the “overwhelming” phase

 Since the invention of the transistor, there have been about 32 doublings of the technology - the first half of a chessboard

 What overwhelming implications await us now as we begin the second half of the board?
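Working out the chessboard analogy (my own arithmetic, not figures from the slide): the first 32 doublings reach

2^{32} \approx 4.3 \times 10^{9}

while completing the second half of the board would mean another 32 doublings, to

2^{64} \approx 1.8 \times 10^{19}

which is the sense in which the coming growth dwarfs everything achieved so far.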
