In-Datacenter Performance Analysis of a Tensor Processing Unit

Pages: 16 · File type: PDF · Size: 1020 KB

In-Datacenter Performance Analysis of a Tensor Processing Unit
By N. P. Jouppi et al. Presented by Alex Appel.
Note: some slides adapted from Dave Patterson's EECS Colloquium talk of the same title.

Agenda
- Introduction/Motivation
- Architecture
- Performance Comparisons
- Main Highlights/Summary
- Questions

Origin of the Tensor Processing Unit
- Projection: if people searched by voice for 3 minutes a day, it would double Google's computation demands
- Domain-specific architecture is the solution
- Goal: run the inference phase at 10X the cost-performance of GPUs
- Very short development cycle: ~15 months

Key Neural Net Concepts
- Training (learning) in development vs. inference (prediction) in production
- Batch size: amortize weight-fetch time by inferring (or training) many input examples at a time
- Quantization: floating point is useful, but it uses much more energy and takes more time; do the training in floating point on GPUs and the inference in integers

3 Types of NNs Represent 95% of Google's Inference Workload
- Multi-Layer Perceptrons (MLP): each new layer is a set of nonlinear functions of a weighted sum of all outputs from the prior layer ("fully connected")
- Convolutional Neural Networks (CNN): popular for vision; each layer is a set of nonlinear functions of weighted sums at different coordinates of spatially nearby subsets of outputs from the prior layer, which allows the weights to be reused
- Recurrent Neural Networks (RNN) / "Long Short-Term Memory" (LSTM): each subsequent layer is a collection of nonlinear functions of weighted sums of outputs and the previous state

Inference Datacenter Workload (95%)

TPU Architecture
- Matrix Unit with 65,536 (256x256) 8-bit multiply-accumulate units
- 700 MHz clock rate
- Peak: 92 trillion operations/second
- >25X the multiply-accumulate units of a GPU
- >100X the multiply-accumulate units of a CPU
- 4 MiB of on-chip Accumulator memory
- 24 MiB of on-chip Unified Buffer (activation memory)

TPU Chip (die area)
- Unified Buffer: 29%
- Matrix Multiply Unit: 24%
- Control: 2%

Main CISC Instructions
- Read_Host_Memory: reads data from the CPU host memory into the Unified Buffer (UB)
- Write_Host_Memory: writes data from the Unified Buffer into the CPU host memory
- Read_Weights: reads weights from Weight Memory into the Weight FIFO as input to the Matrix Unit
- MatrixMultiply/Convolve: causes the Matrix Unit to perform a matrix multiply or a convolution from the Unified Buffer into the Accumulators
- Activate: performs the nonlinear function of the artificial neuron, with options for ReLU, Sigmoid, and so on

Circuit Board

Performance Comparisons

Roofline Model
- Y-axis: performance (operations per second)
- X-axis: arithmetic intensity (how many operations per byte fetched from memory?)
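The roofline itself is just the minimum of a compute ceiling and a memory-bandwidth slope. A minimal Python sketch of that calculation, using the 92 TOPS peak from these slides; the ~34 GB/s DRAM bandwidth is the paper's figure, quoted here from memory rather than from this deck:

```python
def roofline(peak_ops_per_s: float, mem_bw_bytes_per_s: float,
             arithmetic_intensity: float) -> float:
    """Attainable throughput = min(compute ceiling, bandwidth * intensity)."""
    return min(peak_ops_per_s, mem_bw_bytes_per_s * arithmetic_intensity)

TPU_PEAK = 92e12   # 92 trillion 8-bit ops/s (from the slides)
TPU_BW = 34e9      # ~34 GB/s DDR3 bandwidth (paper figure, from memory)

# Ridge point: the arithmetic intensity at which the TPU stops being
# memory bound and becomes compute bound.
print(f"ridge point ~ {TPU_PEAK / TPU_BW:.0f} ops/byte")

# An app performing ~100 ops per byte fetched sits far below the ridge,
# so it gets ~3.4 TOPS rather than 92 TOPS: it is bandwidth limited.
print(f"at 100 ops/byte: {roofline(TPU_PEAK, TPU_BW, 100) / 1e12:.1f} TOPS")
```

The ridge point (roughly 2,700 ops/byte under these assumptions) is what makes the "bottlenecked by memory bandwidth" bullets below concrete: the MLPs and LSTMs sit far to the left of it.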
TPU Roofline
- Very high peak performance
- Bottlenecked by memory bandwidth

Haswell (CPU) Die Roofline
- Lower peak performance
- More memory bandwidth
- The neural nets are not as close to the top as with the TPU

K80 (GPU) Die Roofline
- Higher memory bandwidth than the CPU
- The neural nets are far from their roofline

Relative Performance Table

Performance/Watt Comparisons
- GPU vs. CPU: 1.2X-2.1X total performance/Watt
- GPU vs. CPU: 1.7X-2.9X incremental performance/Watt
- TPU vs. CPU: 17X-34X total performance/Watt
- TPU vs. GPU: 14X-16X total performance/Watt
- TPU vs. CPU: 41X-83X incremental performance/Watt
- TPU vs. GPU: 25X-29X incremental performance/Watt

Energy Proportionality
- TPU has the lowest power: 40 W per die
- Poor energy proportionality: at 10% load, the TPU uses 88% of the power it uses at 100% load

Summary
- Inference apps usually emphasize response time over throughput, since they are often user facing.
- Because of those latency limits, the K80 GPU is only a little faster for inference than the Haswell CPU, despite having much higher peak performance and memory bandwidth.
- While most architects are accelerating CNNs, they are just 5% of Google's datacenter workload.
- The TPU is about 15X-30X faster at inference than the K80 GPU and the Haswell CPU.

Summary (contd.)
- Four of the six NN apps that were tested are memory bound; if the TPU were revised to have the same memory system as the K80 GPU, it would be about 30X-50X faster than the GPU and CPU (a back-of-the-envelope sketch of why these apps are memory bound follows after the slides).
- Despite having a much smaller and lower-power chip, the TPU has 25 times as many multiply-accumulators and 3.5 times as much on-chip memory as the K80 GPU.
- The performance per Watt of the TPU is 30X-80X that of its contemporary CPUs and GPUs; a revised TPU with K80 memory would be 70X-200X better.

Resources
- Link to paper: https://www.cse.wustl.edu/~roger/566S.s21/P1-Norman-1.pdf
- Link to Dave Patterson's talk: Dave Patterson, Evaluation of the Tensor Processing Unit

Thank you for listening! Questions?
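As promised above, a back-of-the-envelope companion to the "memory bound" summary point: the arithmetic intensity of a fully connected layer grows linearly with batch size, which is why latency-limited (small-batch) inference sits on the bandwidth-limited slope of the roofline. The layer sizes here are made up for illustration:

```python
def mlp_intensity(batch: int, n_in: int, n_out: int) -> float:
    """Ops per byte of weight traffic for one fully connected int8 layer:
    2 * batch * n_in * n_out multiply-adds over n_in * n_out weight bytes,
    so intensity = 2 * batch."""
    return (2 * batch * n_in * n_out) / (n_in * n_out)

# Latency deadlines keep inference batches small, so intensity stays low
# and the layer never reaches the compute ceiling.
for batch in (1, 8, 64, 256):
    print(batch, mlp_intensity(batch, n_in=2048, n_out=2048))  # 2, 16, 128, 512
```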
Recommended publications
  • Fault Tolerance and Re-Training Analysis on Neural Networks
    FAULT TOLERANCE AND RE-TRAINING ANALYSIS ON NEURAL NETWORKS by ABHINAV KURIAN GEORGE, B.Tech Electronics and Communication Engineering, Amrita Vishwa Vidhyapeetham, Kerala, 2012. A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science, Computer Engineering, College of Engineering and Applied Science, University of Cincinnati, Ohio, 2019. Thesis Committee: Chair: Wen-Ben Jone, Ph.D. Member: Carla Purdy, Ph.D. Member: Ranganadha Vemuri, Ph.D. ABSTRACT In the current age of big data, artificial intelligence and machine learning technologies have gained much popularity. Due to the increasing demand for such applications, neural networks are being targeted toward hardware solutions. Owing to the shrinking feature size, the number of physical defects is on the rise. This growing number of defects is preventing designers from realizing the full potential of on-chip designs. The challenge now is not only to find solutions that balance high performance and energy efficiency but also to achieve fault tolerance of a computational model. Neural computing, due to its inherent fault-tolerant capabilities, can provide promising solutions to this issue. The primary focus of this thesis is to gain a deeper understanding of fault tolerance in neural network hardware. As part of this work, we present a comprehensive analysis of fault tolerance by exploring the effects of faults on popular neural models: the multi-layer perceptron and the convolutional neural network. We built the models based on the conventional 64-bit floating-point representation. In addition, we also explore the recent 8-bit integer quantized representation. A fault injector model is designed to inject stuck-at faults at random locations in the network.
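    A toy illustration of the stuck-at injection idea the abstract describes, assuming int8-quantized weights; this is a generic sketch, not the thesis's actual fault injector:

    ```python
    import numpy as np

    def inject_stuck_at(weights_q: np.ndarray, n_faults: int, stuck_to: int,
                        rng: np.random.Generator) -> np.ndarray:
        """Force randomly chosen bits of int8 weights to 0 or 1."""
        w = weights_q.copy()
        bits = w.view(np.uint8).reshape(-1)           # raw bytes of the weights
        byte_idx = rng.choice(bits.size, size=n_faults, replace=False)
        bit_idx = rng.integers(0, 8, size=n_faults)   # which bit is stuck
        mask = (np.uint8(1) << bit_idx).astype(np.uint8)
        if stuck_to == 1:
            bits[byte_idx] |= mask                    # stuck-at-1
        else:
            bits[byte_idx] &= np.uint8(0xFF) ^ mask   # stuck-at-0
        return w

    rng = np.random.default_rng(0)
    weights = rng.integers(-128, 128, size=(256, 256), dtype=np.int8)
    faulty = inject_stuck_at(weights, n_faults=100, stuck_to=1, rng=rng)
    print(np.count_nonzero(faulty != weights), "weights perturbed")
    ```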
  • In-Datacenter Performance Analysis of a Tensor Processing Unit
    In-Datacenter Performance Analysis of a Tensor Processing Unit. Presented by Josh Fried.
    Background: Machine Learning. Neural Networks: ● Multi-Layer Perceptrons ● Recurrent Neural Networks (mostly LSTMs) ● Convolutional Neural Networks. Synapse: each edge has a weight. Neuron: each node sums weighted inputs and applies a non-linear activation function to the sum. Propagating inputs through a layer of the NN is a matrix multiplication followed by an activation (a minimal int8 sketch of one such layer follows below).
    Background: Machine Learning. Two phases: ● Training (offline) ○ relaxed deadlines ○ large batches to amortize costs of loading weights from DRAM ○ well suited to GPUs ○ usually uses floating point ● Inference (online) ○ strict deadlines: 7-10 ms at Google for some workloads ■ limited possibility for batching because of deadlines ○ Facebook uses CPUs for inference (last class) ○ can use lower-precision integers (faster/smaller/more efficient).
    ML Workloads @ Google: 90% of ML workload time at Google is spent on MLPs and LSTMs, despite the broader focus on CNNs. RankBrain (search), Inception (image classification), Google Translate, AlphaGo (and others).
    Background: Hardware Trends. End of Moore's Law & Dennard Scaling: ● Moore - transistor density doubles every two years ● Dennard - power stays proportional to chip area as transistors shrink. Machine learning is causing huge growth in demand for compute: ● 2006: excess CPU capacity in datacenters is enough ● 2013: projected 3 minutes per day per user of speech recognition ○ would require doubling datacenter compute capacity!
    Google's Answer: Custom ASIC. Goal: build a chip that improves cost-performance for NN inference. What are the main costs? Capital costs and operational costs (the power bill!).
    TPU (V1) Design Goals: short design-deployment cycle (~15 months!); plugs into a PCIe slot on existing servers; accelerates matrix multiplication operations; uses 8-bit integer operations instead of floating point.
    How does the TPU work? CISC instructions, issued by the host.
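    As referenced above, a minimal NumPy sketch of "a matrix multiplication followed by an activation" done in 8-bit integers with wide accumulation. This is a conceptual illustration of quantized inference, not the TPU's actual pipeline, and the scale factor is made up:

    ```python
    import numpy as np

    def int8_layer(x_q: np.ndarray, w_q: np.ndarray, scale: float) -> np.ndarray:
        """One fully connected layer in the quantized-inference style:
        int8 inputs/weights, int32 accumulation, ReLU, requantize to int8."""
        acc = x_q.astype(np.int32) @ w_q.astype(np.int32)   # wide accumulators
        acc = np.maximum(acc, 0)                            # ReLU on accumulators
        y = np.round(acc * scale)                           # rescale to int8 range
        return np.clip(y, -128, 127).astype(np.int8)

    rng = np.random.default_rng(0)
    x = rng.integers(-128, 128, size=(8, 256), dtype=np.int8)    # batch of 8
    w = rng.integers(-128, 128, size=(256, 256), dtype=np.int8)  # 256x256 weights
    print(int8_layer(x, w, scale=1e-4).shape)                    # (8, 256)
    ```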
  • Abstractions for Programming Graphics Processors in High-Level Programming Languages
    Abstracties voor het programmeren van grafische processoren in hoogniveau-programmeertalen (Abstractions for Programming Graphics Processors in High-Level Programming Languages). Tim Besard. Supervisor: Prof. Dr. Ir. B. De Sutter. Dissertation submitted to obtain the degree of Doctor of Engineering: Computer Science. Department of Electronics and Information Systems, chair: Prof. Dr. Ir. K. De Bosschere. Faculty of Engineering and Architecture, academic year 2018-2019. ISBN 978-94-6355-244-8. NUR 980. Legal deposit: D/2019/10.500/52.
    Examination Committee: Prof. Filip De Turck, chair, Department of Information Technology, Faculty of Engineering and Architecture, Ghent University. Prof. Koen De Bosschere, secretary, Department of Electronics and Information Systems, Faculty of Engineering and Architecture, Ghent University. Prof. Bjorn De Sutter, supervisor, Department of Electronics and Information Systems, Faculty of Engineering and Architecture, Ghent University. Prof. Jutho Haegeman, Department of Physics and Astronomy, Faculty of Sciences, Ghent University. Prof. Jan Lemeire, Department of Electronics and Informatics, Faculty of Engineering, Vrije Universiteit Brussel. Prof. Christophe Dubach, School of Informatics, College of Science & Engineering, The University of Edinburgh. Prof. Alan Edelman, Computer Science & Artificial Intelligence Laboratory, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology.
    Acknowledgments: I did not really know what I was getting into when, in 2012, I went to the catacombs of the Technicum for a conversation about a doctorate. Whether I had ever worked with LLVM. Many years later, I now work at a desk that actually gets daylight, and the end of this study is finally in sight. And that is about time, so I am told, after 7 years.
  • P1360R0: Towards Machine Learning for C++: Study Group 19
    P1360R0: Towards Machine Learning for C++: Study Group 19 Date: 2018-11-26(Post-SAN mailing): 10 AM ET Project: ISO JTC1/SC22/WG21: Programming Language C++ Audience SG19, WG21 Authors : Michael Wong (Codeplay), Vincent Reverdy (University of Illinois at Urbana-Champaign, Paris Observatory), Robert Douglas (Epsilon), Emad Barsoum (Microsoft), Sarthak Pati (University of Pennsylvania) Peter Goldsborough (Facebook) Franke Seide (MS) Contributors Emails: michael@codeplay.com vreverdy@illinois.edu rwdougla@gmail.com ebarsoum@gmail.com sarthak.pati@uphs.upenn.edu psag@fb.com fseide@microsoft.com Reply to: michael@codeplay.com Introduction 2 Motivation 2 Scope 5 Meeting frequency and means 6 Outreach to ML/AI/Data Science community 7 Liaison with other groups 7 Future meetings 7 Conclusion 8 Acknowledgements 8 References 8 Introduction This paper proposes a WG21 SG for Machine Learning with the goal of: ● Making Machine Learning a first-class citizen in ISO C++ It is the collaboration of a number of key industry, academic, and research groups, through several connections in CPPCON BoF[reference], LLVM 2018 discussions, and C++ San Diego meeting. The intention is to support such an SG, and describe the scope of such an SG. This is in terms of potential work resulting in papers submitted for future C++ Standards, or collaboration with other SGs. We will also propose ongoing teleconferences, meeting frequency and locations, as well as outreach to ML data scientists, conferences, and liaison with other Machine Learning groups such as at Khronos, and ISO. As of the SAN meeting, this group has been officially created as SG19, and we will begin teleconferences immediately, after the US thanksgiving, and after NIPS.
  • AI Chips: What They Are and Why They Matter
    APRIL 2020 AI Chips: What They Are and Why They Matter An AI Chips Reference AUTHORS Saif M. Khan Alexander Mann Table of Contents Introduction and Summary 3 The Laws of Chip Innovation 7 Transistor Shrinkage: Moore’s Law 7 Efficiency and Speed Improvements 8 Increasing Transistor Density Unlocks Improved Designs for Efficiency and Speed 9 Transistor Design is Reaching Fundamental Size Limits 10 The Slowing of Moore’s Law and the Decline of General-Purpose Chips 10 The Economies of Scale of General-Purpose Chips 10 Costs are Increasing Faster than the Semiconductor Market 11 The Semiconductor Industry’s Growth Rate is Unlikely to Increase 14 Chip Improvements as Moore’s Law Slows 15 Transistor Improvements Continue, but are Slowing 16 Improved Transistor Density Enables Specialization 18 The AI Chip Zoo 19 AI Chip Types 20 AI Chip Benchmarks 22 The Value of State-of-the-Art AI Chips 23 The Efficiency of State-of-the-Art AI Chips Translates into Cost-Effectiveness 23 Compute-Intensive AI Algorithms are Bottlenecked by Chip Costs and Speed 26 U.S. and Chinese AI Chips and Implications for National Competitiveness 27 Appendix A: Basics of Semiconductors and Chips 31 Appendix B: How AI Chips Work 33 Parallel Computing 33 Low-Precision Computing 34 Memory Optimization 35 Domain-Specific Languages 36 Appendix C: AI Chip Benchmarking Studies 37 Appendix D: Chip Economics Model 39 Chip Transistor Density, Design Costs, and Energy Costs 40 Foundry, Assembly, Test and Packaging Costs 41 Acknowledgments 44 Introduction and Summary Artificial intelligence will play an important role in national and international security in the years to come.
  • Shinjae Yoo Computational Science Initiative Outline
    Lightning Overview of Machine Learning. Shinjae Yoo, Computational Science Initiative.
    Outline • Why is Machine Learning important? • Machine Learning Concepts • Big Data and Machine Learning • Potential Research Areas
    Why is Machine Learning important? ML Application to Physics. ML Application to Biology.
    Machine Learning Concepts. What is Machine Learning (ML)? • One definition of Machine Learning: "How can we build computer systems that automatically improve with experience, and what are the fundamental laws that govern all learning processes?" Tom Mitchell, 2006 • Statistics: what conclusions can be inferred from data • ML additionally incorporates: what architectures and algorithms can be used to effectively handle data, and how multiple learning subtasks can be orchestrated in a larger system, together with questions of computational tractability
    Machine Learning Components: Algorithm, HW+SW, Data.
    Brief History of Machine Learning (timeline figure spanning 1950-2016: ENIAC, the perceptron, k-means/kNN, the first chess programs and Deep Blue, HMMs, n-gram machine translation, soft-margin SVMs, the WWW, Hadoop, GPGPUs, data-driven ML, and deep-net milestones such as the ImageNet wins, IBM Watson, and AlphaGo).
    Supervised Learning Pipeline: training (labeling; preprocessing: data cleansing, feature engineering, normalization; validation) and testing (preprocessing, prediction).
    Unsupervised Learning Pipeline: preprocessing (data cleansing, feature engineering, normalization), then prediction.
    Types of Learning • Generative Learning • Discriminative Learning • Active Learning (how to select training data?) • Multi-task Learning • Transfer Learning • Kernel Learning • Metric Learning • Dimensionality Reduction • Feature Learning • Lee, et al.
  • Podracer Architectures for Scalable Reinforcement Learning
    2021-4-14 Podracer architectures for scalable Reinforcement Learning Matteo Hessel*,1, Manuel Kroiss*,1, Aidan Clark1, Iurii Kemaev1, John Quan1, Thomas Keck1, Fabio Viola1 and Hado van Hasselt*,1 *Equal contributions, 1DeepMind Supporting state-of-the-art AI research requires balancing rapid prototyping, ease of use, and quick iter- ation, with the ability to deploy experiments at a scale traditionally associated with production systems. Deep learning frameworks such as TensorFlow, PyTorch and JAX allow users to transparently make use of accelerators, such as TPUs and GPUs, to offload the more computationally intensive parts of training and inference in modern deep learning systems. Popular training pipelines that use these frameworks for deep learning typically focus on (un-)supervised learning. How to best train reinforcement learning (RL) agents at scale is still an active research area. In this report we argue that TPUs are particularly well suited for training RL agents in a scalable, efficient and reproducible way. Specifically we describe two architectures designed to make the best use of the resources available on a TPU Pod (a special configuration in a Google data center that features multiple TPU devices connected to each other by extremely low latency communication channels). Introduction Reinforcement learning (RL) algorithms have been shown to be capable of performing well on challenging sequential decision problems, ranging from board games (Silver et al., 2016) to video games (Mnih et al., 2015) and continuous control (Lillicrap et al., 2016; Van Hasselt, 2012). Many of these advances were powered by the adoption of deep learning (Krizhevsky et al., 2012) in RL.
  • Patent Claim Generation by Fine-Tuning Openai GPT-2
    Patent Claim Generation by Fine-Tuning OpenAI GPT-2. Jieh-Sheng Lee and Jieh Hsiang, Department of Computer Science and Information Engineering, National Taiwan University. {d04922013, jhsiang}@ntu.edu.tw
    Abstract: In this work, we focus on fine-tuning an OpenAI GPT-2 pre-trained model for generating patent claims. GPT-2 has demonstrated impressive efficacy of pre-trained language models on various tasks, particularly coherent text generation. Patent claim language itself has rarely been explored in the past and poses a unique challenge. We are motivated to generate coherent patent claims automatically so that augmented inventing might be viable someday. In our implementation, we identified a unique language structure in patent claims and leveraged its implicit human annotations. We investigated the fine-tuning process by probing the first 100 steps and observing …
    … (Bidirectional Encoder Representations from Transformers) [4] has become the best practice for state-of-the-art results. GPT-2 is the successor to GPT. Although both GPT-2 and BERT are capable of text generation, Wang and Cho [5] found that GPT-2 generations are of better quality. In fact, GPT-2 is claimed to be so powerful that the risk of its malicious use is high. For this reason, OpenAI decided to keep its largest model (1.5B parameters) closed so that there is more time to discuss its ramifications. In this work, we generated patent claims by fine-tuning the released 345M medium version [6]. Overall we are impressed by how coherent and complicated the generated patent claims could be, although not all text is generated equally in terms of quality.
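    A minimal sketch of this kind of fine-tuning loop, assuming the Hugging Face transformers and PyTorch libraries and a hypothetical claims.txt corpus (one claim per line); the paper's actual training setup and data pipeline are not shown in this excerpt:

    ```python
    import torch
    from torch.utils.data import DataLoader
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2-medium")  # the 345M model
    model = GPT2LMHeadModel.from_pretrained("gpt2-medium")
    model.train()

    # Hypothetical corpus: one patent claim per line in claims.txt.
    with open("claims.txt") as f:
        claims = [line.strip() for line in f if line.strip()]

    def encode(text):
        ids = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
        return ids["input_ids"][0]

    loader = DataLoader([encode(c) for c in claims], batch_size=1, shuffle=True)
    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

    for step, input_ids in enumerate(loader):
        # Causal LM objective: labels are the inputs, shifted internally.
        loss = model(input_ids, labels=input_ids).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        if step == 100:   # the excerpt mentions probing the first 100 steps
            break
    ```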
  • Automatic Full Compilation of Julia Programs and ML Models to Cloud
    AUTOMATIC FULL COMPILATION OF JULIA PROGRAMS AND ML MODELS TO CLOUD TPUS. Keno Fischer 1, Elliot Saba 1.
    ABSTRACT Google’s Cloud TPUs are a promising new hardware architecture for machine learning workloads. They have powered many of Google’s milestone machine learning achievements in recent years. Google has now made TPUs available for general use on their cloud platform and as of very recently has opened them up further to allow use by non-TensorFlow frontends. We describe a method and implementation for offloading suitable sections of Julia programs to TPUs via this new API and the Google XLA compiler. Our method is able to completely fuse the forward pass of a VGG19 model expressed as a Julia program into a single TPU executable to be offloaded to the device. Our method composes well with existing compiler-based automatic differentiation techniques on Julia code, and we are thus able to also automatically obtain the VGG19 backwards pass and similarly offload it to the TPU. Targeting TPUs using our compiler, we are able to evaluate the VGG19 forward pass on a batch of 100 images in 0.23s which compares favorably to the 52.4s required for the original model on the CPU. Our implementation is less than 1000 lines of Julia, with no TPU specific changes made to the core Julia compiler or any other Julia packages.
    1 INTRODUCTION One of the fundamental changes that has enabled the steady progress of machine learning techniques over the past several … accelerator available to the public via their cloud offering. Originally, the use of TPUs was restricted to applications written using Google’s TensorFlow machine learning framework.
  • Interoperating Deep Learning Models with ONNX.Jl
    Interoperating Deep Learning models with ONNX.jl. Ayush Shridhar1, Phil Tomson2, and Mike Innes3. 1International Institute of Information Technology, Bhubaneswar, India; 2Unaffiliated; 3Julia Computing, Inc.
    ABSTRACT Flux [17] is a machine learning framework, written using the numerical computing language Julia [4]. The framework makes writing layers as simple as writing mathematical formulae, and its advanced AD, Zygote [11], applies automatic differentiation (AD) to calculate derivatives and train the model. It makes heavy use of Julia’s language and compiler features to carry out code analysis and make optimisations. For example, Julia’s GPU compilation support [3] can be used to JIT-compile custom GPU kernels for model layers [19]. Flux also supports a number of hardware options, from CPUs, GPUs and even TPUs via XLA.jl, that compiles Julia code to XLA: an advanced compiler for linear algebra that is capable of …
    … network, with complete support for control flow, recursion, closures and data structures. Implementing models in Flux.jl is as simple as writing regular Julia code. Implementing models is as simple as writing the formulae for those, and Zygote.jl will compute the derivatives seamlessly. Flux.jl also provides support for other hardware options using external packages such as CuArrays.jl and CLArrays.jl. CuArrays is written completely in Julia, making implementing GPU kernels very simple. Making a model run on GPU can be done in a hassle-free manner: it is as simple as calling a few functions to transfer data to GPU. Flux.jl also has support for running models on Google’s Tensor Processing Unit (TPU). TPUs help in very fast linear algebra computation.
  • Data Movement Is All You Need: a Case Study on Optimizing Transformers
    Data Movement Is All You Need: A Case Study on Optimizing Transformers. Andrei Ivanov∗, Nikoli Dryden∗, Tal Ben-Nun, Shigang Li, Torsten Hoefler. ETH Zürich. firstname.lastname@inf.ethz.ch. ∗ Equal contribution.
    Abstract—Transformers have become widely used for language modeling and sequence learning tasks, and are one of the most important machine learning workloads today. Training one is a very compute-intensive task, often taking days or weeks, and significant attention has been given to optimizing transformers. Despite this, existing implementations do not efficiently utilize GPUs. We find that data movement is the key bottleneck when training. Due to Amdahl’s Law and massive improvements in compute performance, training has now become memory-bound. Further, existing frameworks use suboptimal data layouts. Using these insights, we present a recipe for globally optimizing data movement in transformers. We reduce data movement by up to 22.91% and overall achieve a 1.30× performance improvement over state-of-the-art frameworks when training BERT. Our approach is applicable more broadly to optimizing deep …
    … challenges such as artificial general intelligence [27]. Thus, improving transformer performance has been in the focus of numerous research and industrial groups. Significant attention has been given to optimizing transformers: local and fixed-window attention [28]–[32], more general structured sparsity [33], learned sparsity [34]–[36], and other algorithmic techniques [19], [37] improve the performance of transformers. Major hardware efforts, such as Tensor Cores and TPUs [38], have accelerated tensor operations like matrix-matrix multiplication (MMM), a core transformer operation. Despite this, existing implementations do not efficiently utilize GPUs. Even optimized implementations such as Megatron [18] report achieving only 30% of peak GPU flop/s.
  • Accelerators for Cyber-Physical Systems Sam Green, İhsan Çiçek and Çetin Kaya Koç University of California, Santa Barbara Introduction Capabilities Desired in CPS?
    Accelerators for Cyber-Physical Systems Sam Green, İhsan Çiçek and Çetin Kaya Koç University of California, Santa Barbara Introduction Capabilities desired in CPS? • Interact with physical world • Networked • Potentially low-power • Resistant to environment • Perform safety-critical tasks • Cryptographically secure • Autonomous • Inexpensive Benefits from Moore’s Law are over • Since about 1970, could safely assume the number of transistors/$ would exponentially increase every 2 years • What can be done today for $X will be doable in 2 years for $X/2 dollars • Accelerators (aka ASICs) existed during this time, but CPU/µcontroller/DSP-based approaches dominated • No longer the case… [http://www.economist.com/node/21693710/sites/all/modules/custom/ec_essay] Other methods to increase performance/$? • Approximate computing • Analog computing • Neuromorphic computing Approximate Computing • Selective approximation can bring disproportionate gains in efficiency • 5% accuracy loss gives • 50x less energy for k-means clustering • 26x less energy for neural network evaluation [S. Mittal. A Survey of Techniques for Approximate Computing. ACM Comput. Surv., vol. 48, no. 4, p. 62:1–62:33, Mar. 2016.] [https://upload.wikimedia.org/wikipedia/commons/b/b7/3-bit_resolution_analog_comparison.png] Analog Computing • Physical world is a computational device • E.g. Use KVL and KCL to approximate activation function for analog neuron • 4X speedup, 20X less energy, 2.4% higher error across benchmarks vs. approximate digital neuron [St. Amant et al. General-purpose Code Acceleration with Limited-precision Analog Computation. ISCA, 2014] Neuromorphic Computing • Non-von Neumann, neuro-bio inspired architectures • Community sees biological circuits as the ultimate in efficiency [https://upload.wikimedia.org/wikipedia/commons/4/4a/Action_potential.svg] Accelerators for Deep Learning Inference [A.