Fault Tolerance and Re-Training Analysis on Neural Networks



FAULT TOLERANCE AND RE-TRAINING ANALYSIS ON NEURAL NETWORKS

by ABHINAV KURIAN GEORGE
B.Tech. in Electronics and Communication Engineering, Amrita Vishwa Vidhyapeetham, Kerala, 2012

A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science, Computer Engineering, College of Engineering and Applied Science, University of Cincinnati, Ohio, 2019.

Thesis Committee:
Chair: Wen-Ben Jone, Ph.D.
Member: Carla Purdy, Ph.D.
Member: Ranganadha Vemuri, Ph.D.

ABSTRACT

In the current age of big data, artificial intelligence and machine learning technologies have gained much popularity. Due to the increasing demand for such applications, neural networks are being targeted toward hardware solutions. Owing to the shrinking feature size, the number of physical defects is on the rise, and this growing number of defects prevents designers from realizing the full potential of on-chip designs. The challenge now is not only to find solutions that balance high performance and energy efficiency, but also to achieve fault tolerance of the computational model. Neural computing, due to its inherent fault-tolerant capabilities, can provide promising solutions to this issue.

The primary focus of this thesis is to gain a deeper understanding of fault tolerance in neural network hardware. As part of this work, we present a comprehensive analysis of fault tolerance by exploring the effects of faults on two popular neural models: the multi-layer perceptron and the convolution neural network. We build the models on the conventional 64-bit floating-point representation and, in addition, explore the recent 8-bit integer quantized representation. A fault injector model is designed to inject stuck-at faults at random locations in the network. The networks are trained with the basic backpropagation algorithm and tested against the standard MNIST benchmark. For training purely quantized networks, we propose a novel backpropagation strategy. Depending on the performance degradation, the faulty networks are re-trained to recover their accuracy.

Results suggest that: (1) neural networks cannot be considered completely fault tolerant; (2) quantized neural networks are more susceptible to faults; (3) using a novel training algorithm for quantized networks, comparable accuracy is achieved; and (4) re-training is an effective strategy to improve fault tolerance. In this work, a 30% improvement is achieved in the quantized network, compared to a 6% improvement in the floating-point networks, using the basic backpropagation algorithm. We believe that more advanced re-training strategies can enhance fault tolerance to an even greater extent.

Copyright 2019, Abhinav Kurian George. This document is copyrighted material. Under copyright law, no parts of this document may be reproduced without the expressed permission of the author.

To my loving family and friends

Acknowledgments

I would like to extend my sincere thanks to my advisor, Dr. Wen-Ben Jone. He has always been a source of inspiration and motivated me to keep moving forward. I am extremely grateful to him for working along with me even though I had to move away from the university. Dr. Jone was always available for me even through his rough times, and I will always cherish working under him. His guidance and advice were instrumental in shaping the course of this thesis. I would like to thank my thesis committee members, Dr. Ranganadha Vemuri and Dr. Carla Purdy, for reviewing and providing feedback on this work. I really appreciate the kind gesture and the effort they have taken to go through this material. Finally, I would like to thank my family back in India, who have always supported me. They instilled confidence and faith in me when I started doubting my abilities. I would also like to extend special thanks to my friends, especially Sangeetha, for sitting with me for hours and reviewing my work. Without all your prayers and blessings this work would not have been possible. Thank you all.

Contents

Acknowledgments
Contents
List of Figures
List of Tables
List of Abbreviations
1 Introduction
2 Background
  2.1 Neural Network
    2.1.1 Multi-Layer Perceptrons (MLP)
    2.1.2 Convolution Neural Network
    2.1.3 Recurrent Neural Networks
  2.2 Backpropagation Algorithm
  2.3 Applications of ANNs
    2.3.1 Pattern Classification
    2.3.2 Clustering or Unsupervised Pattern Classification
    2.3.3 Prediction/Forecasting
    2.3.4 Content Addressable Memory
    2.3.5 Control
  2.4 Fault Tolerance
    2.4.1 Basic Terms Related to Fault Tolerance
    2.4.2 Fault Tolerance in Neural Networks
    2.4.3 TPU and Quantization
3 System Architecture
  3.1 Fault Injection
  3.2 Experimental Setup for Feed Forward Type Neural Network
    3.2.1 MNIST Benchmark
    3.2.2 The Feed Forward Neural Network
    3.2.3 Overall System for Feed Forward Re-training Experimentation
  3.3 System Design for Convolution Neural Network
    3.3.0.1 Fault Injection System for CNN
    3.3.0.2 Re-training of CNN
  3.4 Quantized Neural Network
    3.4.1 Quantization and De-quantization Scheme
    3.4.2 Quantized Feed Forward Network (Q-FNN)
    3.4.3 Fault Injection in Q-FNN
    3.4.4 Re-training Experiment for Quantized Feed Forward Network
4 Experimentation Results
  4.1 Metrics for Qualification
  4.2 Results for Floating Point Data-path
    4.2.1 Results for Feed Forward Neural Network
    4.2.2 Results for Convolution Neural Network
  4.3 Results for Quantized Data Path
    4.3.1 Results for Quantized Feed Forward Neural Network
      4.3.1.1 Results for Two Layer Quantized Feed Forward Neural Network
      4.3.1.2 Results for Five Layer Quantized Feed Forward Neural Network
5 Conclusion and Future Work
  5.1 Conclusion
  5.2 Future Work

List of Figures

2-1 Simple neuron in ANN [1]
2-2 Multi-layer perceptron (MLP) [2]
2-3 Convolution layer output with receptive field 3x3 [3]
2-4 Hyperparameters of convolution layer
2-5 Max pooling layer [3]
2-6 Recurrent neural network [4]
2-7 Feed forward network - backpropagation
2-8 Architecture of LeNet-5 [5]
2-9 Cause-effect relationship between fault, error and failure [6]
2-10 Classification of faults [6]
2-11 Block diagram of TPU [7, 8]
2-12 Systolic array of MXU [7, 8]
2-13 Performance-per-watt comparison [8]
2-14 Quantizing in TensorFlow [8]
3-1 A simple neuron with two inputs
3-2 Two-input neuron with possible fault sites
3-3 Perceptron neural network to identify the greater of two inputs
3-4 Fault injected in the network
3-5 Results of simulating the faulty system in Simulink
3-6 Custom neural network architecture
3-7 Feed forward neural network used for handwritten digit recognition [9]
3-8 Feed forward neural network with re-training
3-9 Flow chart describing the overall system
3-10 Convolution neural network
3-11 Convolution example - CNN
3-12 Max pooling example - CNN
3-13 Layer-1 convolution and max pooling - CNN
3-14 Layer-2 convolution - CNN
3-15 Layer-2 flatten - CNN
3-16 Layer-3 dense - CNN
3-17 CNN fault injector output - single stuck-at fault
3-18 Quantization function [10]
3-19 De-quantization function [10]
3-20 Quantized two-layered feed forward network architecture
3-21 Components of quantized two-layered feed forward network
3-22 Quantized five-layered feed forward network architecture
3-23 Components of quantized five-layered feed forward network
3-24 Fault injector output for two-layer QFNN
3-25 Re-training architecture for two-layered QFNN
3-26 Re-training architecture for five-layered QFNN
4-1 Maximum accuracy of floating FFN
4-2 Minimum accuracy of floating FFN
4-3 Average accuracy of floating FFN
4-4 Accuracy plot with one stuck-at fault
4-5 Average confidence plot for floating FFN
4-6 Minimum confidence plot for floating FFN
4-7 Accuracy improvement after re-training
4-8 Confidence improvement after re-training
4-9 Number of times re-trained for each fault
4-10 QoR plot - re-training with critical faults
4-11 QoC plot - re-training with critical faults
4-12 Number of times re-trained - QoR metrics
4-13 Maximum accuracy floating CNN
4-14 Minimum accuracy floating CNN
4-15 Average accuracy floating CNN
4-16 QoR improvement by re-training CNN
4-17 Worst-case QoR after re-training CNN
4-18 Number of times re-trained CNN
4-19 Maximum QoR for two-layer QFFN
4-20 Minimum QoR for two-layer QFFN
4-21 Average QoR for two-layer QFFN [...]
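The fault-injection experiments described in the abstract force stuck-at faults at randomly chosen locations in the network, for both the 64-bit floating-point and the 8-bit integer quantized data paths. As a rough illustration of the idea only (this is not the thesis's actual injector; the fault-site selection, bit position, and NumPy-based representation are assumptions), a stuck-at fault on a weight bit can be sketched as follows:

```python
# Minimal sketch of stuck-at fault injection into network weights (illustrative
# only; the thesis's own injector, fault-site model, and data layout may differ).
import numpy as np

def inject_stuck_at(weights, bit, stuck_value):
    """Force one bit of one randomly chosen weight to 0 or 1 (a stuck-at fault)."""
    flat = weights.ravel()                          # view into the weight tensor
    idx = np.random.randint(flat.size)              # random fault location
    if weights.dtype == np.float64:                 # 64-bit floating-point data path
        raw = flat[idx:idx + 1].view(np.uint64)
    elif weights.dtype == np.int8:                  # 8-bit quantized data path
        raw = flat[idx:idx + 1].view(np.uint8)
    else:
        raise TypeError("unsupported weight dtype")
    mask = np.array(1 << bit, dtype=raw.dtype)      # single-bit mask
    raw[:] = (raw | mask) if stuck_value else (raw & ~mask)
    return idx

w = np.random.randn(784, 128)                       # e.g. one MNIST hidden-layer weight matrix
site = inject_stuck_at(w, bit=62, stuck_value=1)    # stuck-at-1 on an exponent bit
print("fault injected at flat weight index", site)
```

Different bit positions and stuck-at values degrade accuracy to different degrees, which is what the re-training step described in the abstract then tries to recover.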
Recommended publications
  • In-Datacenter Performance Analysis of a Tensor Processing Unit
    In-Datacenter Performance Analysis of a Tensor Processing Unit
    Presented by Josh Fried

    Background: Machine Learning
    Neural networks:
    ● Multi-Layer Perceptrons
    ● Recurrent Neural Networks (mostly LSTMs)
    ● Convolutional Neural Networks
    Synapse: each edge, which has a weight. Neuron: each node, which sums the weighted inputs and applies a non-linear activation function over the sum. Propagating inputs through a layer of the NN is a matrix multiplication followed by an activation.

    Two phases:
    ● Training (offline): relaxed deadlines; large batches to amortize the cost of loading weights from DRAM; well suited to GPUs; usually uses floating point.
    ● Inference (online): strict deadlines (7-10 ms at Google for some workloads), which limit the possibility for batching; Facebook uses CPUs for inference (last class); can use lower-precision integers (faster/smaller/more efficient).

    ML Workloads @ Google: 90% of ML workload time at Google is spent on MLPs and LSTMs, despite the broader focus on CNNs: RankBrain (search), Inception (image classification), Google Translate, AlphaGo (and others).

    Background: Hardware Trends. End of Moore's Law and Dennard scaling:
    ● Moore: transistor density is doubling every two years.
    ● Dennard: power stays proportional to chip area as transistors shrink.
    Machine learning is causing a huge growth in demand for compute. In 2006, excess CPU capacity in datacenters was enough; by 2013, a projected 3 minutes per day per user of speech recognition would require doubling datacenter compute capacity!

    Google's answer: a custom ASIC. Goal: build a chip that improves cost-performance for NN inference. What are the main costs? Capital costs and operational costs (the power bill!).

    TPU (V1) design goals: a short design-deployment cycle (~15 months!); plugs into a PCIe slot on existing servers; accelerates matrix multiplication operations; uses 8-bit integer operations instead of floating point. How does the TPU work? CISC instructions, issued by the host.
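The 8-bit integer arithmetic highlighted above, which the thesis also uses for its quantized networks, relies on mapping floating-point values onto integers and back. Below is a minimal, generic affine quantize/de-quantize sketch; the exact scheme used by the TPU and by the thesis's Q-FNN may differ in details such as symmetric vs. asymmetric ranges and rounding, so treat it only as an illustration:

```python
# Generic affine int8 quantize/de-quantize sketch (illustrative only).
import numpy as np

def quantize(x):
    scale = (x.max() - x.min()) / 255.0            # map the float range onto 256 levels
    zero_point = np.round(-x.min() / scale)        # integer code that represents float 0.0
    q = np.clip(np.round(x / scale + zero_point), 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

x = np.random.randn(4, 8).astype(np.float32)       # example activations or weights
q, s, z = quantize(x)
x_hat = dequantize(q, s, z)
print("max quantization error:", np.abs(x - x_hat).max())
```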
  • Abstractions for Programming Graphics Processors in High-Level Programming Languages
    Abstracties voor het programmeren van grafische processoren in hoogniveau-programmeertalen (Abstractions for Programming Graphics Processors in High-Level Programming Languages)
    Tim Besard. Promotor: prof. dr. ir. B. De Sutter.
    Dissertation submitted to obtain the degree of Doctor of Engineering: Computer Science. Department of Electronics and Information Systems (chair: prof. dr. ir. K. De Bosschere), Faculty of Engineering and Architecture, academic year 2018-2019. ISBN 978-94-6355-244-8, NUR 980, legal deposit D/2019/10.500/52.

    Examination Committee:
    Prof. Filip De Turck, chair - Department of Information Technology, Faculty of Engineering and Architecture, Ghent University
    Prof. Koen De Bosschere, secretary - Department of Electronics and Information Systems, Faculty of Engineering and Architecture, Ghent University
    Prof. Bjorn De Sutter, supervisor - Department of Electronics and Information Systems, Faculty of Engineering and Architecture, Ghent University
    Prof. Jutho Haegeman - Department of Physics and Astronomy, Faculty of Sciences, Ghent University
    Prof. Jan Lemeire - Department of Electronics and Informatics, Faculty of Engineering, Vrije Universiteit Brussel
    Prof. Christophe Dubach - School of Informatics, College of Science & Engineering, The University of Edinburgh
    Prof. Alan Edelman - Computer Science & Artificial Intelligence Laboratory, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology

    Dankwoord (Acknowledgments): I did not really know what I was getting into when, back in 2012, I went for a chat about a PhD in the catacombs of the Technicum. Whether I had ever worked with LLVM. Many years later, I now work at a desk that actually gets daylight, and the end of this study is finally in sight. And about time too, so I am told, after 7 years.
  • P1360R0: Towards Machine Learning for C++: Study Group 19
    P1360R0: Towards Machine Learning for C++: Study Group 19
    Date: 2018-11-26 (post-SAN mailing), 10 AM ET
    Project: ISO JTC1/SC22/WG21: Programming Language C++
    Audience: SG19, WG21
    Authors: Michael Wong (Codeplay), Vincent Reverdy (University of Illinois at Urbana-Champaign, Paris Observatory), Robert Douglas (Epsilon), Emad Barsoum (Microsoft), Sarthak Pati (University of Pennsylvania), Peter Goldsborough (Facebook), Franke Seide (MS)
    Contributors' emails: [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected]
    Reply to: [email protected]

    Contents: Introduction; Motivation; Scope; Meeting frequency and means; Outreach to the ML/AI/Data Science community; Liaison with other groups; Future meetings; Conclusion; Acknowledgements; References.

    Introduction. This paper proposes a WG21 study group for Machine Learning with the goal of making Machine Learning a first-class citizen in ISO C++. It is the collaboration of a number of key industry, academic, and research groups, through several connections in the CPPCON BoF [reference], LLVM 2018 discussions, and the C++ San Diego meeting. The intention is to support such an SG and to describe its scope, in terms of potential work resulting in papers submitted for future C++ Standards, or collaboration with other SGs. We will also propose ongoing teleconferences, meeting frequency and locations, as well as outreach to ML data scientists and conferences, and liaison with other Machine Learning groups such as those at Khronos and ISO. As of the SAN meeting, this group has been officially created as SG19, and we will begin teleconferences immediately, after the US Thanksgiving and after NIPS.
  • AI Chips: What They Are and Why They Matter
    APRIL 2020. AI Chips: What They Are and Why They Matter - An AI Chips Reference. Authors: Saif M. Khan, Alexander Mann.

    Table of Contents: Introduction and Summary; The Laws of Chip Innovation; Transistor Shrinkage: Moore's Law; Efficiency and Speed Improvements; Increasing Transistor Density Unlocks Improved Designs for Efficiency and Speed; Transistor Design is Reaching Fundamental Size Limits; The Slowing of Moore's Law and the Decline of General-Purpose Chips; The Economies of Scale of General-Purpose Chips; Costs are Increasing Faster than the Semiconductor Market; The Semiconductor Industry's Growth Rate is Unlikely to Increase; Chip Improvements as Moore's Law Slows; Transistor Improvements Continue, but are Slowing; Improved Transistor Density Enables Specialization; The AI Chip Zoo; AI Chip Types; AI Chip Benchmarks; The Value of State-of-the-Art AI Chips; The Efficiency of State-of-the-Art AI Chips Translates into Cost-Effectiveness; Compute-Intensive AI Algorithms are Bottlenecked by Chip Costs and Speed; U.S. and Chinese AI Chips and Implications for National Competitiveness; Appendix A: Basics of Semiconductors and Chips; Appendix B: How AI Chips Work (Parallel Computing, Low-Precision Computing, Memory Optimization, Domain-Specific Languages); Appendix C: AI Chip Benchmarking Studies; Appendix D: Chip Economics Model (Chip Transistor Density, Design Costs, and Energy Costs; Foundry, Assembly, Test and Packaging Costs); Acknowledgments.

    Introduction and Summary. Artificial intelligence will play an important role in national and international security in the years to come.
  • Shinjae Yoo Computational Science Initiative Outline
    Lightning Overview of Machine Learning. Shinjae Yoo, Computational Science Initiative.

    Outline: Why is Machine Learning important? Machine Learning concepts. Big Data and Machine Learning. Potential research areas.

    Why is Machine Learning important? ML applications to physics and to biology.

    What is Machine Learning (ML)? One definition: "How can we build computer systems that automatically improve with experience, and what are the fundamental laws that govern all learning processes?" (Tom Mitchell, 2006). Statistics asks what conclusions can be inferred from data; ML additionally asks what architectures and algorithms can be used to effectively handle data, how multiple learning subtasks can be orchestrated in a larger system, and questions of computational tractability.

    Machine Learning components: algorithm, hardware + software, and data.

    Brief history of Machine Learning: a timeline slide running from ENIAC, the perceptron, and the first chess programs through speech recognition, HMMs, SVMs, Hadoop, and GPGPU learning, up to IBM Watson, neural networks winning and then surpassing humans on ImageNet, and AlphaGo.

    Supervised learning pipeline: training (labeling, preprocessing - data cleansing, feature engineering, normalization - then validation) followed by testing (preprocessing, then prediction); a minimal code sketch of this pipeline follows below. Unsupervised learning pipeline: preprocessing (data cleansing, feature engineering, normalization), then prediction.

    Types of learning: generative learning; discriminative learning; active learning (how to select training data?); multi-task learning; transfer learning; kernel learning; metric learning; dimensionality reduction; feature learning (Lee, et al.).
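The supervised pipeline outlined in the slides (preprocess, train, validate, test, predict) can be made concrete in a few lines; the library, dataset, and model below are illustrative choices of ours, not anything prescribed by the slides:

```python
# Minimal sketch of a supervised-learning pipeline using scikit-learn (assumed stack).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)                       # labeled data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)                  # hold out a test set

model = make_pipeline(StandardScaler(),                   # preprocessing / normalization
                      LogisticRegression(max_iter=1000))  # the learner
model.fit(X_train, y_train)                               # training
print("test accuracy:", model.score(X_test, y_test))      # testing / prediction
```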
  • Podracer Architectures for Scalable Reinforcement Learning
    2021-4-14. Podracer architectures for scalable Reinforcement Learning. Matteo Hessel*, Manuel Kroiss*, Aidan Clark, Iurii Kemaev, John Quan, Thomas Keck, Fabio Viola and Hado van Hasselt* (* equal contributions; all DeepMind).

    Supporting state-of-the-art AI research requires balancing rapid prototyping, ease of use, and quick iteration, with the ability to deploy experiments at a scale traditionally associated with production systems. Deep learning frameworks such as TensorFlow, PyTorch and JAX allow users to transparently make use of accelerators, such as TPUs and GPUs, to offload the more computationally intensive parts of training and inference in modern deep learning systems. Popular training pipelines that use these frameworks for deep learning typically focus on (un-)supervised learning. How to best train reinforcement learning (RL) agents at scale is still an active research area. In this report we argue that TPUs are particularly well suited for training RL agents in a scalable, efficient and reproducible way. Specifically we describe two architectures designed to make the best use of the resources available on a TPU Pod (a special configuration in a Google data center that features multiple TPU devices connected to each other by extremely low latency communication channels).

    Introduction. Reinforcement learning (RL) algorithms have been shown to be capable of performing well on challenging sequential decision problems, ranging from board games (Silver et al., 2016) to video games (Mnih et al., 2015) and continuous control (Lillicrap et al., 2016; Van Hasselt, 2012). Many of these advances were powered by the adoption of deep learning (Krizhevsky et al., 2012) in RL.
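The report's starting point is that frameworks such as TensorFlow, PyTorch and JAX let the same user code run on CPUs, GPUs or TPUs. A minimal JAX sketch of that kind of transparent offload (the layer, sizes, and use of jax.jit here are illustrative, not taken from the paper):

```python
# Minimal sketch of accelerator-transparent execution with JAX (assumed example).
import jax
import jax.numpy as jnp

@jax.jit                        # compile via XLA for whichever backend is present
def layer(params, x):           # one dense layer: matmul + bias + nonlinearity
    w, b = params
    return jax.nn.relu(x @ w + b)

key = jax.random.PRNGKey(0)
w = jax.random.normal(key, (256, 128))
b = jnp.zeros(128)
x = jax.random.normal(key, (32, 256))

print(jax.devices())            # lists the available CPU/GPU/TPU devices
y = layer((w, b), x)            # same code runs on whichever device is the default
print(y.shape)                  # (32, 128)
```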
  • Patent Claim Generation by Fine-Tuning Openai GPT-2
    Patent Claim Generation by Fine-Tuning OpenAI GPT-2
    Jieh-Sheng Lee and Jieh Hsiang, Department of Computer Science and Information Engineering, National Taiwan University, {d04922013, jhsiang}@ntu.edu.tw

    Abstract. In this work, we focus on fine-tuning an OpenAI GPT-2 pre-trained model for generating patent claims. GPT-2 has demonstrated impressive efficacy of pre-trained language models on various tasks, particularly coherent text generation. Patent claim language itself has rarely been explored in the past and poses a unique challenge. We are motivated to generate coherent patent claims automatically so that augmented inventing might be viable someday. In our implementation, we identified a unique language structure in patent claims and leveraged its implicit human annotations. We investigated the fine-tuning process by probing the first 100 steps and observing [...]

    [...] (Bidirectional Encoder Representations from Transformers) [4] has become the best practice for state-of-the-art results. GPT-2 is the successor to GPT. Although both GPT-2 and BERT are capable of text generation, Wang and Cho [5] found that GPT-2 generations are of better quality. In fact, GPT-2 is claimed to be so powerful that the risk of its malicious use is high. For this reason, OpenAI decided to keep its largest model (1.5B parameters) closed so that there is more time to discuss its ramifications. In this work, we generated patent claims by fine-tuning the released 345M medium version [6]. Overall we are impressed by how coherent and complicated the generated patent claims could be, although not all text is generated equally in terms of quality.
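The excerpt describes fine-tuning the released 345M "medium" GPT-2 checkpoint to generate patent claims. A minimal sketch of loading that checkpoint and sampling from it, assuming the Hugging Face transformers library as the stack (the prompt, decoding settings, and the omission of the actual fine-tuning loop are illustrative simplifications):

```python
# Minimal sketch of text generation with the 345M GPT-2 model (illustrative only).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")   # 345M-parameter model
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")

prompt = "1. A method for wireless communication, comprising:"  # claim-style prompt
input_ids = tokenizer.encode(prompt, return_tensors="pt")

output = model.generate(input_ids,
                        max_length=80,
                        do_sample=True,      # sampled decoding
                        top_k=50,
                        num_return_sequences=1)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```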
  • Automatic Full Compilation of Julia Programs and ML Models to Cloud
    AUTOMATIC FULL COMPILATION OF JULIA PROGRAMS AND ML MODELS TO CLOUD TPUS
    Keno Fischer, Elliot Saba

    ABSTRACT. Google's Cloud TPUs are a promising new hardware architecture for machine learning workloads. They have powered many of Google's milestone machine learning achievements in recent years. Google has now made TPUs available for general use on their cloud platform and, as of very recently, has opened them up further to allow use by non-TensorFlow frontends. We describe a method and implementation for offloading suitable sections of Julia programs to TPUs via this new API and the Google XLA compiler. Our method is able to completely fuse the forward pass of a VGG19 model expressed as a Julia program into a single TPU executable to be offloaded to the device. Our method composes well with existing compiler-based automatic differentiation techniques on Julia code, and we are thus able to also automatically obtain the VGG19 backwards pass and similarly offload it to the TPU. Targeting TPUs using our compiler, we are able to evaluate the VGG19 forward pass on a batch of 100 images in 0.23s, which compares favorably to the 52.4s required for the original model on the CPU. Our implementation is less than 1000 lines of Julia, with no TPU-specific changes made to the core Julia compiler or any other Julia packages.

    1 INTRODUCTION. One of the fundamental changes that has enabled the steady progress of machine learning techniques over the past several [...] accelerator available to the public via their cloud offering. Originally, the use of TPUs was restricted to applications written using Google's TensorFlow machine learning framework.
  • Interoperating Deep Learning Models with ONNX.Jl
    Interoperating Deep Learning models with ONNX.jl
    Ayush Shridhar (International Institute of Information Technology, Bhubaneswar, India), Phil Tomson (unaffiliated), and Mike Innes (Julia Computing, Inc)

    ABSTRACT. Flux [17] is a machine learning framework, written using the numerical computing language Julia [4]. The framework makes writing layers as simple as writing mathematical formulae, and its advanced AD, Zygote [11], applies automatic differentiation (AD) to calculate derivatives and train the model. It makes heavy use of Julia's language and compiler features to carry out code analysis and make optimisations. For example, Julia's GPU compilation support [3] can be used to JIT-compile custom GPU kernels for model layers [19]. Flux also supports a number of hardware options, from CPUs to GPUs and even TPUs via XLA.jl, which compiles Julia code to XLA: an advanced compiler for linear algebra that is capable of [...]

    [...] network, with complete support for control flow, recursion, closures and data structures. Implementing models in Flux.jl is as simple as writing regular Julia code: implementing a model is as simple as writing its formulae, and Zygote.jl will compute the derivatives seamlessly. Flux.jl also provides support for other hardware options using external packages such as CuArrays.jl and CLArrays.jl. CuArrays is written completely in Julia, making implementing GPU kernels very simple. Making a model run on a GPU can be done in a hassle-free manner: it is as simple as calling a few functions to transfer data to the GPU. Flux.jl also has support for running models on Google's Tensor Processing Unit (TPU). TPUs help in very fast linear algebra computation.
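ONNX.jl's role in the excerpt is to consume models exchanged in the ONNX format. A minimal sketch of producing such a file from another framework, assuming PyTorch on the exporting side (the toy model and file name are made up for illustration):

```python
# Minimal sketch of exporting a model to ONNX with PyTorch (assumed exporter).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
model.eval()

dummy_input = torch.randn(1, 784)           # an example input fixes the graph's shapes
torch.onnx.export(model, dummy_input, "mlp.onnx",
                  input_names=["input"], output_names=["logits"])
# The resulting mlp.onnx file can then be loaded by ONNX consumers such as ONNX.jl.
```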
  • Data Movement Is All You Need: a Case Study on Optimizing Transformers
    Data Movement Is All You Need: A Case Study on Optimizing Transformers
    Andrei Ivanov*, Nikoli Dryden*, Tal Ben-Nun, Shigang Li, Torsten Hoefler (ETH Zürich), [email protected], * equal contribution

    Abstract. Transformers have become widely used for language modeling and sequence learning tasks, and are one of the most important machine learning workloads today. Training one is a very compute-intensive task, often taking days or weeks, and significant attention has been given to optimizing transformers. Despite this, existing implementations do not efficiently utilize GPUs. We find that data movement is the key bottleneck when training. Due to Amdahl's Law and massive improvements in compute performance, training has now become memory-bound. Further, existing frameworks use suboptimal data layouts. Using these insights, we present a recipe for globally optimizing data movement in transformers. We reduce data movement by up to 22.91% and overall achieve a 1.30× performance improvement over state-of-the-art frameworks when training BERT. Our approach is applicable more broadly to optimizing deep [...]

    [...] challenges such as artificial general intelligence [27]. Thus, improving transformer performance has been the focus of numerous research and industrial groups. Significant attention has been given to optimizing transformers: local and fixed-window attention [28]–[32], more general structured sparsity [33], learned sparsity [34]–[36], and other algorithmic techniques [19], [37] improve the performance of transformers. Major hardware efforts, such as Tensor Cores and TPUs [38], have accelerated tensor operations like matrix-matrix multiplication (MMM), a core transformer operation. Despite this, existing implementations do not efficiently utilize GPUs. Even optimized implementations such as Megatron [18] report achieving only 30% of peak GPU flop/s.
  • Accelerators for Cyber-Physical Systems
    Accelerators for Cyber-Physical Systems. Sam Green, İhsan Çiçek and Çetin Kaya Koç, University of California, Santa Barbara.

    Introduction. Capabilities desired in CPS? Interact with the physical world; networked; potentially low-power; resistant to the environment; perform safety-critical tasks; cryptographically secure; autonomous; inexpensive.

    Benefits from Moore's Law are over. Since about 1970, one could safely assume the number of transistors per dollar would increase exponentially every 2 years: what can be done today for $X will be doable in 2 years for $X/2. Accelerators (aka ASICs) existed during this time, but CPU/microcontroller/DSP-based approaches dominated. This is no longer the case... [http://www.economist.com/node/21693710/sites/all/modules/custom/ec_essay]

    Other methods to increase performance per dollar: approximate computing, analog computing, neuromorphic computing.

    Approximate Computing. Selective approximation can bring disproportionate gains in efficiency: a 5% accuracy loss gives 50x less energy for k-means clustering and 26x less energy for neural network evaluation. [S. Mittal. A Survey of Techniques for Approximate Computing. ACM Comput. Surv., vol. 48, no. 4, p. 62:1-62:33, Mar. 2016.] [https://upload.wikimedia.org/wikipedia/commons/b/b7/3-bit_resolution_analog_comparison.png]

    Analog Computing. The physical world is a computational device, e.g. using KVL and KCL to approximate the activation function of an analog neuron: 4x speedup, 20x less energy, and 2.4% higher error across benchmarks vs. an approximate digital neuron. [St. Amant et al. General-purpose Code Acceleration with Limited-precision Analog Computation. ISCA, 2014]

    Neuromorphic Computing. Non-von Neumann, neuro-bio inspired architectures; the community sees biological circuits as the ultimate in efficiency. [https://upload.wikimedia.org/wikipedia/commons/4/4a/Action_potential.svg]

    Accelerators for Deep Learning Inference [A.
  • A Deep Neural Network Using the Posit Number System
    Deep Positron: A Deep Neural Network Using the Posit Number System
    Zachariah Carmichael, Hamed F. Langroudi, Char Khazanov, Jeffrey Lillie, John L. Gustafson, Dhireesha Kudithipudi (Neuromorphic AI Lab, Rochester Institute of Technology, NY, USA; National University of Singapore, Singapore)

    Abstract. The recent surge of interest in Deep Neural Networks (DNNs) has led to increasingly complex networks that tax computational and memory resources. Many DNNs presently use 16-bit or 32-bit floating point operations. Significant performance and power gains can be obtained when DNN accelerators support low-precision numerical formats. Despite considerable research, there is still a knowledge gap on how low-precision operations can be realized for both DNN training and inference. In this work, we propose a DNN architecture, Deep Positron, with posit numerical format operating successfully at ≤8 bits for inference. We propose a precision-adaptable FPGA soft core for exact multiply-and-accumulate for uniform comparison across three numerical formats: fixed, floating-point and posit. Preliminary results demonstrate that 8-bit posit has better accuracy than 8-bit fixed or floating-point for three different low-dimensional datasets. Moreover, the accuracy is comparable to 32-bit floating-point on a Xilinx Virtex-7 FPGA device. The trade-offs between DNN performance and hardware resources, i.e. latency, power, on the end-device (e.g. [...]

    Fig. 1: An overview of a simple Deep Positron architecture embedded with the exact multiply-and-accumulate blocks (EMACs): an input layer (in0-in3), two hidden layers of EMACs with ReLU, and an identity output layer.