Mobile/Embedded DNN and AI SoCs

Total Pages: 16

File Type: PDF, Size: 1020 KB

Mobile/Embedded DNN and AI SoCs
Hoi-Jun Yoo, KAIST

Outline
1. Deep Neural Network Processor
   – Mobile DNN Applications
   – Basic CNN Architectures
2. M/E-DNN: Mobile/Embedded Deep Neural Network
   – Requirements of M/E-DNN
   – M/E-DNN Design & Examples
3. SoC Applications of M/E-DNN
   – Hybrid CIS, CNNP Processor and DNPU Processor
   – Hybrid Intelligent Systems: AR Processor, UI/UX Processor and ADAS Processor

Types of AI SoCs
(figure: general intelligence on the vertical axis from low to high, real-time operation requirement on the horizontal axis from low to high; control and control models flow down from the cloud, data and learned models flow up from the edge)
• Cloud – general AI, scalability, global data sharing
  – Control based on overall conditions
  – Learning with data collected from many edge machines
  – High maintenance and cooling cost
• Stand-Alone AI (Edge)
  – Control based on individual situation
  – Real-time inference and learning
• Embedded AI (Mobile) – specific AI, UI/UX, limited data sharing
  – Low-power operation
  – Real-time autonomous learning

Emerging Mobile Applications
• The needs for embedded vision & AI: security & surveillance, visual perception & analytics, ADAS & autonomous cars, augmented reality, drones

Deep Neural Networks
• MLP (Multi-Layer Perceptron): fully connected input–hidden–output layers; major application: speech recognition; 3~10 layers
• CNN (Convolutional): convolution, pooling and fully connected layers; major application: vision processing; up to ~100 layers
• RNN (Recurrent): feedback path with internal state for sequential data; major applications: speech recognition, action recognition; 3~5 layers

Mobile/Embedded DNN Requirements
• Cloud-based DNN: cloud intelligence; general AI (very deep network); communication latency; high number precision; high power / cooling cost; large memory capacity
• Mobile/Embedded DNN: on-device intelligence; application-specific AI (specific DNN network); real-time operation; reduced number precision; low-power consumption; efficient memory architecture

Mobile/Embedded DNN Architecture — [1] Kwanho Kim et al., ISSCC 2008; [2] Seungjin Lee et al., IEEE TNN 2011
• 2D cell + shared PE digital CNN
  – 2D cell array to remove emulation overhead
  – Programmable PEs for flexible operation
• Cells: storage and communication (cell-to-cell communication forms a 2D shift register)
• PEs: processing of cell data over a cell-to-PE read bus and a PE-to-cell write bus (e.g., a 4x4 cell block C00–C33 served by PE1–PE4, one PE per row)
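To make the cell/PE split concrete, the following minimal Python sketch (written for this summary, not taken from [1] or [2]) models how a 2D shift-register cell array plus a few shared PEs can evaluate a convolution template: cell-to-cell shifts move the image, and each shift step costs the PEs one multiply-accumulate over the whole array. The array size and template below are illustrative assumptions.

```python
# Functional sketch of the "2D cell array + shared PE" dataflow (not the chip's RTL):
# cells hold pixel state and move data as a 2D shift register, while shared PEs
# apply one template coefficient to the whole array per shift step.
import numpy as np

def shift2d(cells, dy, dx):
    """Move cell contents by (dy, dx) with zero fill, like neighbor-to-neighbor
    cell communication in the 2D shift-register array."""
    out = np.zeros_like(cells)
    h, w = cells.shape
    ys = slice(max(dy, 0), h + min(dy, 0))
    xs = slice(max(dx, 0), w + min(dx, 0))
    yd = slice(max(-dy, 0), h + min(-dy, 0))
    xd = slice(max(-dx, 0), w + min(-dx, 0))
    out[yd, xd] = cells[ys, xs]
    return out

def cellular_convolution(cells, template):
    """Accumulate template[ky, kx] * (cells shifted by (ky, kx)); one
    multiply-accumulate over the array per shift step, done row-block by
    row-block by the shared PEs."""
    acc = np.zeros_like(cells)
    k = template.shape[0] // 2
    for ky in range(-k, k + 1):
        for kx in range(-k, k + 1):
            acc += template[ky + k, kx + k] * shift2d(cells, ky, kx)
    return acc

if __name__ == "__main__":
    image = np.random.rand(80, 60).astype(np.float32)   # 80x60 cells, as in [8]
    laplacian = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=np.float32)
    result = cellular_convolution(image, laplacian)
    print(result.shape)  # (80, 60): one output per cell
```

The point of this dataflow is that every cell sees its neighborhood through local shifts, so the shared PEs never need random access into an external frame buffer.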
SRAM-Based Architecture — [8] Kwanho Kim et al., ISSCC 2008
(chip floorplan: four 20x60 cell arrays interleaved with two 60-VPE arrays, plus a controller and a 2 KB instruction memory)
• Digital cellular neural network
• 80x60 shift-register cell array
• 120 VPEs shared by the cells

SRAM Memory-Based NN Cell — [8] Kwanho Kim et al., ISSCC 2008
(cell circuit: a 6T-SRAM register file with word lines WL0–WL3, an NMOS-only dynamic-logic mux/demux on two read buses, and a shift register with north/south/east/west neighbor enables plus load and shift controls)
• Dynamic logic-based shift register → small cell area

Chip Summary and Applications — [8] Kwanho Kim et al., ISSCC 2008; [10] Seungjin Lee et al., ISSCC 2010
(die photo: cell arrays 1–4, PE arrays 1–2, controller)
• Process technology: 0.13 μm 8M CMOS
• Area: 4.5 mm²
• Number of cells: 4800
• Number of PEs: 120
• Operating frequency: 200 MHz
• Peak performance: 24 GOPS (22 GOPS sustained)
• Active power: 84 mW
• Average power (15 fps): 6 mW

KAIST Approach: Multi-Core Processor + M/E-DNN
• Multi-core SoC as the "left brain" – hard computing: CPU/DSP, special-purpose HW, multiple cores → fast, accurate computing performance
• M/E-DNN as the "right brain" – soft computing: deep neural nets, fuzzy logic, learning/evolution → suitable for solving imprecise, uncertain, and ambiguous problems
• Intelligent computing system: multi-core digital processors + DNN soft-computing hardware, connected through a Network-on-Chip (NoC)

Low-Power Face Recognition SoC — [1] Kyeongryeol Bong et al., ISSCC 2017
• Conventional face detector: face detection by a vision processor
• Hybrid face detector: face detection by the CMOS image sensor itself, combining analog & digital face detectors
  (CIS block diagram: 320x240 pixel array, column amplifier array, ADC array, 80x20 analog memory, analog Haar-like filtering circuit, integral-image and digital Haar-like filtering units, controller, feeding an ultra-low-power CNN processor)

Chip & System Implementation — [1] Kyeongryeol Bong et al., ISSCC 2017
• Always-on face detector (FD): 3.3 mm x 3.36 mm CIS in 65 nm
  – 320x240 pixel array, Haar-like filtering circuit & analog memory
• Ultra-low-power CNN processor: 4 mm x 4 mm CNNP in 65 nm
  – 4x4 PE array with local T-SRAM
• The most accurate face-recognition SoC: 97% achieved using the CNN on the LFW dataset
• 0.62 mW face-recognition system including imaging
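The division of labor between the always-on face detector and the CNNP can be illustrated with a small Python sketch. It is a behavioral stand-in under stated assumptions (a single hand-picked Haar-like feature, an arbitrary wake-up threshold, and a placeholder CNN stub), not the chip's actual pipeline; its only purpose is to show why gating the CNN with a cheap integral-image test keeps average power close to the detector's level.

```python
# Sketch of the hybrid cascade behind the face-recognition SoC [1]: an always-on,
# very cheap detector (standing in for the analog Haar-like filtering inside the
# image sensor) gates a much more expensive CNN stage (standing in for the CNNP).
import numpy as np

def integral_image(frame):
    """Summed-area table, so each Haar-like feature costs a few adds."""
    return frame.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, y0, x0, y1, x1):
    """Sum of frame[y0:y1, x0:x1] recovered from the integral image."""
    total = ii[y1 - 1, x1 - 1]
    if y0 > 0: total -= ii[y0 - 1, x1 - 1]
    if x0 > 0: total -= ii[y1 - 1, x0 - 1]
    if y0 > 0 and x0 > 0: total += ii[y0 - 1, x0 - 1]
    return total

def haar_like_face_score(frame):
    """One crude two-rectangle feature: eye band darker than cheek band."""
    ii = integral_image(frame.astype(np.float64))
    h, w = frame.shape
    upper = box_sum(ii, h // 4, w // 4, h // 2, 3 * w // 4)
    lower = box_sum(ii, h // 2, w // 4, 3 * h // 4, 3 * w // 4)
    return lower - upper  # bright cheeks minus dark eyes

def cnn_face_recognition(frame):
    """Placeholder for CNNP inference; in the real system this is the
    expensive step that dominates power whenever it runs."""
    return "identity-or-unknown"

def hybrid_pipeline(frame, wake_threshold=500.0):  # threshold is illustrative
    if haar_like_face_score(frame) < wake_threshold:
        return None                      # CNN stays idle: near-zero power
    return cnn_face_recognition(frame)   # woken up only on candidate faces

if __name__ == "__main__":
    frame = (np.random.rand(240, 320) * 255).astype(np.uint8)  # QVGA, like the 320x240 CIS
    print(hybrid_pipeline(frame))
```

In the real SoC the first stage runs in the analog domain inside the image sensor, which is what pushes the complete recognition system, imaging included, down to 0.62 mW.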
CNN + RNN Deep Neural Network — [7] Dongjoo Shin et al., ISSCC 2017
• CNN: visual feature extraction and recognition (face recognition, image classification, …)
• RNN: sequential-data recognition and generation (translation, speech recognition, …)
• CNN + RNN: CNN-extracted features become the RNN input – the CNN extracts visual features from a single frame, the RNN processes the temporal data across multiple frames (e.g., "Look Around")
• Previous works were optimized either for convolution layers only ([3], [6]) or for FC layers and RNN only ([5])
  ([3] B. Moons, SOVC 2016; [5] S. Han, ISCA 2016; [6] Y. Chen, ISSCC 2016)

Processor to Support Both CNN & RNN — [7] Dongjoo Shin et al., ISSCC 2017
• Heterogeneous architecture with an external interface, data memory, and a top RISC controller with a custom instruction-set-based instruction controller and instruction memory
• Convolution processor (convolutional layers): input buffer, weight buffer, convolution clusters 0–3, aggregation core
• FC/RNN-LSTM processor (fully connected layers and recurrent neural networks): quantization-table-based matrix multiplier, accumulation registers, activation-function LUT

DNPU: Chip Implementation — [7] Dongjoo Shin et al., ISSCC 2017
(die photo: convolution clusters 0–3 with aggregation core (CLP), FC-LSTM processor (FCRLP), pre-processing core, top controller, external gateways)
• Process: 65 nm 1P8M CMOS; die area: 4 mm x 4 mm (16 mm²)
• SRAM: 280 KB (CLP) + 10 KB (FCRLP)
• Supply voltage: 0.77 V ~ 1.1 V; frequency: 50 ~ 200 MHz
• Power: 34.6 mW at 50 MHz / 0.77 V; 279 mW at 200 MHz / 1.1 V
• Energy efficiency: 8.1 TOPS/W (4-bit CLP word length) and 2.1 TOPS/W (16-bit) at 50 MHz / 0.77 V; 3.9 TOPS/W (4-bit) and 1.0 TOPS/W (16-bit) at 200 MHz / 1.1 V

Pet Robot Demonstration Video — [7] Dongjoo Shin et al., ISSCC 2017

Requirements for AR: High-Throughput Recognition
(example: recognizing a car in view – "BEEP! Mini Cooper, 1600 cc, $35,000", with candidate scores Cooper 98%, Peugeot 45%, Audi TT 11%)
• Marker-based AR: 2D barcodes / 2D markers – putting markers on everything is an impossible solution in the real world
• Markerless AR: natural feature extraction – >10x higher computation, making real-time (30 fps) operation difficult

Markerless AR Process

Requirements for AR in HMD: Energy
• Hands-free head-mounted display (HMD): smaller battery size, yet complicated markerless-AR functionality
  (device trend: [a] HP iPAQ, [b] Apple iPhone 5, [c] Google Project Glass)

SoC Architecture of K-Glass — [12] Gyeonghoon Kim et al., ISSCC 2014

BONE-AR: Chip Implementation — [12] Gyeonghoon Kim et al., ISSCC 2014
(block diagram: visual attention processor (VAP, 5 cores), keypoint detection processor (KDP, 8 cores), scale-space processor (SSP, 2 cores), rendering accelerator (RA, 2 cores), descriptor generation processor (DGP, 10 cores), keypoint matching accelerator (KMA, 4 cores), pose estimation processor (PEP, 2 cores), neural network accelerator with mixed-mode SVM)
• 4 mm x 8 mm in 65 nm CMOS technology
• 381 mW average / 778 mW peak power consumption
• 1.22 TOPS peak performance
• 1.57 TOPS/W energy efficiency for 720p video

Demo: BONE-AR

Gaze User Interface
• Step 1: "point" the cursor by gaze (gaze point)
• Goal: a low-power (<50 mW [a]) always-on gaze UI – previous work [b]: 100 mW
  [a] Power consumption of a wireless optical mouse reported in "Mouse Battery Life Comparison Test Report", Microsoft Corp., 2004
  [b] D.
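Returning to the DNPU's FC/RNN-LSTM datapath above, the Python sketch below shows one way a quantization-table-based matrix multiplier can work. It is an assumption-level illustration of the idea (4-bit weight indices into a 16-entry codebook, with products precomputed once per input activation), not the DNPU's actual microarchitecture.

```python
# Sketch of quantization-table-based matrix-vector multiplication: weights are
# 4-bit indices into a 16-entry codebook, the 16 possible products are built
# once per input activation, and every multiply becomes a lookup plus an add.
import numpy as np

def quantize_weights(w, bits=4):
    """Quantize weights to 2**bits uniform levels; return indices and codebook."""
    levels = 2 ** bits
    lo, hi = w.min(), w.max()
    codebook = np.linspace(lo, hi, levels)
    idx = np.round((w - lo) / (hi - lo) * (levels - 1)).astype(np.uint8)
    return idx, codebook

def qtable_matvec(weight_idx, codebook, x):
    """y = W @ x with W stored as 4-bit indices. For each input x[j] a 16-entry
    product table codebook * x[j] is built once; the inner accumulation then
    only performs lookups and adds."""
    rows, cols = weight_idx.shape
    y = np.zeros(rows, dtype=np.float64)
    for j in range(cols):
        product_table = codebook * x[j]          # 16 multiplies, reused by all rows
        y += product_table[weight_idx[:, j]]     # per-row cost: lookup + add only
    return y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.standard_normal((128, 256))
    x = rng.standard_normal(256)
    idx, cb = quantize_weights(W)
    approx = qtable_matvec(idx, cb, x)
    print(np.max(np.abs(approx - W @ x)))  # small quantization error
```

Because the 16-entry product table is shared by every row of the weight matrix, the per-MAC cost collapses from a full multiply to a lookup and an add, which is the kind of saving that underlies the 4-bit mode's 8.1 TOPS/W figure reported for the DNPU.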